Sample records for hydrol process

  1. Conversion of SPORL pretreated Douglas fir forest residues into microbial lipids with oleaginous yeasts

    USDA-ARS's Scientific Manuscript database

    Douglas fir is the dominant commercial tree grown in the United States. In this study Douglas fir residue was converted to single cell oils using oleaginous yeasts. Monosaccharides were extracted from the woody biomass by pretreating with sulfite and dilute sulfuric acid (SPORL process) and hydrol...

  2. Hydrolates from lavender (Lavandula angustifolia)--their chemical composition as well as aromatic, antimicrobial and antioxidant properties.

    PubMed

    Prusinowska, Renata; Śmigielski, Krzysztof; Stobiecka, Agnieszka; Kunicka-Styczyńska, Alina

    2016-01-01

    It was shown that the method for obtaining hydrolates from lavender (Lavandula angustifolia) influences the content of active compounds and the aromatic, antimicrobial and antioxidant properties of the hydrolates. The content of volatile organic compounds ranged from 9.12 to 97.23 mg/100 mL of hydrolate. Lavender hydrolate variants showed low antimicrobial activity (from 0% to 0.05%). The radical scavenging activity of DPPH was from 3.6 ± 0.5% to 3.8 ± 0.6% and oxygen radical absorbance capacity (ORAC(FL)) results were from 0 to 266 μM Trolox equivalent, depending on the hydrolate variant.

  3. Production of mullite fibers

    NASA Technical Reports Server (NTRS)

    Tucker, Dennis S. (Inventor); Sparks, J. Scott (Inventor)

    1991-01-01

    Disclosed here is a process for making mullite fibers wherein a hydrolyzable silicon compound and an aluminum compound in the form of a difunctional aluminum chelate are hydrolyzed to form sols using water and an alcohol with a catalytic amount of hydrochloric acid. The sols are mixed in a molar ratio of aluminum to silicon of 3 to 1 and, under polycondensation conditions, a fibrous gel is formed. From this gel the mullite fibers can be produced.

  4. R-HyMOD: an R-package for the hydrological model HyMOD

    NASA Astrophysics Data System (ADS)

    Baratti, Emanuele; Montanari, Alberto

    2015-04-01

    A software code for the implementation of the HyMOD hydrological model [1] is presented. HyMOD is a conceptual lumped rainfall-runoff model based on the probability-distributed soil storage capacity principle introduced by R. J. Moore (1985) [2]. The general idea behind this model is to describe the spatial variability of some process parameters, for instance the soil structure or the water storage capacities, through probability distribution functions. In HyMOD, the rainfall-runoff process is represented through a nonlinear tank connected with three identical linear tanks in parallel representing the surface flow and a slow-flow tank representing groundwater flow. The model requires the optimization of five parameters: Cmax (the maximum storage capacity within the watershed), β (the degree of spatial variability of the soil moisture capacity within the watershed), α (a factor for partitioning the flow between the two series of tanks) and the two residence time parameters of the quick-flow and slow-flow tanks, kquick and kslow, respectively. Given its relative simplicity and robustness, the model is widely used in the literature. The input data consist of precipitation and potential evapotranspiration at the given time scale. The R-HyMOD package is composed of a 'canonical' R function implementing HyMOD and a fast FORTRAN implementation. The first can be easily modified and used, for instance, for educational purposes; the second combines the user-friendly R interface with a fast processing unit. [1] Boyle D.P. (2000), Multicriteria calibration of hydrological models, Ph.D. dissertation, Dep. of Hydrol. and Water Resour., Univ. of Arizona, Tucson. [2] Moore, R.J. (1985), The probability-distributed principle and runoff production at point and basin scale, Hydrol. Sci. J., 30(2), 273-297.
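
    The abstract fixes the model structure completely enough to sketch it. The following minimal Python sketch (not the R-HyMOD code itself) implements one common formulation of the probability-distributed store after Moore (1985), with the five parameters named above; state initialisations and variable names are illustrative assumptions.

    ```python
    import numpy as np

    def hymod(P, PET, cmax, beta, alpha, kq, ks):
        """Minimal HyMOD sketch: a Pareto-distributed soil store feeds, via an
        alpha split, a cascade of three identical quick linear tanks in
        parallel with one slow linear tank."""
        smax = cmax / (1.0 + beta)      # maximum basin-average soil storage
        s, xs = 0.0, 0.0                # soil store and slow-tank states
        xq = np.zeros(3)                # quick-tank states
        q = np.zeros(len(P))
        for t in range(len(P)):
            # critical capacity c corresponding to the current storage s
            c = cmax * (1.0 - (1.0 - s / smax) ** (1.0 / (1.0 + beta)))
            er1 = max(P[t] + c - cmax, 0.0)          # excess while store is full
            p_in = P[t] - er1
            c_new = min(c + p_in, cmax)
            s_new = smax * (1.0 - (1.0 - c_new / cmax) ** (1.0 + beta))
            er2 = max(p_in - (s_new - s), 0.0)       # runoff from partial saturation
            s = max(s_new - PET[t] * s_new / smax, 0.0)  # ET scaled by wetness
            # split effective rainfall: alpha to quick cascade, rest to slow tank
            uq, us = alpha * (er1 + er2), (1.0 - alpha) * (er1 + er2)
            for i in range(3):
                xq[i] += uq
                uq = kq * xq[i]                      # outflow of quick tank i
                xq[i] -= uq
            xs += us
            qs = ks * xs
            xs -= qs
            q[t] = uq + qs                           # total streamflow
        return q
    ```

    In this sketch kquick (kq) and kslow (ks) act as dimensionless per-time-step release fractions of the quick and slow tanks, matching their role as residence-time parameters.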

  5. Global Palaeoclimate Signals in Climate in groundwater: the past is the key to the future

    NASA Astrophysics Data System (ADS)

    van der Ploeg, M. J.; Cendon, D. I.; Haldorsen, S.; Chen, J.; Gurdak, J. J.; Tujchneider, O.; Vaikmae, R.; Purtschert, R.; Chkir Ben Jemâa, N.

    2013-12-01

    The impact of climate variability and groundwater extraction on the resilience of groundwater systems is still not fully understood (Green et al. 2011). Groundwater stores environmental and climatic information acquired during the recharge process, which integrates different signals, like recharge temperature, origin of precipitation, and dissolved constituents. This information can be used to estimate palaeo recharge temperatures, palaeo atmospheric dynamics and the residence time of groundwater within the aquifer (Stute et al. 1995, Clark and Fritz 1997, Collon et al. 2000, Edmunds et al. 2003, Cartwright et al. 2007, Kreuzer et al. 2009, Currell et al. 2010, Raidla et al. 2012, Salem et al. 2012). The climatic signals incorporated by groundwater during recharge have the potential to provide a regionally integrated proxy of climatic variations at the time of recharge. Groundwater palaeoclimate information is affected by diffusion-dispersion processes (Davison and Airey, 1982) and/or water-rock interaction (Clark and Fritz, 1997), making palaeoclimate information deduced from groundwater an inherently low resolution record. While the signal resolution can be limited, recharge follows major climatic events and, more importantly, shows how those aquifers and their associated recharge vary under climatic forcing. While the characterization of groundwater resources, surface-groundwater interactions and their link to the global water cycle are an important focus, little attention has been given to groundwater as a potential record of past climate variations. A groundwater system's history is vital to forecast its vulnerability under future and potentially adverse climatic changes. By processing groundwater information from vast regions and different continents, recharge and palaeoclimate can be correlated at a global scale. To address the identified lack of palaeoclimatic data available from groundwater studies, a global collaboration called Groundwater@Global Palaeoclimate Signals (www.gw-gps.com) was set up in 2011 and already has more than 70 participants from 5 continents. Since 2012, G@GPS has received seed funding from IGCP, INQUA and UNESCO-GRAPHIC to support meetings. This collaboration targets groundwater basins on five continents (Africa, America, Asia, Australia, Europe) containing vast groundwater resources on which tens of millions of people are estimated to depend. We will present G@GPS, show examples from groundwater basins, and discuss possibilities to integrate groundwater information from these basins. References Cartwright, I. et al. 2007. J. Hydrol. 332: 69-92. Clark, I. and P. Fritz. 1997. Lewis Publishers. Collon, P. et al. 2000. Earth and Planetary Science Letters 182: 103-113. Currell, M. J. et al. 2010. J. Hydrol. 385: 216-229. Davison, M. R. and P. L. Airey. 1982. J. Hydrol. 58: 131-147. Edmunds, W. M. et al. 2003. Applied Geochemistry 18: 805-822. Green, T.R. et al. 2011. J. Hydrol. 405: 532-560. Kreuzer, A. M. et al. 2009. Chemical Geology 259: 168-180. Raidla, V. et al. 2012. Applied Geochemistry 27(10): 2042-2052. Salem, S.B.H. et al. 2012. Environmental Earth Sciences 66: 1099-1110. Stute, M. et al. 1995. Science 269: 379-383.

  6. L-Lactic acid production by combined utilization of agricultural bioresources as renewable and economical substrates through batch and repeated-batch fermentation of Enterococcus faecalis RKY1.

    PubMed

    Reddy, Lebaka Veeranjaneya; Kim, Young-Min; Yun, Jong-Sun; Ryu, Hwa-Won; Wee, Young-Jung

    2016-06-01

    Enterococcus faecalis RKY1 was used to produce L-lactic acid from hydrol, soybean curd residues (SCR), and malt. Hydrol was efficiently metabolized to L-lactic acid with an optical purity of >97.5%, even though hydrol contained mixed sugars such as glucose, maltose, maltotriose, and maltodextrin. Combined utilization of hydrol, SCR, and malt was enough to sustain lactic acid fermentation by E. faecalis RKY1. In order to reduce the amount of nitrogen sources and product inhibition, cell-recycle repeated-batch fermentation was employed, whereby a high cell mass (26.3 g/L) was obtained. Lactic acid productivity was improved by removal of lactic acid from the fermentation broth by membrane filtration and by the linearly increased cell density. When a total of 10 repeated-batch fermentations were carried out using 100 g/L hydrol, 150 g/L SCR hydrolyzate, and 20 g/L malt hydrolyzate as the main nutrients, lactic acid productivity increased significantly from 3.20 g/L/h to 6.37 g/L/h. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. The Isolation of Nanofibre Cellulose from Oil Palm Empty Fruit Bunch Via Steam Explosion and Hydrolysis with HCl 10%

    NASA Astrophysics Data System (ADS)

    Gea, S.; Zulfahmi, Z.; Yunus, D.; Andriayani, A.; Hutapea, Y. A.

    2018-03-01

    Cellulose nanofibrils were obtained from oil palm empty fruit bunch using steam explosion followed by hydrolysis with a 10% HCl solution. Steam explosion coupled with acid hydrolysis pretreatment of the oil palm empty fruit bunch was very effective in the depolymerization and defibrillation of the fibre, producing fibers of nanoscale dimensions. Structural analysis of the steam-exploded fibers was performed by Fourier Transform Infrared (FT-IR) spectroscopy. The thermal stability of the cellulose was measured by thermogravimetric analysis (TGA), and fiber dimensions were evaluated with the image analysis software ImageJ. Characterization of the fibers by TEM and SEM showed that the fiber diameter decreases with the mechanical-chemical treatment; the final nanofibril size was 20-30 nm. FT-IR and TGA data confirmed the removal of hemicellulose and lignin during the chemical treatment process.

  8. Fuels from renewable resources

    NASA Astrophysics Data System (ADS)

    Hoffmann, L.; Schnell, C.; Gieseler, G.

    Consideration is given to fuel substitution based on regenerative plants. Methanol can be produced from regenerative plants by gasification followed by the catalytic hydrogenation of carbon oxides. Ethanol can be used as a replacement fuel in gasoline and diesel engines, and its high knock rating allows it to be mixed with lead-free gasoline. Due to the depletion of oil and gas reserves, fermentation alcohol is being considered. The raw materials for the fermentation process can potentially include: (1) sugar (such as beet or cane sugar); (2) starch (from potatoes or grain); and (3) cellulose, which can be hydrolyzed into glucose for fermentation.

  9. [Analysis of primary elemental speciation distribution in mungbean during enzymatic hydrolization].

    PubMed

    Li, Ji-Hua; Huang, Mao-Fang; Zhu, De-Ming; Zheng, Wei-Wan; Zhong, Ye-Jun

    2009-03-01

    In the present paper, the contents of the trace elements copper, zinc, manganese and iron in mungbean, and their primary speciation distribution during enzymatic hydrolysis, were investigated with an ICP-AES OPTIMA 5300DV plasma emission spectrometer. The trace elements were separated into two forms, i.e. a dissolvable form and a particulate form, by a cellulose membrane with 0.45 μm pore diameter. All the samples were digested by strong acid (perchloric acid and nitric acid in a 1:4 ratio). The parameters of the primary speciations of the four elements were calculated and discussed. The results showed: (1) The contents of copper, zinc, manganese and iron in mungbean were 12.77, 31.26, 18.14 and 69.38 μg g(-1) (of dry matter), respectively. Different treatments resulted in different elemental distributions in the product, indicating that more attention should be paid to the trace-element pattern when producing mungbean beverage with different processes. (2) The extraction rates of copper, zinc, manganese and iron were 68.84%, 51.84%, 63.97% and 30.40% with the enzymatic treatments and 36.22%, 17.58%, 7.85% and 22.99% with the boiling treatment, respectively. Both treatments left substantial element fractions unextracted, showing that thorough enzymatic hydrolysis is necessary in mungbean beverage processing as far as trace-element utilization is concerned. (3) Amylase, protease and cellulase showed different extraction effectiveness for the four trace elements; generally, protease exhibited the highest extraction efficiency. All four trace elements were mostly in the dissolvable form in all hydrolysates and in the soup. (4) Relative standard deviations and recovery yields were within 0.12%-0.90% (n = 11) and 98.6%-101.4%, respectively. The analysis method in this paper proved to be accurate.

  10. Data-based mechanistic modeling of dissolved organic carbon load through storms using continuous 15-minute resolution observations within UK upland watersheds

    NASA Astrophysics Data System (ADS)

    Jones, T.; Chappell, N. A.

    2013-12-01

    Few watershed modeling studies have addressed DOC dynamics through storm hydrographs (notable exceptions include Boyer et al., 1997 Hydrol Process; Jutras et al., 2011 Ecol Model; Xu et al., 2012 Water Resour Res). In part this has been a consequence of an incomplete understanding of the biogeochemical processes leading to DOC export to streams (Neff & Asner, 2001, Ecosystems) & an insufficient frequency of DOC monitoring to capture sometimes complex time-varying relationships between DOC & storm hydrographs (Kirchner et al., 2004, Hydrol Process). We present the results of a new & ongoing UK study that integrates two components - 1/ New observations of DOC concentrations (& derived load) continuously monitored at 15 minute intervals through multiple seasons for replicated watersheds; & 2/ A dynamic modeling technique that is able to quantify storage-decay effects, plus hysteretic, nonlinear, lagged & non-stationary relationships between DOC & controlling variables (including rainfall, streamflow, temperature & specific biogeochemical variables e.g., pH, nitrate). DOC concentration is being monitored continuously using the latest generation of UV spectrophotometers (i.e. S::CAN spectro::lysers) with in situ calibrations to laboratory analyzed DOC. The controlling variables are recorded simultaneously at the same stream stations. The watersheds selected for study are among the most intensively studied basins in the UK uplands, namely the Plynlimon & Llyn Brianne experimental basins. All contain areas of organic soils, with three having improved grasslands & three conifer afforested. The dynamic response characteristics (DRCs) that describe detailed DOC behaviour through sequences of storms are simulated using the latest identification routines for continuous time transfer function (CT-TF) models within the Matlab-based CAPTAIN toolbox (some incorporating nonlinear components). To our knowledge this is the first application of CT-TFs to modelling DOC processes. Furthermore this allows a data-based mechanistic (DBM) modelling philosophy to be followed where no assumptions about processes are defined a priori (given that dominant processes are often not known before analysis) & where the information contained in the time-series is used to identify multiple structures of models that are statistically robust. Within the final stage of DBM, biogeochemical & hydrological processes are interpreted from those models that are observable from the available stream time-series. We show that this approach can simulate the key features of DOC dynamics within & between storms & that some of the resultant response characteristics change with varying DOC processes in different seasons. Through the use of MISO (multiple-input single-output) models we demonstrate the relative importance of different variables (e.g., rainfall, temperature) in controlling DOC responses. The contrasting behaviour of the six experimental catchments is also reflected in differing response characteristics. These characteristics are shown to contribute to understanding of basin-integrated DOC export processes & to the ecosystem service impacts of DOC & color on commercial water treatment within the surrounding water supply basins.
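
    As a hedged illustration of the transfer-function idea (the study itself identifies continuous-time models with the Matlab CAPTAIN toolbox, whose routines are not reproduced here), a first-order discrete-time analogue can be fitted by ordinary least squares; the variable names below are hypothetical.

    ```python
    import numpy as np

    def fit_first_order_tf(u, y):
        """Least-squares estimate of (a, b) in y[t] = a*y[t-1] + b*u[t-1],
        e.g. u = streamflow (input) and y = DOC load (output)."""
        X = np.column_stack([y[:-1], u[:-1]])
        (a, b), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        return a, b

    def drc(a, b, dt=0.25):
        """Dynamic response characteristics implied by the fitted model:
        time constant (hours, for 15-minute data) and steady-state gain."""
        tau = -dt / np.log(a) if 0.0 < a < 1.0 else float("inf")
        gain = b / (1.0 - a)
        return tau, gain
    ```

    The time constant and steady-state gain returned by drc() are simple examples of the dynamic response characteristics (DRCs) referred to above.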

  11. Chemical composition and biological activity of Abies alba and A. koreana seed and cone essential oils and characterization of their seed hydrolates.

    PubMed

    Wajs-Bonikowska, Anna; Sienkiewicz, Monika; Stobiecka, Agnieszka; Maciąg, Agnieszka; Szoka, Łukasz; Karna, Ewa

    2015-03-01

    The chemical composition, including the enantiomeric excess of the main terpenes, the antimicrobial and antiradical activities, as well as the cytotoxicity of Abies alba and A. koreana seed and cone essential oils were investigated. Additionally, their seed hydrolates were characterized. In the examined oils and hydrolates, a total of 174 compounds were identified, which comprised 95.6-99.9% of the volatiles. The essential oils were mainly composed of monoterpene hydrocarbons, whereas the composition of the hydrolates, differing from the seed oils of the corresponding fir species, consisted mainly of oxygenated derivatives of sesquiterpenes. The seed and cone essential oils of both firs exhibited DPPH-radical-scavenging properties and low antibacterial activity against the bacterial strains tested. Moreover, they evoked only low cytotoxicity towards normal fibroblasts and the two cancer cell lines MCF-7 and MDA-MB-231. At concentrations up to 50 μg/ml, all essential oils were safe in relation to normal fibroblasts. Although they induced cytotoxicity towards the cancer cells at concentrations slightly lower than those required for the inhibition of fibroblast proliferation, their influence on cancer cells was weak, with IC50 values similar to those observed towards normal fibroblasts. Copyright © 2015 Verlag Helvetica Chimica Acta AG, Zürich.

  12. Reply to comment on 'Investigating ponding depth and soil detachability for a mechanistic erosion model using a simple experiment' by Gao, B., et al., 2003. Journal of Hydrology 277, 116-124

    NASA Astrophysics Data System (ADS)

    Rose, C. W.; Gao, Bin; Walter, M. T.; Steenhuis, T. S.; Parlange, J.-Y.; Nakano, K.; Hogarth, W. L.

    2004-04-01

    Kinnell [J. Hydrol. XXXX] explained that the conclusions drawn from the critical experiments reported by Gao et al. [J. Hydrol. 277 (2003) 116-124] were in agreement with his findings and those of others. This reply emphasizes the practical significance of the Gao et al. findings for field erosion studies.

  13. Single-cell protein from waste cellulose

    NASA Technical Reports Server (NTRS)

    Dunlap, C. E.; Callihan, C. D.

    1973-01-01

    The recycle, reuse, or reclamation of single cell protein from liquid and solid agricultural waste fibers by a fermentation process is reported. It is shown that cellulose comprises the bulk of the fibers at 50% to 55% of the dry weight of the refuse and that its biodegradability is of prime importance in the choice of a substrate. The application of sodium hydroxide followed by heat and pressure serves to de-polymerize and disrupt lignin structure while swelling the cellulose to increase water uptake and pore volume. Some of the lignin, hemi-celluloses, ash, and cellulose of the material is hydrolized and solubilized. Introduction of microorganisms to the substrate fibers mixed with nutrients produces continuous fermentation of cellulose for further protein extraction and purification.

  14. Antimicrobial Activities of a Plethora of Medicinal Plant Extracts and Hydrolates against Human Pathogens and Their Potential to Reverse Antibiotic Resistance

    PubMed Central

    Njimoh, Dieudonné Lemuh; Assob, Jules Clement N.; Mokake, Seraphine Ebenye; Nyhalah, Dinga Jerome; Yinda, Claude Kwe; Sandjon, Bertrand

    2015-01-01

    Microbial infections remain a scourge of humanity, owing to the lack of vaccines against some infections, the emergence of drug-resistant phenotypes, and the resurgence of infections, among other factors. The continuous quest for novel therapeutic approaches remains imperative. Here we (i) assessed the effects of extracts/hydrolates of some medicinal plants on pathogenic microorganisms and (ii) evaluated the inhibitory potential of the most active ones in combination with antibiotics. Extract E03 had the highest diameter of zone of inhibition (DZI, 25 mm). Extracts E05 and E06 were active against all microorganisms tested. The MICs and MBCs of the methanol extracts ranged from 16.667 × 10(3) μg/mL down to 2 μg/mL, and those of the hydrolates from 0.028 to 333333 ppm. Extract E30 had the highest activity, especially against S. saprophyticus (MIC of 6 ppm) and E. coli (MIC of 17 ppm). Combination with conventional antibiotics was shown to overcome resistance, especially with E30. Analyses of the extracts revealed the presence of alkaloids, flavonoids, triterpenes, steroids, phenols, and saponins. These results justify the use of these plants in traditional medicine and the practice of supplementing decoctions/concoctions with conventional antibiotics. Nauclea pobeguinii (E30), the most active and synergistic of all these extracts, and some hydrolates with antimicrobial activity need further exploration for the development of novel antimicrobials. PMID:26180528

  15. Swift delineation of flood-prone areas over large European regions

    NASA Astrophysics Data System (ADS)

    Tavares da Costa, Ricardo; Castellarin, Attilio; Manfreda, Salvatore; Samela, Caterina; Domeneghetti, Alessio; Mazzoli, Paolo; Luzzi, Valerio; Bagli, Stefano

    2017-04-01

    According to the European Environment Agency (EEA Report No 1/2016), a significant share of the European population is estimated to be living on or near a floodplain, with Italy having the highest population density in flood-prone areas among the countries analysed. This tendency, tied to event frequency and magnitude (e.g., the floods of 24 November 2016 in Italy) and to the fact that river floods may occur at large scales and at a transboundary level, where data are often sparse, presents a challenge for flood-risk management. The availability of consistent flood hazard and risk maps during the prevention, preparedness, response and recovery phases is a valuable and important step forward in improving the effectiveness, efficiency and robustness of evidence-based decision making. The present work aims at testing and discussing the usefulness of pattern recognition techniques based on geomorphologic indices (Manfreda et al., J. Hydrol. Eng., 2011; Degiorgis et al., J. Hydrol., 2012; Samela et al., J. Hydrol. Eng., 2015) for the simplified mapping of river flood-prone areas at large scales. The techniques are applied to 25 m Digital Elevation Models (DEM) of the Danube, Po and Severn river watersheds, obtained from the Copernicus data and information funded by the European Union - EU-DEM layers. Results are compared to the Pan-European flood hazard maps derived by Alfieri et al. (Hydrol. Proc., 2013) using a set of distributed hydrological (LISFLOOD, van der Knijff et al., Int. J. Geogr. Inf. Sci., 2010, employed within the European Flood Awareness System, www.efas.eu) and hydraulic models (LISFLOOD-FP, Bates and De Roo, J. Hydrol., 2000). Our study presents different calibration and cross-validation exercises of the DEM-based mapping algorithms to assess to what extent, and with what accuracy, they can be reproduced over different regions of Europe. This work is being developed under the System-Risk project (www.system-risk.eu), which received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 under the Marie Skłodowska-Curie Grant Agreement No. 676027. Keywords: flood hazard, data-scarce regions, large-scale studies, pattern recognition, linear binary classifiers, basin geomorphology, DEM.
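
    A minimal sketch of the kind of DEM-based linear binary classifier cited above (in the spirit of Samela et al., 2015); the index coefficients, threshold grid and input arrays are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def geomorphic_flood_index(A, H, b=0.01, n=0.3, eps=1e-6):
        """ln(h_r / H) per DEM cell, where h_r ~ b * A**n is a hydraulic-scaling
        water depth from the contributing area A of the nearest stream cell and
        H is the elevation difference to that cell."""
        hr = b * np.power(A, n)
        return np.log(hr / np.maximum(H, eps))

    def calibrate_threshold(index, reference, taus):
        """Linear binary classifier: flood-prone where index >= tau. Pick the
        tau minimizing false-positive + false-negative rates against a
        reference (boolean) flood-hazard map."""
        best_tau, best_err = None, np.inf
        for tau in taus:
            pred = index >= tau
            err = np.mean(pred & ~reference) + np.mean(~pred & reference)
            if err < best_err:
                best_tau, best_err = tau, err
        return best_tau
    ```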

  16. Comment on "Event-based soil loss models for construction sites" by Trenouth and Gharabaghi, J. Hydrol. doi: 10.1016/j.jhydrol.2015.03.010

    NASA Astrophysics Data System (ADS)

    Kinnell, P. I. A.

    2015-09-01

    Trenouth and Gharabaghi (2015) present two models which replace the EI30 index used as the event erosivity index in the USLE/RUSLE with ones that include runoff and values of EI30 raised to powers that differ from 1.0 as the event erosivity factor in modelling soil loss for construction sites. Their analysis of the application of these models focused on data from 5 locations as a whole, but did not show how the models worked at each location. Practically, the ability to predict sediment yields at a specific location is more relevant than the capacity of a model to predict sediment yields globally. Also, the mathematical structure of their proposed models shows little regard for the physical processes involved in causing erosion and sediment yield. There is still a need to develop event-based empirical models for construction sites that are robust because they give proper consideration to the erosion processes involved, and take account of the fact that sediment yield is usually determined from measurements of suspended load, whereas soil loss at the scale for which the USLE/RUSLE model was developed includes both suspended load and bed load.
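
    Purely as an illustration of the class of models under discussion (not Trenouth and Gharabaghi's actual formulation), an event model of the form Y = a * Q^b * EI30^c can be fitted per location by log-linear least squares, which directly addresses the comment's point about site-specific predictive ability:

    ```python
    import numpy as np

    def fit_event_model(Q, EI30, Y):
        """Return (a, b, c) in Y = a * Q**b * EI30**c from
        ln Y = ln a + b ln Q + c ln EI30 (all inputs positive per-event values).
        Fit this separately for each location rather than pooling sites."""
        X = np.column_stack([np.ones_like(Q), np.log(Q), np.log(EI30)])
        coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
        ln_a, b, c = coef
        return np.exp(ln_a), b, c
    ```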

  17. The effect of CO2 activation temperature on the physical and electrochemical properties of activated carbon monolith from banana stem waste

    NASA Astrophysics Data System (ADS)

    Taer, E.; Susanti, Y.; Awitdrus, Sugianto, Taslim, R.; Setiadi, R. N.; Bahri, S.; Agustino, Dewi, P.; Kurniasih, B.

    2018-02-01

    The effect of CO2 activation on the synthesis of activated carbon monolith from banana stem waste has been studied. Physical characteristics such as density, degree of crystallinity, surface morphology and elemental content were analyzed, supporting the finding of excellent electrochemical properties for the supercapacitor. The synthesis of the activated carbon electrode began with a pre-carbonization process at a temperature of 250°C for 2.5 h. The process was then continued by chemical activation using KOH as the activating agent at a concentration of 0.4 M. The pellets were formed under 8 tons of hydraulic pressure. All the samples were carbonized at a temperature of 600°C, followed by physical activation using CO2 gas at temperatures of 800°C, 850°C, 900°C and 950°C for 2 h. The carbon content increased with increasing temperature, and the optimum temperature was 900°C. The specific capacitance depends on the activation temperature, with the highest specific capacitance of 104.2 F/g at an activation temperature of 900°C.

  18. Influence of pH on wetting kinetics of a pine forest soil

    NASA Astrophysics Data System (ADS)

    Amer, Ahmad; Schaumann, Gabriele; Diehl, Dörte

    2014-05-01

    Water repellent properties of organic matter significantly alter soil water dynamics. Various environmental factors control the appearance and breakup of repellency in soil. Besides water content and temperature, pH also exerts an influence on soil water repellency, although investigations have yielded partly ambiguous results: some found increasing repellency with increasing pH (Terashima et al. 2004; Duval et al. 2005), others with decreasing pH (Karnok et al. 1993; Roper 2005), and some found repellency maxima at intermediate pH and an increase with both decreasing and increasing pH (Bayer and Schaumann 2007; Diehl et al. 2010). The breakup of repellency may be observed via the time-dependent sessile drop contact angle (TISED). With water contact time, the soil-water contact angle decreases until complete wetting is reached. Diehl and Schaumann (2007) calculated the activation energy of the wetting process from the rate of sessile drop wetting obtained at different temperatures and drew conclusions on the chemical or physical nature of repellency. The present study addresses the influence of pH on the wetting kinetics of soil. Therefore, TISED of soil was determined as a function of pH and temperature. We used upper soil samples (0 - 10 cm) from a pine forest in the southwest of Germany (Rheinland-Pfalz). Samples were air-dried and sieved < 1.0 mm; the pH was modified by NH3 and HCl gas (Diehl et al. 2010) and measured electrometrically in 0.01 M CaCl2 solution. TISED measurements were conducted at 10, 20 and 30 °C using an OCA 15 Contact Angle Meter (Dataphysics, Germany) on three replications for each soil sample. The apparent work of adhesion was calculated, plotted vs. time and fitted mathematically using a double exponential function. The rate constants of wetting were used to determine the activation energy via the Arrhenius equation. First results indicated that, despite comparable initial contact angles, pH alteration strongly changed the wetting rate, suggesting maximum wetting resistance at the natural pH of 4.3 and decreasing wetting resistance at lower and at higher pH. The poster will present further current results of the ongoing study and discuss the activation energy of the wetting process in dependence of artificially altered soil pH. References: Bayer, J. V. and G. E. Schaumann (2007). Hydrol. Processes 21(17): 2266 - 2275. Diehl, D., J. V. Bayer, et al. (2010). Geoderma 158(3-4): 375-384. Diehl, D. and G. E. Schaumann (2007). Hydrol. Processes 21(17): 2255 - 2265. Duval, J. F. L., K. J. Wilkinson, et al. (2005). Environ Sci Technol 39(17): 6435-6445. Karnok, K. A., E. J. Rowland, et al. (1993). Agron J 85(5): 983-986. Roper, M. M. (2005). Aust J Soil Res 43: 803-810. Terashima, M., M. Fukushima, et al. (2004). Colloids and Surfaces, A: Physicochemical and Engineering Aspects 247(1-3): 77-83.
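
    The fitting chain described above can be sketched as follows, under assumed functional forms: a double exponential for the apparent work of adhesion and an Arrhenius plot of the resulting rate constants for the activation energy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    def double_exp(t, w0, a1, k1, a2, k2):
        """Apparent work of adhesion W(t) approaching w0 with a fast (k1)
        and a slow (k2) wetting component."""
        return w0 - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

    def activation_energy(temps_K, rate_constants):
        """Ea (J/mol) from the slope of the Arrhenius plot ln(k) vs 1/T."""
        R = 8.314  # gas constant, J/(mol K)
        slope = linregress(1.0 / np.asarray(temps_K),
                           np.log(rate_constants)).slope
        return -slope * R

    # usage sketch: for each temperature, fit the measured series (t, W) with
    #   popt, _ = curve_fit(double_exp, t, W, p0=[W.max(), 1, 0.1, 1, 0.01])
    # collect one rate constant (e.g. popt[2]) per temperature, then call
    # activation_energy([283.15, 293.15, 303.15], [k10, k20, k30]).
    ```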

  19. Understanding processes that generate flash floods in the arid Judean Desert to the Dead Sea - a measurement network

    NASA Astrophysics Data System (ADS)

    Hennig, Hanna; Rödiger, Tino; Laronne, Jonathan B.; Geyer, Stefan; Merz, Ralf

    2016-04-01

    Flash floods in (semi-)arid regions are fascinating in their suddenness and can be harmful for humans, infrastructure, industry and tourism. Because such floods are generated within minutes, an early warning system is essential, and a hydrological model is required to quantify flash floods. Current models to predict flash floods are often based on simplified concepts and/or on concepts which were developed for humid regions. To relate such models more closely to local conditions, the processes within catchments where flash floods occur require consideration. In this study we present a monitoring approach to decipher different flash flood generating processes in the ephemeral Wadi Arugot on the western side of the Dead Sea. To understand the rainfall input, a dense rain gauge network was installed. The locations of the rain gauges were chosen based on land use, slope and soil cover. The spatiotemporal variation of rain intensity will also be available from radar backscatter. Water-level pressure sensors located at the outlets of major tributaries have been deployed to analyze in which part of the catchment water is generated. To identify the importance of soil moisture preconditions, two cosmic ray sensors have been deployed. At the outlet of the Arugot, water is sampled and the level is monitored. To determine water discharge more accurately, water velocity is measured using portable radar velocimetry. A first analysis of flash flood processes will be presented following the FLEX-Topo concept (Savenije, 2010), where each landscape type is represented using an individual hydrological model according to the processes within the three hydrological response units: plateau, desert and outlet. References: Savenije, H. H. G.: HESS Opinions "Topography driven conceptual modelling (FLEX-Topo)", Hydrol. Earth Syst. Sci., 14, 2681-2692, doi:10.5194/hess-14-2681-2010, 2010.

  20. Vadose Zone Monitoring as a Key to Groundwater Protection from Pollution Hazard

    NASA Astrophysics Data System (ADS)

    Dahan, Ofer

    2016-04-01

    Minimization of subsurface pollution depends greatly on the capability to provide real-time information on the chemical and hydrological properties of the percolating water. Today, most monitoring programs are based on observation wells that enable data acquisition from the saturated part of the subsurface. Unfortunately, identification of pollutants in well water is clear evidence that the contaminants have already crossed the entire vadose zone and accumulated in the aquifer water to detectable concentrations. Therefore, effective monitoring programs that aim at protecting groundwater from pollution hazard should include vadose zone monitoring technologies that are capable of providing real-time information on the chemical composition of the percolating water. Obviously, identification of pollution processes in the vadose zone may provide an early warning of potential risk to groundwater quality, long before contaminants reach the water table and accumulate in the aquifers. Since productive agriculture must inherently include down-leaching of excess lower-quality water, understanding the mechanisms controlling transport and degradation of pollutants in the unsaturated zone is crucial for water resources management. A vadose-zone monitoring system (VMS), which was specially developed to enable continuous measurement of the hydrological and chemical properties of percolating water, was used to assess the impact of various agricultural setups on groundwater quality, including: (a) intensive organic and conventional greenhouses, (b) a citrus orchard and open-field crops, and (c) dairy farms. In these applications, frequent sampling of vadose zone water for chemical and isotopic analysis, along with continuous measurement of water content, was used to assess the link between agricultural setups and groundwater pollution potential. Transient data on variation in water content, along with solute breakthrough at multiple depths, were used to calibrate flow and transport models. These models were then used to assess the long-term impact of various agricultural setups on the quantity and quality of groundwater recharge. Relevant publications: Turkeltaub et al., WRR, 2016; Turkeltaub et al., J. Hydrol., 2015; Dahan et al., HESS, 2014; Baram et al., J. Hydrol., 2012.

  1. Climate impact on groundwater systems: the past is the key to the future

    NASA Astrophysics Data System (ADS)

    van der Ploeg, Martine; Cendón, Dioni; Haldorsen, Sylvi; Chen, Jinyao; Gurdak, Jason; Tujchneider, Ofelia; Vaikmäe, Rein; Purtschert, Roland; Chkir Ben Jemâa, Najiba

    2013-04-01

    Groundwater is a significant part of the global hydrological cycle and supplies fresh drinking water to almost half of the world's population. While groundwater supplies are buffered against short-term effects of climate variability, they can be impacted over longer time scales through changes in precipitation, evaporation, recharge rate, melting of glaciers or permafrost, vegetation, and land use. Moreover, uncontrolled groundwater extraction has led, and will lead, to irreversible depletion of fresh water resources in many areas. The impact of climate variability and groundwater extraction on the resilience of groundwater systems is still not fully understood (Green et al. 2011). Groundwater stores environmental and climatic information acquired during the recharge process, which integrates different signals, like recharge temperature, origin of precipitation, and dissolved constituents. This information can be used to estimate palaeo recharge temperatures, palaeo atmospheric dynamics and the residence time of groundwater within the aquifer (Stute et al. 1995, Clark and Fritz 1997, Collon et al. 2000, Edmunds et al. 2003, Cartwright et al. 2007, Kreuzer et al. 2009, Currell et al. 2010, Raidla et al. 2012, Salem et al. 2012). The climatic signals incorporated by groundwater during recharge have the potential to provide a regionally integrated proxy of climatic variations at the time of recharge. Groundwater palaeoclimate information is affected by diffusion-dispersion processes (Davison and Airey, 1982) and/or water-rock interaction (Clark and Fritz, 1997), making palaeoclimate information deduced from groundwater an inherently low resolution record. While the signal resolution can be limited, recharge follows major climatic events and, more importantly, shows how those aquifers and their associated recharge vary under climatic forcing. While the characterization of groundwater resources, surface-groundwater interactions and their link to the global water cycle are an important focus, little attention has been given to groundwater as a potential record of past climate variations. A groundwater system's history is vital to forecast its vulnerability under future and potentially adverse climatic changes. By processing groundwater information from vast regions and different continents, recharge and palaeoclimate can be correlated at a global scale. To successfully evaluate the sustainability of groundwater resources, "the past is the key to the future". To address the identified lack of palaeoclimatic data available from groundwater studies, a global collaboration called Groundwater@Global Palaeoclimate Signals (www.gw-gps.com) was set up in 2011 and already has more than 70 participants from 5 continents. Since 2012, G@GPS has received seed funding to support meetings from the International Geoscience Programme, the International Union for Quaternary Research and the UNESCO-GRAPHIC International Hydrological Programme. This collaboration targets groundwater basins on five continents (Africa, America, Asia, Australia, Europe) containing vast groundwater resources on which tens of millions of people are estimated to depend. We will present G@GPS, show examples from groundwater basins, and discuss possibilities to integrate groundwater information from these basins. References Cartwright, I. et al. 2007. Constraining modern and historical recharge from bore hydrographs, 3H, 14C, and chloride concentrations: Applications to dual-porosity aquifers in dryland salinity areas, Murray Basin, Australia. J.
Hydrol. 332: 69-92. Clark, I. and P. Fritz. 1997. Environmental isotopes in hydrogeology, Lewis Publishers. Collon, P. et al. 2000. 81Kr in the Great Artesian Basin, Australia: a new method for dating very old groundwater. Earth and Planetary Science Letters 182: 103-113. Currell, M. J. et al. 2010. Recharge history and controls on groundwater quality in the Yuncheng Basin, north China. J. Hydrol. 385: 216-229. Davison, M. R. and P. L. Airey. 1982. The effect of dispersion on the establishment of a paleoclimatic record from groundwater. J. Hydrol. 58: 131-147. Edmunds, W. M. et al. 2003. Groundwater evolution in the Continental Intercalaire aquifer of southern Algeria and Tunisia: trace element and isotopic indicators. Applied Geochemistry 18: 805-822. Green, T.R. et al. 2011. Beneath the surface of global change: Impacts of climate change on groundwater. J. Hydrol. 405: 532-560. Kreuzer, A. M. et al. 2009. A record of temperature and monsoon intensity over the past 40 kyr from groundwater in the North China Plain. Chemical Geology 259: 168-180. Raidla, V., Kirsimäe, K., Vaikmäe, R., Kaup, E., and Martma, T. 2012. Carbon isotope systematics of the Cambrian-Vendian aquifer system in the northern Baltic Basin: Implications to the age and evolution of groundwater. Applied Geochemistry 27(10): 2042-2052. Salem, S.B.H., Chkir, N., Zouari, K., Cognard-Plancq, A. L., Valles, V., and Marc, V. 2012. Natural and artificial recharge investigation in the Zéroud Basin, Central Tunisia: impact of Sidi Saad Dam storage. Environmental Earth Sciences 66: 1099-1110. Stute, M., Forster, M., Frischkorn, H., Serejo, A., Clark, J. F., Schlosser, P., Broecker, W. S., and Bonani, G. 1995. Cooling of tropical Brazil (5 °C) during the Last Glacial Maximum. Science 269: 379-383.

  2. Detection of dominant runoff generation processes in flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro; Gioia, Andrea; Manfreda, Salvatore

    2010-05-01

    The investigation of hydrologic similarity represents one of the most exciting challenges faced by hydrologists in the last few years, in order to reduce uncertainty in flood prediction in ungauged basins (e.g., the IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al., 2003). In perspective, the identification of dominant runoff generation mechanisms may provide a strategy for catchment classification and the identification of hydrologically homogeneous regions. In this context, we exploited the framework of theoretically derived flood probability distributions in order to interpret the physical behavior of real basins. Recent developments in theoretically derived distributions have highlighted that in a given basin different runoff processes may coexist and modify or affect the shape of flood distributions. The identification of dominant runoff generation mechanisms represents a key signature of flood distributions, providing insight into hydrologic similarity. Iacobellis and Fiorentino (2000) introduced a novel distribution of flood peak annual maxima, the "IF" distribution, which exploited the variable source area concept coupled with a runoff threshold having scaling properties. More recently, Gioia et al. (2008) introduced the Two Component-IF (TCIF) distribution, generalizing the IF distribution, based on two different threshold mechanisms, associated with ordinary and extraordinary events, respectively. Indeed, ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, often responsible for the high skewness of flood distributions, are triggered by severe rainfalls exceeding a threshold storage in a large portion of the basin. Within this scheme, we focused on the application of both models (IF and TCIF) over a considerable number of catchments belonging to different regions of Southern Italy. In particular, we stressed, as a case of strong general interest in the field of statistical hydrology, the role of procedures for parameter estimation and techniques for model selection in the case of nested distributions. References Gioia, A., V. Iacobellis, S. Manfreda, M. Fiorentino, Runoff thresholds in derived flood frequency distributions, Hydrol. Earth Syst. Sci., 12, 1295-1307, 2008. Iacobellis, V., and M. Fiorentino (2000), Derived distribution of floods based on the concept of partial area coverage with a climatic appeal, Water Resour. Res., 36(2), 469-482. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.

  3. Flood regimes in a changing world: What do we know?

    NASA Astrophysics Data System (ADS)

    Bloeschl, G.

    2015-12-01

    There has been a surprisingly large number of major floods around the world in recent years, which suggests that floods may have increased and will continue to increase in the next decades. However, the realism of such changes is still hotly debated in the literature. In this presentation I will argue that a fresh look is needed at the flood change problem in terms of the causal factors, including river training, land use changes and climate variability. Analysing spatial patterns of dynamic flood characteristics helps us learn from the rich diversity of flood processes across the landscape. I will present a number of examples across Europe to illustrate the range of flood generation processes and the causal factors of changes in the flood regime. On the basis of these examples, I will demonstrate how comparative hydrology can assist in learning from the differences in flood characteristics between catchments, both for present and future conditions. A focus on the interactions of the natural and human water systems will be instrumental in making meaningful statements about future floods in a changing world. References Hall et al. (2014) Understanding Flood Regime Changes in Europe: A state of the art assessment. Hydrol. Earth Sys. Sc., 18, 2735-2772. Blöschl et al. (2015) Increasing river floods: fiction or reality? Wiley Interdisciplinary Reviews: Water. doi: 10.1002/wat2.1079

  4. Summary Report of the NSF/EPA WATERS Network Workshop

    EPA Science Inventory

    The National Science Foundation (NSF) and The U.S. Environmental Protection Agency (EPA) organized a workshop to support The WATer and Environmental Research Systems (WATERS) Network project. The WATERS Network is a new joint initiative of the environmental engineering and hydrol...

  5. Inter-Comparison of Retrieved and Modelled Soil Moisture and Coherency of Remotely Sensed Hydrology Data

    NASA Astrophysics Data System (ADS)

    Kolassa, Jana; Aires, Filipe

    2013-04-01

    A neural network algorithm has been developed for the retrieval of Soil Moisture (SM) from global satellite observations. The algorithm estimates soil moisture from a synergy of passive and active microwave, infrared and visible satellite observations in order to capture the different SM variabilities that the individual sensors are sensitive to. The advantages and drawbacks of each satellite observation have been analysed, and the information type and content carried by each observation have been determined. A global data set of monthly mean soil moisture for the 1993-2000 period has been computed with the neural network algorithm (Kolassa et al., in press, 2012). The resulting soil moisture retrieval product has then been used in an inter-comparison study including soil moisture from (1) the HTESSEL model (Balsamo et al., 2009), (2) the WACMOS satellite product (Liu et al., 2011), and (3) in situ measurements from the International Soil Moisture Network (Dorigo et al., 2011). The analysis showed that the satellite remote sensing products are well suited to capture the spatial variability of the in situ data and even show the potential to improve the modelled soil moisture. Both satellite retrievals also display a good agreement with the temporal structures of the in situ data; however, HTESSEL appears to be more suitable for capturing the temporal variability (Kolassa et al., in press, 2012). The use of this type of neural network approach is currently being investigated as a retrieval option for the SMOS mission. Our soil moisture retrieval product has also been used in a coherence study with precipitation data from GPCP (Adler et al., 2003) and inundation estimates from GIEMS (Prigent et al., 2007). It was investigated on a global scale whether the three observation-based datasets are coherent with each other and show the expected behaviour. For most regions of the Earth, the datasets were consistent and the behaviour observed could be explained with the known hydrological processes. In addition, a regional analysis was conducted over several large river basins, including a detailed analysis of the time-lagged correlations between the three datasets and the spatial propagation of observed signals. Results appear consistent with the knowledge of the hydrological processes governing the individual basins. References Adler, R.F., G.J. Huffman, A. Chang, R. Ferraro, P. Xie, J. Janowiak, B. Rudolf, U. Schneider, S. Curtis, D. Bolvin, A. Gruber, J. Susskind, and P. Arkin (2003), The Version 2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979-Present). J. Hydrometeor., 4, 1147-1167. Balsamo, G., Viterbo, P., Beljaars, A., van den Hurk, B., Hirschi, M., Betts, A. and Scipal, K. (2009), A Revised Hydrology for the ECMWF Model: Verification from Field Site to Terrestrial Water Storage and Impact in the Integrated Forecast System, J. Hydrometeor., 10, 623-643. Dorigo, W. A., Wagner, W., Hohensinn, R., Hahn, S., Paulik, C., Xaver, A., Gruber, A., Drusch, M., Mecklenburg, S., van Oevelen, P., Robock, A., and Jackson, T. (2011), The International Soil Moisture Network: a data hosting facility for global in situ soil moisture measurements, Hydrol. Earth Syst. Sci., 15, 1675-1698. Kolassa, J., Aires, F., Polcher, J., Prigent, C., and Pereira, J. (2012), Soil moisture Retrieval from Multi-instrument Observations: Information Content Analysis and Retrieval Methodology, J. Geophys. Res., Liu, Y. Y., Parinussa, R. M., Dorigo, W. A., De Jeu, R. A.
M., Wagner, W., van Dijk, A. I. J. M., McCabe, M. F., and Evans, J. P. (2011), Developing an improved soil moisture dataset by blending passive and active microwave satellite-based retrievals, Hydrol. Earth Syst. Sci., 15, 425-436. Prigent, C., F. Papa, F. Aires, W. B. Rossow, and E. Matthews (2007), Global inundation dynamics inferred from multiple satellite observations, 1993-2000, J. Geophys. Res., 112, D12107, doi:10.1029/2006JD007847.

  6. A Streamlined Monitoring Framework for Low Impact Development Stormwater Management Practices - Albuquerque

    EPA Science Inventory

    In many respects, the collection of monitoring data has become standard "boilerplate" in grant proposals that fund non-point source management projects. This approach typically calls for a full suite of parameters to be measured, even if the grant objectives are such that hydrol...

  7. Corrigendum to "Data-worth analysis through probabilistic collocation-based Ensemble Kalman Filter" [J. Hydrol. 540 (2016) 488-503]

    NASA Astrophysics Data System (ADS)

    Dai, Cheng; Xue, Liang; Zhang, Dongxiao; Guadagnini, Alberto

    2018-02-01

    The authors regret that in the Acknowledgments Section an incorrect Grant Agreement number was reported for the Project "Furthering the knowledge Base for Reducing the Environmental Footprint of Shale Gas Development" FRACRISK. The correct Grant Agreement number is 636811.

  8. Data Assimilation Considerations for Improved Ocean Predictability During the Gulf of Mexico Grand Lagrangian Deployment (GLAD)

    DTIC Science & Technology

    2014-09-16

    scales below the mesoscale are achievable. Other planned observation systems include the Surface Water/Ocean Topography (SWOT) satellite mission...atmospheric. Oceanic Hydrol. Appl. II, Chapter 13. Cummings, J., Bertino, L., Brasseur, P., Fukumori, I., Kamachi, M., Martin, M.J., Mogensen, K

  9. Problems and Prospects of Swat Model Application on an Arid/Semi-Arid Watershed in Arizona

    EPA Science Inventory

    Hydrological characteristics in the semi-arid southwest create unique challenges to watershed modellers. Streamflow in these regions is largely dependent on seasonal, short term, and high intensity rainfall events. The objectives of this study are: 1) to analyze the unique hydrol...

  10. Skin beautification with oral non-hydrolized versions of carnosine and carcinine: Effective therapeutic management and cosmetic skincare solutions against oxidative glycation and free-radical production as a causal mechanism of diabetic complications and skin aging.

    PubMed

    Babizhayev, Mark A; Deyev, Anatoliy I; Savel'yeva, Ekaterina L; Lankin, Vadim Z; Yegorov, Yegor E

    2012-10-01

    Advanced glycation Maillard reaction end products (AGEs) cause the complications of diabetes and skin aging, primarily via adventitious glycation and cross-linking of proteins. Long-lived proteins such as structural collagen are particularly implicated as pathogenic targets of AGE processes. The formation of α-dicarbonyl compounds represents an important step for cross-linking proteins in the glycation or Maillard reaction. The purpose of this study was to investigate the contribution of glycation, coupled to the glycation free-radical oxidation reactions, as markers of protein damage in the aging of skin tissue proteins and in diabetes. To elucidate the mechanism of the cross-linking reaction, we studied the reaction between a three-carbon α-dicarbonyl compound, methylglyoxal, and amino acids using EPR spectroscopy, a spectrophotometric kinetic assay of superoxide anion production at the site of glycation, and a chemiluminescence technique. The transglycating activity, the inhibition of transition metal ion peroxidative catalysts, the resistance of carnosine-mimetic peptide-based compounds to hydrolysis by carnosinase, and the protective effects of carnosine, carcinine and related compounds against the oxidative damage of proteins and lipid membranes were assessed in a number of biochemical and model systems. A 4-month randomized, double-blind, controlled study was undertaken including 42 subjects, in which an oral supplement of non-hydrolized carnosine (Can-C Plus® formulation) was tested against placebo for 3 months, followed by a 1-month supplement-free period for both groups to assess lasting effects. Assessment of the age-related skin parameters and oral treatment efficacy measurements included objective skin surface evaluation with Visioscan® VC 98 and visual assessment of skin appearance parameters. The results together confirm that a direct one-electron transfer between a Schiff base methylglyoxal dialkylimine (or its protonated form) and methylglyoxal is responsible for the generation of the cross-linked radical cation and the radical counteranion of methylglyoxal. Under aerobic conditions, molecular oxygen can then accept an electron from the methylglyoxal anion to generate the superoxide radical anion, causing the propagation of oxidative stress chain reactions in the presence of transition metal ions. Carnosine stabilized against enzymatic hydrolysis, carcinine, and leucyl-histidylhydrazide, in patented formulations thereof, demonstrate the Schiff bases' transglycating activities concomitant with glycation-site-specific antioxidant activities and protection of proprietary antioxidant enzymes in the skin during aging and with diabetic lesions. During oral supplementation with carnosine stabilized against enzymatic hydrolysis (Can-C Plus® formulation), the skin parameters investigated showed a continuous and significant improvement in the active group during the 3 months of supplementation as compared to placebo. Visual investigation showed improvement of the overall skin appearance and a reduction of fine lines. No treatment-related side effects were reported. The finding that already-formed AGE cross-links can be pharmacologically severed, and the attendant pathology thereby reversed, by non-hydrolized carnosine or carcinine in patented oral formulations thereof has broad implications for skin beautification and the therapeutics of the complications of diabetes and of skin diseases associated with aging.

  11. Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment

    NASA Astrophysics Data System (ADS)

    Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-04-01

    Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling the gap by simultaneously capturing topographic and bathymetric elevation information using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there has been no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: a geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area, but in this work we have used them to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and prediction of morphodynamics in a natural coastal system in response to climate change" (Steno Grant no. 10-081102) and by the Geocenter Denmark through the project "Closing the gap! - Coherent land-water environmental mapping (LAWA)" (Grant no. 4-2015).
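
    Of the processing steps named above, refraction correction is the most formulaic. A hedged first-order sketch (not the authors' exact procedure, which also involves the modelled water surface) based on Snell's law and the slower speed of light in water:

    ```python
    import numpy as np

    def refraction_correct(apparent_range, incidence_deg, n=1.33):
        """Correct a green-laser return below the detected water surface.
        apparent_range: in-water slant range computed with the in-air speed
        of light; incidence_deg: off-nadir angle of the beam at the water
        surface; n: refractive index of water (assumed ~1.33, flat surface).
        Returns (true_depth, horizontal_shift) of the bottom point."""
        theta_air = np.radians(incidence_deg)
        theta_w = np.arcsin(np.sin(theta_air) / n)   # Snell's law bending
        true_range = apparent_range / n              # light travels slower in water
        depth = true_range * np.cos(theta_w)         # vertical depth below surface
        shift = true_range * np.sin(theta_w)         # horizontal displacement
        return depth, shift
    ```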

  12. LABORATORY-SCALE ANALYSIS OF AQUIFER REMEDIATION BY IN-WELL VAPOR STRIPPING 2. MODELING RESULTS. (R825689C061)

    EPA Science Inventory

    Abstract

    The removal of volatile organic compounds (VOCs) from groundwater through in-well vapor stripping has been demonstrated by Gonen and Gvirtzman (1997, J. Contam. Hydrol., 00: 000-000) at the laboratory scale. The present study compares experimental breakthrough...

  13. Stochastic Generation of Monthly Rainfall Data

    NASA Astrophysics Data System (ADS)

    Srikanthan, R.

    2009-03-01

    Monthly rainfall data are generally needed in the simulation of water resources systems and in the estimation of water yield from large catchments. Monthly streamflow data generation models are usually applied to generate monthly rainfall data, but this presents problems in the many regions that have a significant number of months with no rainfall. In an earlier study, Srikanthan et al. (J. Hydrol. Eng., ASCE 11(3) (2006) 222-229) recommended the modified method of fragments to disaggregate the annual rainfall data generated by a first-order autoregressive model. The main drawback of this approach is the occurrence of similar patterns when only a short length of historic data is available. Porter and Pink (Hydrol. Water Res. Symp. (1991) 187-191) used synthetic fragments from a Thomas-Fiering monthly model to overcome this drawback. As an alternative, a new two-part monthly model is nested in an annual model to generate monthly rainfall data that preserve both the monthly and annual characteristics. This nested model was applied to generate rainfall data for seven rainfall stations located in the eastern and southern parts of Australia, and the results showed that the model performed satisfactorily.
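
    As a rough illustration of the disaggregation idea discussed above, the sketch below generates annual totals with a first-order autoregressive model and splits each of them into months using the method of fragments; the nearest-historic-year selection rule and all names are our own simplification, not the nested two-part model the abstract proposes.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ar1_annual(n_years, mean, std, rho):
        """First-order autoregressive generation of annual rainfall totals."""
        z = np.empty(n_years)
        z[0] = rng.normal()
        for t in range(1, n_years):
            z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal()
        return np.maximum(mean + std * z, 0.0)   # rainfall cannot be negative

    def fragments(annual, historic_monthly):
        """Method of fragments: split each generated annual total using the
        monthly proportions of the historic year with the closest total."""
        hist_annual = historic_monthly.sum(axis=1)
        out = np.empty((len(annual), 12))
        for i, a in enumerate(annual):
            j = np.argmin(np.abs(hist_annual - a))       # nearest historic year
            out[i] = a * historic_monthly[j] / hist_annual[j]
        return out
    ```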

  14. [Influence of incubation time on metabolites in mycelia of Paecilomyces militaris].

    PubMed

    Zhang, Delong; Li, Shulin; Lu, Ruili; Li, Kangle; Luo, Feifei; Peng, Fan; Hu, Fenglin

    2012-12-04

    To determine secondary metabolite production in mycelia of Paecilomyces militaris. Mycelia were cultured in plates with sabouraud dextrose agar yeast medium at 25 degrees C for 9 days. Sampling was done every day from the second to the ninth day. The secondary metabolites in the mycelia of Paecilomyces militaris were extracted with either methanol or ethyl acetate. The extracts were blended and analyzed by liquid chromatography-mass spectrometry (LC-MS). LC-MS data were collected and analyzed with MetaboAnalyst software. Principal component analysis indicates that the accumulation of secondary metabolites differs with incubation time. Hierarchical clustering analysis shows that the metabolic process of cationic compounds such as alkaloids, peptides and nucleosides can be divided into three stages, and that the metabolic process of anionic compounds such as organic acids and saccharides can be divided into two stages. Metabolite difference and heat map analyses show that: (1) The number of metabolites with significantly increased contents rose markedly in mycelia of Paecilomyces militaris on the second and third incubation days. The main species with increased contents were esters and their hydrolyzed products, destruxin B, variotin and some unidentified nitrogen-containing compounds. (2) The number of metabolites with significantly raised contents decreased markedly on the fourth and fifth incubation days. The main species with increased contents were ophiocordin and destruxin A. (3) Apart from peptide antibiotics such as several beauverolides, the metabolites with increased contents also included several organic acids, amino acids, rhamnose, trehalose, cerebroside and riboflavin during the sixth to ninth incubation days. The secondary metabolites in mycelia of Paecilomyces militaris were related significantly to the incubation time.

  15. An interactive modelling tool for understanding hydrological processes in lowland catchments

    NASA Astrophysics Data System (ADS)

    Brauer, Claudia; Torfs, Paul; Uijlenhoet, Remko

    2016-04-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS), a rainfall-runoff model for catchments with shallow groundwater (Brauer et al., 2014a,b). WALRUS explicitly simulates processes which are important in lowland catchments, such as the feedbacks between the saturated and unsaturated zones and between groundwater and surface water. WALRUS has a simple model structure and few parameters with physical connotations. Some default functions (which can be changed easily for research purposes) are implemented to facilitate application by practitioners and students. The effect of water management on hydrological variables can be simulated explicitly. The model description and applications are published in open access journals (Brauer et al., 2014a,b). The open source code (provided as an R package) and manual can be downloaded freely (www.github.com/ClaudiaBrauer/WALRUS). We organised a short course for Dutch water managers and consultants to become acquainted with WALRUS. We are now adapting this course into a stand-alone tutorial suitable for a varied, international audience. In addition, simple models can help teachers explain hydrological principles effectively. We used WALRUS to generate examples for simple interactive tools, which we will present at the EGU General Assembly. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geosci. Model Dev., 7, 2313-2332. C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrol. Earth Syst. Sci., 18, 4007-4028.
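
    WALRUS itself is documented in the cited papers and the R package. Purely to illustrate what a lumped conceptual rainfall-runoff model with few parameters looks like in code, here is a minimal single-reservoir sketch (our toy example, not the WALRUS equations):

    ```python
    def single_reservoir(precip, pet, k=0.05, s0=10.0):
        """Minimal lumped bucket model: one storage filled by effective
        rainfall and drained linearly (illustrative only)."""
        s, q = s0, []
        for p, e in zip(precip, pet):
            s = max(s + p - e, 0.0)   # water balance of the single store
            out = k * s               # linear outflow, k per time step
            s -= out
            q.append(out)
        return q
    ```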

  16. Ground Albedo Neutron Sensing (GANS) for Measurement of Integral Soil Water Content at the Small Catchment Scale

    NASA Astrophysics Data System (ADS)

    Rivera Villarreyes, C.; Baroni, G.; Oswald, S. E.

    2012-12-01

    Soil water content at the plot or hill-slope scale is an important link between local vadose zone hydrology and catchment hydrology. One of the largest initiatives to close the measurement gap of soil moisture between the point scale and remote sensing observations is the COSMOS network (Zreda et al., 2012). Here, cosmic-ray neutron sensing, which may be more precisely named ground albedo neutron sensing (GANS), is applied. The measuring principle is based on the crucial role of hydrogen as a neutron moderator compared to other landscape materials. Soil water content contained in a footprint of ca. 600 m diameter and a depth ranging down to a few decimeters is inversely correlated to the neutron flux at the air-ground interface. This approach is now implemented, e.g. in the USA (Zreda et al., 2012) and Germany (Rivera Villarreyes et al., 2011), owing to its simple installation and integral measurement of soil moisture at the small catchment scale. The present study performed ground albedo neutron sensing on farmland at two locations in Germany under different vegetative situations (cropped and bare field) and different seasonal conditions (summer, autumn and winter). Ground albedo neutrons were measured at (i) a farmland close to Potsdam and Berlin cropped with corn in 2010, sunflower in 2011 and winter rye in 2012, and (ii) a mountainous farmland catchment (Schaefertal, Harz Mountains) since mid-2011. In order to test the methodology, classical soil moisture devices and meteorological data were used for comparison. Moreover, several calibration approaches, the role of vegetation cover and the transferability of calibration parameters to different times and locations were also evaluated. Observations suggest that GANS can overcome the lack of data for hydrological processes at the intermediate scale. Soil moisture from GANS agreed quantitatively with mean values derived from a network of classical devices under vegetated and non-vegetated conditions. The GANS approach responded well to precipitation events through summer and autumn, but soil water content estimations were affected by water stored in snow and partly in biomass. Thus, when calibration parameters are transferred to different crops (e.g. from sunflower to rye), the changes in biomass water have to be considered. Finally, these results imply that GANS measurements can be a reliable ground-truthing possibility as well as an additional constraint for hydrological models. References (1) Rivera Villarreyes, C.A., Baroni, G., and Oswald, S.E. (2011): Integral quantification of seasonal soil moisture changes in farmland by cosmic-ray neutrons, Hydrol. Earth Syst. Sci., 15, 3843-3859. (2) Rivera Villarreyes, C.A., Baroni, G., and Oswald, S.E. (2012): Evaluation of the Ground Albedo Neutron Sensing (GANS) method for soil moisture estimations in different crop fields (in preparation for Hydrological Processes). (3) Zreda, M., Shuttleworth, W.J., Zeng, X., Zweck, C., Desilets, D., Franz, T., Rosolem, R., and Ferre, T.P.A. (2012): COSMOS: The COsmic-ray Soil Moisture Observing System. Hydrol. Earth Syst. Sci. Discuss., 9, 4505-4551.
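
    The neutron-to-moisture conversion behind this method is commonly written with the calibration function of Desilets et al. (2010); the sketch below inverts it, with the standard literature coefficients quoted as assumptions rather than as values calibrated in this study.

    ```python
    # Shape of the neutron-moisture calibration curve (coefficients follow
    # Desilets et al., 2010, and are quoted here as assumptions):
    A0, A1, A2 = 0.0808, 0.372, 0.115

    def soil_moisture_from_neutrons(n, n0):
        """Invert corrected neutron counts n to soil water content, given
        the site-specific dry-count parameter n0; valid only for n/n0 > A1."""
        return A0 / (n / n0 - A1) - A2
    ```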

  17. Finding diversity for building one-day ahead Hydrological Ensemble Prediction System based on artificial neural network stacks

    NASA Astrophysics Data System (ADS)

    Brochero, Darwin; Anctil, Francois; Gagné, Christian; López, Karol

    2013-04-01

    In this study, we addressed the application of Artificial Neural Networks (ANN) in the context of Hydrological Ensemble Prediction Systems (HEPS). Such systems have become popular in the past years as a tool to include the forecast uncertainty in the decision-making process. HEPS fundamentally considers the uncertainty cascade model [4] for uncertainty representation. Analogously, the machine learning community has proposed models of multiple classifier systems that take into account the variability in datasets, input space, model structures, and parametric configuration [3]. This approach is based primarily on the well-known "no free lunch" theorem [1]. Consequently, we propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). Thus, we promote an ANN prediction stack in which each predictor is trained on input spaces defined by applying IVS to different stratified sub-samples. All this, added to the inherent variability of classical ANN optimization, leads us to our ultimate goal: diversity in the prediction, defined as the complementarity of the individual predictors. The stratification application on the 12 basins used in this study, which originate from the second and third workshops of the MOPEX project [2], shows that the informativeness of the data is far more important than the quantity used for ANN training. Additionally, the input space variability leads to ANN stacks that outperform an ANN stack model trained with 100% of the available information but with a random selection of the dataset used in the early stopping method (scenario R100P). The results show that, from a deterministic view, the main advantage lies in the efficient selection of the training information, which is an equally important concept for the calibration of conceptual hydrological models. On the other hand, the diversity achieved is reflected in a substantial improvement in the scores that define the probabilistic quality of the HEPS. Except for one basin showing atypical behaviour, and two other basins that illustrate the difficulty of prediction in semiarid areas, the average gain obtained with the new scheme relative to the R100P scenario is around 8%, 134%, 72%, and 69% for the mean CRPS, the mean ignorance score, the MSE evaluated on the reliability diagram, and the delta ratio, respectively. Note that in all cases the CRPS is less than the MAE, which indicates that the ensemble of neural networks performs better when taken as a whole than when aggregated into a single averaged predictor (a sketch of the ensemble CRPS is given after the references below). Finally, we consider it appropriate to complement the proposed methodology on two fronts: one deterministic, in which the prediction could come from a Bayesian combination, and the second probabilistic, in which score optimization could be based on an "overproduce and select" process. Also, in the case of the basins in semiarid areas, the results found by de Vos [5] with echo state networks using the same database analysed in this study lead us to consider the need to include various structures in the ANN stack. References [1] Corne, D. W. and Knowles, J. D.: No free lunch and free leftovers theorems for multiobjective optimisation problems. in Proceedings of the 2nd international conference on Evolutionary multi-criterion optimization, Springer-Verlag, 327-341, 2003.
[2] Duan, Q.; Schaake, J.; Andréassian, V.; Franks, S.; Goteti, G.; Gupta, H.; Gusev, Y.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T. and Wood, E.: Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops. J. Hydrol., 320, 3-17, 2006. [3] Kuncheva, L. I.: Combining Pattern Classifiers: Methods and Algorithms, Wiley-Interscience, 2004. [4] Pappenberger, F., Beven, K. J., Hunter, N. M., Bates, P. D., Gouweleeuw, B. T., Thielen, J., and de Roo, A. P. J.: Cascading model uncertainty from medium range weather forecasts (10 days) through a rainfall-runoff model to flood inundation predictions within the European Flood Forecasting System (EFFS), Hydrol. Earth Syst. Sci., 9, 381-393, 2005. [5] de Vos, N. J.: Reservoir computing as an alternative to traditional artificial neural networks in rainfall-runoff modelling, Hydrol. Earth Syst. Sci. Discuss., 9, 6101-6134, 2012.
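
    Since the comparison of the CRPS with the MAE carries the ensemble-versus-average argument above, a compact empirical CRPS for a finite ensemble (the Gneiting-Raftery form) may help; the function and variable names are ours.

    ```python
    import numpy as np

    def crps_ensemble(members, obs):
        """Empirical CRPS of an ensemble against a single observation:
        mean|X - y| - 0.5 * mean|X - X'| over all member pairs."""
        x = np.asarray(members, dtype=float)
        return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    ```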

  18. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264 - 276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
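
    To make the sampling idea concrete, here is a deliberately simplified 1-D sketch: each new value is drawn by scanning the training series for a location whose recent history resembles the last few simulated values. The window length, distance threshold and scan budget are illustrative placeholders, far short of the full multiple-point machinery of the cited papers.

    ```python
    import numpy as np

    def direct_sampling_1d(train, n_sim, m=3, thresh=0.1, max_scan=500, seed=0):
        """Toy 1-D Direct Sampling: each new value is obtained by scanning
        the training series for a position whose m preceding values resemble
        the m most recently simulated ones, then copying the value there."""
        rng = np.random.default_rng(seed)
        train = np.asarray(train, dtype=float)
        scale = train.std() or 1.0                 # normalise the distance
        sim = list(train[:m])                      # seed with a training pattern
        for _ in range(n_sim):
            pattern = np.array(sim[-m:])
            best_j, best_d = m, np.inf
            for j in rng.integers(m, len(train), size=max_scan):
                d = np.mean(np.abs(train[j - m:j] - pattern)) / scale
                if d < best_d:
                    best_j, best_d = j, d
                if d <= thresh:                    # accept the first good match
                    break
            sim.append(train[best_j])
        return np.array(sim[m:])
    ```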

  19. Modeling the potential impacts of climate change on the water table level of selected forested wetlands in the southeastern United States

    Treesearch

    Jie Zhu; Ge Sun; Wenhong Li; Yu Zhang; Guofang Miao; Asko Noormets; Steve G. McNulty; John S. King; Mukesh Kumar; Xuan Wang

    2017-01-01

    The southeastern United States hosts extensive forested wetlands, providing ecosystem services including carbon sequestration, water quality improvement, groundwater recharge, and wildlife habitat. However, these wetland ecosystems are dependent on local climate and hydrology, and are therefore at risk due to climate and land use change. This study develops site-...

  20. Adaptability of 14 tree species to two hydrol humic latosol soils in Hawaii.

    Treesearch

    Craig O. Whitesell; Myron O. Isherwood, Jr.

    1971-01-01

    Tree species capable of thriving on soils in high rainfall areas are needed in Hawaii for reforestation. The soils are highly leached and infertile. Two native and 12 introduced tree species were planted at two sites to determine adaptability. Survival, growth, vigor, and form were appraised 1 to 7 years after planting. Performance varied, both within and between species...

  1. Preparation of glass-forming materials from granulated blast furnace slag

    NASA Astrophysics Data System (ADS)

    Alonso, M.; Sáinz, E.; Lopez, F. A.

    1996-10-01

    Glass precursor materials, to be used for the vitrification of hazardous wastes, have been prepared from blast furnace slag powder through a sol-gel route. The slag is initially reacted with a mixture of alcohol (ethanol or methanol) and mineral acid (HNO3 or H2SO4) to give a sol principally consisting of Si, Ca, Al, and Mg alkoxides. Gelation is carried out with variable amounts of either ammonia or water. The gelation rate can be made as fast as desired by adding excess hydrolyzing agent or else by distilling the excess alcohol out of the alkoxide solution. The resulting gel is first dried at low temperature and ground. The powder thus obtained is then heat treated at several temperatures. The intermediate and final materials are characterized by thermal analysis, infrared (IR) spectroscopy, X-ray diffraction, scanning electron microscopy (SEM), and chemical analysis. From the results, the operating conditions yielding a variety of glass precursors differing in their composition are established. The method, in comparison with direct vitrification of slag, presents a number of advantages: (1) the glass precursor obtained devitrifies at higher temperatures; (2) it enables the adjustment, to a certain extent, of the chemical composition of the glass precursor; and (3) it permits recovering marketable materials at different stages of the process.

  2. What is the philosophy of modelling soil moisture movement?

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wu, Y.

    2009-12-01

    In the laboratory, soil moisture movement in different soil textures has been analysed. In field investigations at single sites, soil moisture movement in the root zone, vadose zone and shallow aquifer has been explored. In addition, on ground slopes, the interflow in the near-surface soil layers has been studied. Along regions near river reaches, the expansion and shrinkage of the saturated area due to rainfall occurrences have been observed. From those previous explorations of soil moisture movement, numerical models to represent this hydrologic process have been developed. However, due to the high heterogeneity and stratification of soil in a basin, modelling soil moisture movement is generally rather challenging. Normally, empirical equations or artificial manipulations are employed to adjust the soil moisture movement in various numerical models. In this study, we inspect the soil moisture movement equations used in a watershed model, SWAT (Soil and Water Assessment Tool) (Neitsch et al., 2005), to examine the limitations of our knowledge of this hydrologic process. Then, we adopt features of a topographic-information-based hydrologic model, TOPMODEL (Beven and Kirkby, 1979), to enhance the representation of soil moisture movement in SWAT. Basically, the results of the study reveal, to some extent, the philosophy of modelling soil moisture movement in numerical models, which will be presented at the conference. Beven, K.J. and Kirkby, M.J., 1979. A physically based variable contributing area model of basin hydrology. Hydrol. Sci. Bull., 24: 43-69. Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Williams, J.R. and King, K.W., 2005. Soil and Water Assessment Tool Theoretical Documentation, Grassland, Soil and Water Research Laboratory, Temple, TX.
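
    The central TOPMODEL ingredient referred to above is the topographic wetness index ln(a / tan β), where a is the specific upslope contributing area and β the local slope; a minimal rendering (names and guard values our own) is:

    ```python
    import numpy as np

    def topographic_wetness_index(spec_area, slope_rad):
        """TOPMODEL wetness index ln(a / tan(beta)): cells with large upslope
        contributing area and gentle slopes are predicted to saturate first."""
        tan_b = np.maximum(np.tan(slope_rad), 1e-6)   # guard against flat cells
        return np.log(np.maximum(spec_area, 1e-6) / tan_b)
    ```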

  3. Diclofenac degradation by heterogeneous photocatalysis with Fe3O4/Ti x O y /activated carbon fiber composite synthesized by ultrasound irradiation

    NASA Astrophysics Data System (ADS)

    Moreno-Valencia, E. I.; Paredes-Carrera, S. P.; Sánchez-Ochoa, J. C.; Flores-Valle, S. O.; Avendaño-Gómez, J. R.

    2017-11-01

    In this work, a photocatalytic system to degrade diclofenac was developed using a composite Fe3O4/Ti x O y on an activated carbon fiber. Diclofenac is widely used as an anti-inflammatory compound worldwide and constantly enters the environment as waste (Heberer 2002 J. Hydrol. 266 175-89), exceeding the permissible maximum concentration in wastewater (GEO-3 2002 Programa de las Naciones Unidas para el Medio Ambiente; Golet et al 2003 Environ. Sci. Technol. 37 3243-9; Oviedo et al 2010 Environ. Toxicol. Pharmacol. 29 9-43; Le-Minh et al 2010 Water Res. 44 4295-323; Legrini et al 1993 Chem. Rev. 93 671-98). The composite was synthesized by the sol-gel technique with and without ultrasound irradiation (Singh and Nakate 2014 J. Nanopart. 2014 326747). The solids were deposited by ultrasound irradiation on activated carbon fiber in order to optimize the diclofenac degradation. The solids were characterized by x-ray diffraction (XRD), nitrogen physisorption (BET), and scanning electron microscopy with EDS microanalysis (SEM-EDS). The crystal size was calculated with the Debye-Scherrer equation, and the band gap values by the diffuse reflectance method. The degradation process was followed by UV-vis spectroscopy (Rizzo et al 2009 Water Res. 43 979-88). It was found that with this synthesis method (ultrasound), textural properties such as porosity, specific surface area and morphology depend on the ultrasound irradiation. The proposed system, Fe3O4/titanium oxide hydrate, showed a better degradation profile than the TiO2 anatase phase; the increase in diclofenac degradation was attributed to the textural properties of the composite, which also avoids a filtering step, since separation can be achieved magnetically and/or by decantation.
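
    The Debye-Scherrer crystal-size estimate mentioned above is a one-line formula, D = Kλ / (β cos θ); a sketch with a Cu Kα wavelength and a shape factor K = 0.9 assumed (common choices, not stated in the abstract) could read:

    ```python
    import numpy as np

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
        """Debye-Scherrer crystallite size D = k*lambda / (beta*cos(theta)),
        with beta the peak width (FWHM) converted to radians."""
        theta = np.radians(two_theta_deg / 2.0)
        beta = np.radians(fwhm_deg)
        return k * wavelength_nm / (beta * np.cos(theta))    # size in nm
    ```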

  4. Advancing land surface model development with satellite-based Earth observations

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Dutra, Emanuel; Trigo, Isabel F.; Balsamo, Gianpaolo

    2017-04-01

    The land surface forms an essential part of the climate system. It interacts with the atmosphere through the exchange of water and energy and hence influences weather and climate, as well as their predictability. Correspondingly, the land surface model (LSM) is an essential part of any weather forecasting system. LSMs rely on parameters that are partly poorly constrained, owing to sparse land surface observations. Using newly available land surface temperature observations, we show in this study that novel satellite-derived datasets help to improve LSM configuration, and hence can contribute to improved weather predictability. We use the Hydrology Tiled ECMWF Scheme of Surface Exchanges over Land (HTESSEL) and validate it comprehensively against an array of Earth observation reference datasets, including the new land surface temperature product. This reveals satisfactory model performance in terms of hydrology, but poor performance in terms of land surface temperature. This is due to inconsistencies in process representations in the model, as identified from an analysis of perturbed parameter simulations. We show that HTESSEL can be calibrated more robustly with multiple instead of single reference datasets, as this mitigates the impact of the structural inconsistencies. Finally, performing coupled global weather forecasts, we find that a more robust calibration of HTESSEL also contributes to improved weather forecast skill. In summary, new satellite-based Earth observations are shown to enhance the multi-dataset calibration of LSMs, thereby improving the representation of insufficiently captured processes, advancing weather predictability and the understanding of climate system feedbacks. Orth, R., E. Dutra, I. F. Trigo, and G. Balsamo (2016): Advancing land surface model development with satellite-based Earth observations. Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-628
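
    One common way to operationalise calibration against several reference datasets, as advocated above, is to average errors normalised per dataset so that no single dataset dominates; the toy cost function below is entirely our construction, not the authors' procedure.

    ```python
    import numpy as np

    def multi_dataset_cost(sim, refs):
        """Average of RMSEs, each normalised by the variability of its own
        reference dataset, so that no single dataset dominates calibration.
        `sim` and `refs` map dataset names to aligned arrays."""
        costs = []
        for name, obs in refs.items():
            rmse = np.sqrt(np.nanmean((sim[name] - obs) ** 2))
            costs.append(rmse / np.nanstd(obs))
        return float(np.mean(costs))
    ```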

  5. A critical evaluation of the local-equilibrium assumption in modeling NAPL-pool dissolution

    NASA Astrophysics Data System (ADS)

    Seagren, Eric A.; Rittmann, Bruce E.; Valocchi, Albert J.

    1999-07-01

    An analytical modeling analysis was used to assess when local-equilibrium (LE) and nonequilibrium (NE) modeling approaches may be appropriate for describing nonaqueous-phase liquid (NAPL) pool dissolution. NE mass transfer between NAPL pools and groundwater is expected to affect the dissolution flux under conditions corresponding to values of Sh'St (the modified Sherwood number (Lx kl / Dz) multiplied by the Stanton number (kl / vx)) < ≈400. A small Sh'St can be brought about by one or more of: a large average pore water velocity (vx), a large transverse dispersivity (αz), a small pool length (Lx), or a small mass-transfer coefficient (kl). On the other hand, at Sh'St > ≈400, the NE and LE solutions converge, and the LE assumption is appropriate. Based on typical groundwater conditions, many cases of interest are expected to fall in this range. The parameter with the greatest impact on Sh'St is kl. The NAPL pool mass-transfer coefficient correlation of Pfannkuch [Pfannkuch, H.-O., 1984. Determination of the contaminant source strength from mass exchange processes at the petroleum-ground-water interface in shallow aquifer systems. In: Proceedings of the NWWA/API Conference on Petroleum Hydrocarbons and Organic Chemicals in Ground Water—Prevention, Detection, and Restoration, Houston, TX. Natl. Water Well Assoc., Worthington, OH, Nov. 1984, pp. 111-129.] was evaluated using the toluene pool data from Seagren et al. [Seagren, E.A., Rittmann, B.E., Valocchi, A.J., 1998. An experimental investigation of NAPL-pool dissolution enhancement by flushing. J. Contam. Hydrol., accepted.]. Dissolution flux predictions made with kl calculated using the Pfannkuch correlation were similar to the LE model predictions, and deviated systematically from predictions made using the average overall kl = 4.76 m/day estimated by Seagren et al. [1998, op. cit.] and from the experimental data for vx > 18 m/day. The Pfannkuch correlation kl was too large for vx > ≈10 m/day, possibly because of the relatively low Peclet number data used by Pfannkuch [1984, op. cit.]. The results of the modeling analyses were evaluated by comparing pool dissolution fluxes from the literature to each other and to the corresponding LE and NE model predictions. The LE model described most of the pool dissolution flux data reasonably well, given the uncertainty in some of the model parameter estimates, suggesting that the LE model can be a useful tool for describing steady-state NAPL pool dissolution under some conditions. However, a conclusive test of the LE assumption was difficult due to the limited range of experimental conditions covered and the uncertainties in some of the model input parameters, including the mass-transfer coefficient correlation required for the NE model.
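
    The LE/NE criterion above reduces to a single dimensionless product, which is straightforward to compute; the illustrative values in the example are ours, not from the paper.

    ```python
    def sherwood_stanton(L_x, k_l, D_z, v_x):
        """Sh'St = (L_x*k_l/D_z) * (k_l/v_x); values well above ~400 indicate
        that the local-equilibrium description of pool dissolution holds."""
        return (L_x * k_l / D_z) * (k_l / v_x)

    # Illustrative numbers only (not from the paper): a 5 m pool with
    # k_l = 0.5 m/day, D_z = 1e-4 m2/day, v_x = 10 m/day gives 1250 > 400.
    print(sherwood_stanton(5.0, 0.5, 1e-4, 10.0))
    ```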

  6. Attribution of hydrological change using the Method of Multiple Working Hypotheses

    NASA Astrophysics Data System (ADS)

    Harrigan, Shaun

    2017-04-01

    The methods we have developed for managing our long-term water supply and for protection from extreme hydrological events such as droughts and floods have been founded on the assumption that the hydrological cycle operates under natural conditions. However, it is increasingly recognised that humans have the potential to induce significant change in almost every component of the hydrological cycle, for example through climate change, land-use change, and river engineering. Statistical detection of change in streamflow, outside that of natural variability, is an important scientific endeavour, but it does not tell us anything about the drivers of change. Attribution is the process of establishing the most likely cause(s) of a detected change - the why. Attribution is complex due to the integrated nature of streamflow and the proliferation of multiple possible drivers. It is perhaps this complexity, combined with the few proven theoretical approaches to this problem in hydrology, that has led others to call for "more efforts and scientific rigour" (Merz et al., 2012). It is easier to limit the cause of a detected change to a single driver, or to use simple correlation analysis alone as evidence of causation. It is convenient when the direction of a change in streamflow is consistent with what is expected from a well-known driver such as climate change. Over a century ago, Thomas Chamberlin argued that these types of issues were common in many disciplines, given how the scientific method is approached in general. His 1890 article introduces the Method of Multiple Working Hypotheses (MMWH) in an attempt to limit confirmation bias and strive for increased objectivity. This presentation will argue that the MMWH offers an attractive theoretical approach to the attribution of hydrological change in modern hydrology, as demonstrated through a case study of a well-documented change point in streamflow within the Boyne Catchment in Ireland. Further Reading Chamberlin, T. C.: The Method of Multiple Working Hypotheses, Science (old series), 15(366), 92-96, doi:10.1126/science.ns-15.366.92, 1890. Harrigan, S., Murphy, C., Hall, J., Wilby, R. L. and Sweeney, J.: Attribution of detected changes in streamflow using multiple working hypotheses, Hydrol. Earth Syst. Sci., 18(5), 1935-1952, doi:10.5194/hess-18-1935-2014, 2014. Merz, B., Vorogushyn, S., Uhlemann, S., Delgado, J. and Hundecha, Y.: HESS Opinions "More efforts and scientific rigour are needed to attribute trends in flood time series," Hydrol. Earth Syst. Sci., 16(5), 1379-1387, doi:10.5194/hess-16-1379-2012, 2012.

  7. Bio-ISRU Concepts using microorganisms to release O2 and H2 on Moon and Mars

    NASA Astrophysics Data System (ADS)

    Slenzka, Klaus; Kempf, Juergen

    Since space exploration missions began, numerous spacecraft have been sent to space for the examination of other planets. One limiting factor for the endurance of such missions is the finite energy supply available to run the devices and motors of the spacecraft as well as local habitats. The high weight and volume of fuels make the use of local resources necessary to allow extension to long-term missions. Nature demonstrates how to survive in extreme environments. Some well-adapted microorganisms like Chlamydomonas reinhardtii even release elementary hydrogen from water under special nutrition, which might be used to run fuel cells and provide electric energy. The same organism releases oxygen by photosynthesis under standard nutrition, the counterpart of hydrogen for operating fuel cells. The planets of interest are covered by potentially toxic soil called "regolith". Lunar regolith is known to be extremely aggressive and to inhibit cell growth, not only because of its sharp edges. First studies on the tolerance of Chl. reinhardtii to lunar soil simulant have shown promising results. The single cells surround the substrate without any negative influence. A three-dimensional tissue-like matrix was built by the proliferating, now adhering, microalgae cells and the substrate. The photosynthesis rate was not negatively influenced by the soil. This could enable Chl. reinhardtii to become a first settler organism on the lunar surface, perhaps a first step of terraforming to allow the growth of higher organisms. Lunar regolith consists of several components. Oxygen bound in minerals in particular plays an outstanding role for industrial use. Some microorganisms of the proteobacteria type reduce iron oxides to gain oxygen under anaerobic conditions while producing electric energy simultaneously. For faster electron transfer, Shewanella bacteria build filamentous nanowire-like structures to connect one cell to another. A bioreactor hosting specific microorganisms might be run to provide oxygen to the life support system embedded in a permanent Moon or Mars base. This method offers low-energy oxygen release, a serious alternative to the energy-intensive oxygen separation of the ilmenite process, fluorination process, melting hydrolysis, vacuum distillation or photodissociation, respectively. Not only the oxygen production of the biological processes should be in the focus of space application; the metal-oxide-reducing component of the process might also run batteries to provide energy to devices of a Moon or Mars base.

  8. A method to calibrate channel friction and bathymetry parameters of a Sub-Grid hydraulic model using SAR flood images

    NASA Astrophysics Data System (ADS)

    Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.

    2015-12-01

    Synthetic Aperture Radar (SAR) satellites are capable of all-weather day and night observations that can discriminate between land and smooth open water surfaces over large scales. Because of this, there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular of fluvial flood inundation mechanisms. Past studies show that integrating SAR-derived data with hydraulic models can improve simulations of flooding. However, while much of this work focusses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models, but there are only a small number of ways to obtain bathymetry information where no direct measurements exist. Spatially distributed river depths are also rarely available. We present a methodology for concurrently calibrating model average channel depth and roughness parameters using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from the archive, but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini et al. [2] (2013) to generate binary flood maps. A unique feature of the calibration methodology is the additional use of parameter 'identifiability' to locate the parameters with higher accuracy within a pre-assigned range (adopting the DYNIA method proposed by Wagener et al. [3], 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'. IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.
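
    Scoring a parameter set against a SAR-derived binary flood map is often done with an intersection-over-union style fit measure; a minimal version (our formulation, not necessarily the exact measure used by the authors) is:

    ```python
    import numpy as np

    def flood_fit(model_wet, sar_wet):
        """Intersection-over-union of two binary flood maps; 1 is a perfect
        match of modelled and SAR-observed extent, 0 is no overlap."""
        inter = np.logical_and(model_wet, sar_wet).sum()
        union = np.logical_or(model_wet, sar_wet).sum()
        return inter / union if union else np.nan
    ```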

  9. The total probabilities from high-resolution ensemble forecasting of floods

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2015-04-01

    Ensemble forecasting has long been used in meteorological modelling to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. However, some of the post-processing methods will need modifications when the forecasts are regionalized outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We will also look into different methods for handling the non-normality of runoff and the effect on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005. Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. Sci., 10(2), 277-287, 2006.
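
    For reference, the EMOS predictive distribution of Gneiting et al. (2005), in its form for exchangeable ensemble members, is a normal distribution whose mean and variance are affine in the ensemble mean and variance; a minimal sketch (the coefficients a, b, c, d would come from CRPS minimisation, which is omitted here) is:

    ```python
    import numpy as np

    def emos_normal(ens, a, b, c, d):
        """EMOS for exchangeable members: N(a + b*mean(ens), c + d*var(ens)).
        Returns the predictive mean and standard deviation."""
        ens = np.asarray(ens, dtype=float)
        mu = a + b * ens.mean()
        sigma = np.sqrt(c + d * ens.var())
        return mu, sigma
    ```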

  10. A multidisciplinary investigation of groundwater fluctuations and their control on river chemistry - Insights from river dissolved concentrations and Li isotopes during flood events

    NASA Astrophysics Data System (ADS)

    Kuessner, M.; Bouchez, J.; Dangeard, M.; Bodet, L.; Thiesson, J.; Didon-Lescot, J. F.; Frick, D. A.; Grard, N.; Guérin, R.; Domergue, J. M.; Gaillardet, J.

    2017-12-01

    Water flow exerts a strong control on weathering reactions in the Critical Zone (CZ). The relationships between hydrology and river chemistry have been widely studied over the past decades [1]. Solute export responds strongly to storm events [2], and investigating the concentration and isotope composition of trace elements in river catchments can advance our understanding of the processes governing water-rock interactions and provide information on water flow paths during these "hot moments". In particular, lithium (Li) and its isotopes are sensitive to the balance between mineral dissolution and precipitation in the subsurface and are therefore a powerful tool to characterize the response of chemical weathering to hydrology [3]. Hence, high-frequency stream chemistry yields valuable insight into the hydrological processes within the catchment during "hot moments". This study focuses on a CZ Observatory (OHMCV, part of the French Research Infrastructure OZCAR). The granitic Sapine catchment (0.54 km2, southern France) is subject to large rain events and is therefore an appropriate location to study stormflows. Here we combine results from high-frequency stream water sampling during rain events with time-lapse seismic imaging to monitor the changes in aquifer properties [4]. The relationships between concentrations and discharge indicate differential responses of dissolved elements to the hydrological forcing. In particular, systematic changes are observed for Li and its isotopes as a function of water discharge, suggesting maximum secondary mineral formation at intermediate discharge. We suggest that Li dynamics are chiefly influenced by the depth at which water is flowing, with, e.g., dissolution of primary minerals in deeper groundwater flows and water-secondary mineral interaction at shallower depths. The combination of elemental concentrations and Li isotopes in the river dissolved load, tracing chemical weathering, with hydrogeophysical methods mapping water flows and pools provides us with a time-resolved image of the CZ, improving our knowledge of the impact of hydrological changes on the chemical mass budgets in catchments. [1] Maher et al. (2011), Earth Planet. Sci. Lett. [2] Kirchner et al. (2010), Hydrol. Processes. [3] Liu et al. (2015), Earth Planet. Sci. Lett. [4] see poster by M. Dangeard et al.
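
    A common first look at the concentration-discharge relationships mentioned above is the power-law fit C = aQ^b, whose exponent separates chemostatic from dilution behaviour; the small helper below (ours, purely illustrative) fits it in log-log space.

    ```python
    import numpy as np

    def cq_exponent(conc, q):
        """Fit log C = log a + b log Q; b near 0 suggests chemostatic
        behaviour, b < 0 dilution of the solute at high flow."""
        b, log_a = np.polyfit(np.log(q), np.log(conc), 1)
        return b, np.exp(log_a)
    ```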

  11. Renewable synthetic diesel fuel from triglycerides and organic waste materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillard, J.C.; Strassburger, R.S.

    1986-03-01

    A renewable, synthetic diesel fuel has been developed that employs ethanol and organic waste materials. These organic materials, such as soybean oil or animal fats, are hydrolyzed to yield a mixture of solid soap-like materials and glycerol. These soaps, now soluble in ethanol, are blended with ethanol; the glycerol is nitrated and added, as is castor oil when necessary. The synthetic fuel is tailored to match petroleum diesel fuel in viscosity, lubricity and cetane quality and, therefore, does not require any engine modifications. Testing in a laboratory engine and in a production Oldsmobile Cutlass has revealed that this synthetic fuel is superior to petroleum diesel fuel in vehicle efficiency, cetane quality, combustion noise, cold start characteristics, exhaust odor and emissions. Performance characteristics are indistinguishable from those of petroleum diesel fuel. The soaps are added to improve the calorific value, lubricity and cetane quality of the ethanol. The glycerol from the hydrolysis process is nitrated and added to the ethanol as an additional cetane quality improver. Castor oil is added to the fuel when necessary to match the viscosity and lubricity of petroleum diesel fuel as well as to act as a corrosion inhibitor, thereby precluding any engine modifications. The cetane quality of the synthetic fuel is better than that of petroleum diesel as the fuel carries its own oxygen. The synthetic fuel is also completely miscible with petroleum diesel.

  12. Chemical and Physical Properties, Safety and Application of Partially Hydrolized Guar Gum as Dietary Fiber

    PubMed Central

    Yoon, Seon-Joo; Chu, Djong-Chi; Raj Juneja, Lekh

    2008-01-01

    The ideal water-soluble dietary fiber for the fiber-enrichment of foods must be very low in viscosity, tasteless, odorless, and should produce clear solutions in beverages. Partially hydrolyzed guar gum (PHGG), produced from guar gum by an enzymatic process, has the same chemical structure as intact guar gum but less than one-tenth the original molecular length, which makes it suitable for use as a film former, foam stabilizer and swelling agent. The viscosity of PHGG is about 10 mPa·s in 5% aqueous solution, whereas a 1% solution of guar gum ranges from 2,000 to 3,000 mPa·s. In addition, PHGG is highly stable against low pH, heat, acid and digestive enzymes. For these reasons, PHGG appears to be one of the most beneficial dietary fiber materials. It also shows interesting physiological functions while still fully exerting the nutritional function of a dietary fiber. PHGG has therefore been used primarily for nutritional purposes and has become a fully integrated food material that does not alter the rheology, taste, texture or color of final products. PHGG, named Benefiber® in the USA, has a self-affirmation of GRAS status for standard grade PHGG. PHGG named Sunfiber® is now being used in various beverages, food products and medicinal foods as a safe, natural and functional dietary fiber all over the world. PMID:18231623

  13. Synthesis of Organic Compounds (Selected Articles)

    DTIC Science & Technology

    1990-10-03

    4 CH2=CHCH2OH + TiCl4 + 4 NH3 → (CH2=CHCH2O)4Ti + 4 NH4Cl. The allyl ester of orthotitanic acid is obtained for the first time [1]. The proposed method is based on the...reaction of allyl alcohol with titanium tetrachloride in the presence of ammonia in the medium of benzene. DESCRIPTION OF SYNTHESIS Synthesis is...point 141-142° at 1 mm: it is hygroscopic and easily hydrolyzed. NOTE Synthesis can be carried out in the absence of benzene in the medium of allyl

  14. Debriding effect of bromelain on firearm wounds in pigs.

    PubMed

    Hu, Wei; Wang, Ai-Min; Wu, Si-Yu; Zhang, Bo; Liu, Shuai; Gou, Yuan-Bin; Wang, Jian-Min

    2011-10-01

    Wound excision is the standard treatment for firearm wounds. However, achieving a satisfactory curative effect is difficult because of the traumatic mechanism of high-velocity projectiles. We propose a new therapy using topical bromelain as a supplement to wound incision for the debridement of firearm wounds. We clarified the debriding effect of bromelain on firearm wounds in pigs. In vitro, muscle tissues around the wound track and normal muscle were incubated in bromelain solutions of different concentrations. Tissue hydrolysis was estimated by measuring tissue weight and the release of total amino acids. In vivo, the hind limbs of 15 pigs were wounded with high-velocity projectiles. Five groups were classified as follows: wound excision (E), wound incision (I), bromelain (B), incision + bromelain (IB), and control (C). Debriding effectiveness was estimated using bacterial content, histopathologic examination, and wound healing time. In vitro, hydrolysis of wound tissue was significantly more intensive than that of normal tissue. Bromelain solution (10 mg/mL) hydrolyzed wound tissue rapidly with minimal proteolysis of normal tissue. In vivo, the wound-track bacterial content of group IB was similar to that of group E and was significantly lower than that of groups I, B, and C. The wound healing time of group IB was also shorter. Bromelain is effective in the debridement of uncomplicated firearm wounds if used as a supplement to simple wound incision. This new therapy shows notable advantages over conventional surgical debridement as it greatly simplifies the procedures.

  15. Modelling Soil Heat and Water Flow as a Coupled Process in Land Surface Models

    NASA Astrophysics Data System (ADS)

    García González, Raquel; Verhoef, Anne; Vidale, Pier Luigi; Braud, Isabelle

    2010-05-01

    To improve model estimates of soil water and heat flow by land surface models (LSMs), in particular in the first few centimetres of the near-surface soil profile, we have to consider in detail all the relevant physical processes involved (see e.g. Milly, 1982). Often, thermal and isothermal vapour fluxes in LSMs are neglected and the simplified Richards equation is used as a result. Vapour transfer may affect the water fluxes and heat transfer in LSMs used for hydrometeorological and climate simulations. Processes occurring in the top 50 cm of soil may be relevant for water and heat flux dynamics in the deeper layers, as well as for estimates of evapotranspiration and heterotrophic respiration, or even for climate and weather predictions. Water vapour transfer, which was not incorporated in previous versions of the MOSES/JULES model (Joint UK Land Environment Simulator; Cox et al., 1999), has now been implemented. Furthermore, we also assessed the effect of the soil vertical resolution on the simulated soil moisture and temperature profiles and the effect of the processes occurring at the upper boundary, mainly in terms of infiltration rates and evapotranspiration. SiSPAT (Simple Soil Plant Atmosphere Transfer Model; Braud et al., 1995) was initially used to quantify the changes that we expect to find when we introduce vapour transfer in JULES, involving parameters such as thermal vapour conductivity and diffusivity. This approach also allows us to compare JULES to a more complete and complex numerical model. Water vapour flux varied with soil texture, depth and soil moisture content, but overall our results suggested that water vapour fluxes change temperature gradients in the entire soil profile and introduce an overall surface cooling effect. Increasing the resolution smoothed and reduced temperature differences between liquid (L) and liquid/vapour (LV) simulations at all depths, and introduced a temperature increase over the entire soil profile. Thermal gradients rather than soil water potential gradients seem to cause the temporal and spatial (vertical) soil temperature variability. We conclude that a multi-layer soil configuration may improve soil water dynamics, heat transfer and the coupling of these processes, as well as evapotranspiration estimates and land surface-atmosphere coupling. However, a compromise should be reached between numerical and process-simulation aspects. References: Braud I., A.C. Dantas-Antonino, M. Vauclin, J.L. Thony and P. Ruelle, 1995: A Simple Soil Plant Atmosphere Transfer model (SiSPAT), development and field verification, J. Hydrol., 166: 213-250. Cox, P.M., R.A. Betts, C.B. Bunton, R.L.H. Essery, P.R. Rowntree, and J. Smith (1999), The impact of new land surface physics on the GCM simulation of climate and climate sensitivity. Clim. Dyn., 15, 183-203. Milly, P.C.D., 1982. Moisture and heat transport in hysteretic inhomogeneous porous media: a matric head-based formulation and a numerical model, Water Resour. Res., 18: 489-498.
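
    The liquid/vapour split discussed above is often written in a Philip-de Vries form, with an isothermal (moisture-gradient) and a thermal (temperature-gradient) vapour flux component; in the sketch below the diffusivities are constant placeholders, whereas in reality they are strong functions of moisture and temperature.

    ```python
    import numpy as np

    def vapour_flux(theta, temp, dz, d_theta_v=1e-9, d_t_v=1e-11):
        """Philip-de Vries style split of the soil vapour flux into an
        isothermal (moisture-gradient) and a thermal (temperature-gradient)
        part; the constant diffusivities are illustrative placeholders."""
        return -d_theta_v * np.gradient(theta, dz) - d_t_v * np.gradient(temp, dz)
    ```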

  16. An analytical solution of groundwater level fluctuation in a U-shaped leaky coastal aquifer

    NASA Astrophysics Data System (ADS)

    Huang, Fu-Kuo; Chuang, Mo-Hsiung; Wang, Shu-chuan

    2017-04-01

    Tide-induced groundwater level fluctuations in coastal aquifers have attracted much attention in past years, especially regarding the impact of the coastline shape, multi-layered leaky aquifer systems, and the anisotropy of aquifers. In this study, a homogeneous but anisotropic multi-layered leaky aquifer system with a U-shaped coastline is considered, where the subsurface system consists of an unconfined aquifer, a leaky confined aquifer, and a semi-permeable layer between them. The analytical solution of the model obtained herein may be considered an extension of two earlier solutions: one developed by Huang et al. (Huang et al. Tide-induced groundwater level fluctuation in a U-shaped coastal aquifer, J. Hydrol. 2015; 530: 291-305) for two-dimensional interacting tidal waves bounded by three water-land boundaries, and the other by Li and Jiao (Li and Jiao. Tidal groundwater level fluctuations in L-shaped leaky coastal aquifer system, J. Hydrol. 2002; 268: 234-243) for two-dimensional interacting tidal waves in a leaky coastal aquifer system adjacent to a cross-shore estuary. In this research, the effects of the leakage and storativity of the semi-permeable layer on the amplitude and phase shift of the tidal head fluctuation, and the influence of the anisotropy of the aquifer, are all examined for the U-shaped leaky coastal aquifer. Some existing solutions in the literature can be regarded as special cases of the present solution if the aquifer system is isotropic and non-leaky. The results obtained will be beneficial to coastal development and water resources management.
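
    The qualitative behaviour that such solutions generalise can be seen in the classical one-dimensional confined-aquifer result of Ferris (1951), in which the tidal amplitude decays and the phase lags with distance from the coast; a sketch (ours, not the paper's two-dimensional leaky solution) is:

    ```python
    import numpy as np

    def tidal_head(x, t, amp, omega, S, T):
        """Ferris (1951): tidal head in a 1-D confined coastal aquifer decays
        as exp(-k*x) and lags by k*x, with k = sqrt(omega*S/(2*T))."""
        k = np.sqrt(omega * S / (2.0 * T))
        return amp * np.exp(-k * x) * np.cos(omega * t - k * x)
    ```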

  17. Observation and modelling of stable isotopes in precipitation for midlatitude weather systems in Melbourne, Australia

    NASA Astrophysics Data System (ADS)

    Barras, Vaughan; Simmonds, Ian

    2010-05-01

    The application of stable water isotopes as tracers of moisture throughout the hydrological cycle is often hindered by the relatively coarse temporal and spatial resolution of observational data. Intensive observation periods (IOPs) of isotopes in precipitation have been valuable in this regard, enabling the quantification of the effects of vapour recycling, convection, cloud top height and droplet reevaporation (Dansgaard, 1953; Miyake et al., 1968; Gedzelman and Lawrence, 1982; 1990; Pionke and DeWalle, 1992; Risi et al., 2008; 2009) and have been used as a basis to develop isotope models of varying complexity (Lee and Fung, 2008; Bony et al., 2008). This study took a unified approach combining observation and modelling of stable isotopes in precipitation in an investigation of three key circulation types that typically bring rainfall to southeastern Australia. The observational component of this study involved the establishment of the Melbourne University Network of Isotopes in Precipitation (MUNIP). MUNIP was devised to sample rainwater simultaneously at a number of collection sites across greater Melbourne to record the spatial and temporal isotopic variability of precipitation during the passage of particular events. Samples were collected at half-hourly intervals for three specific rain events referred to as (1) mixed-frontal, (2) convective, and (3) stratiform. It was found that the isotopic content for each event varied over both high and low frequencies due to influences from local changes in rain intensity and large scale rainout, respectively. Of particular note was a positive relationship between deuterium excess and rainfall amount under convective conditions. This association was less well defined for stratiform rainfall. As a supplement to the data coverage of the observations, the events were simulated using a version of NCAR CAM3 running with an isotope hydrology scheme. This was done by periodically nudging the model dynamics with data from the NCEP Reanalysis (Noone, 2006). Results from the simulations showed that the model represented well the large scale evolution of vapour profiles of deuterium excess and 18O for the mixed-frontal and stratiform events. Reconstruction of air mass trajectories provided further detail of the evolution and structure of the vapour profiles, revealing a convergence of air masses from different source regions for the mixed-frontal event. By combining observations and modelling in this way, much detail of the structure and isotope moisture history of the observed events was provided that would be unavailable from the sampling of precipitation alone. References Bony, S., C. Risi, and F. Vimeux (2008), Influence of convective processes on the isotopic composition (δ18O and δD) of precipitation and water vapor in the tropics: 1. Radiative-convective equilibrium and Tropical Ocean-Global Atmosphere-Coupled Ocean-Atmosphere Response (TOGA-COARE) simulations, J. Geophys. Res., 113, D19305, doi:10.1029/2008JD009942. Dansgaard, W. (1953), The abundance of 18O in atmospheric water and water vapor. Tellus, 5, 461-469. Gedzelman, S. D., and J. R. Lawrence (1982), The isotopic composition of cyclonic precipitation. J. App. Met., 21, 1385-1404. Gedzelman, S. D., and J. R. Lawrence (1990), The isotopic composition of precipitation from two extratropical cyclones, Mon. Weather Rev., 118, 495-509. Lee, J., and I. Fung (2008), 'Amount effect' of water isotopes and quantitative analysis of post-condensation processes, Hydrol. Process., 22, 1-8.
Miyake, Y., O. Matsubaya, and C. Nishihara (1968), An isotopic study on meteoric precipitation, Pap. Meteorol. Geophys., 19, 243-266. Noone, D. (2006), Isotopic composition of water vapor modeled by constraining global climate simulations with reanalyses, in Research activities in atmospheric and oceanic modeling, J. Côté (ed.), Report No. 36, WMO/TD-No. 1347, p. 2.37-2.38. Pionke, H. B., and D. R. DeWalle (1992), Intra- and inter-storm 18O trends for selected rainstorms in Pennsylvania. J. Hydrol., 138, 131-143. Risi, C., S. Bony, and F. Vimeux (2008), Influence of convective processes on the isotopic composition (δ18O and δD) of precipitation and water vapor in the tropics: 2. Physical interpretation of the amount effect. J. Geophys. Res., 113, D19306, doi:10.1029/2008JD009943. Risi, C., S. Bony, F. Vimeux, M. Chong, and L. Descroix (2009), Evolution of the water stable isotopic composition of the rain sampled along Sahelian squall lines, Q. J. Roy. Meteor. Soc., doi:10.1002/qj.485, (in press).
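
    The large-scale rainout referred to above is classically described by Rayleigh distillation; as a reference point, the sketch below evolves the isotopic composition of the vapour remaining in an air mass as it rains out (the default fractionation factor is an assumption, roughly that of 18O/16O near 20 °C).

    ```python
    def rayleigh_delta(delta0, f, alpha=1.0098):
        """Rayleigh distillation: delta (permil) of the vapour remaining in
        an air mass when a fraction f of the initial vapour is left; alpha
        is the equilibrium liquid-vapour fractionation factor."""
        r = (1.0 + delta0 / 1000.0) * f ** (alpha - 1.0)
        return (r - 1.0) * 1000.0
    ```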

  18. One-day-ahead streamflow forecasting via super-ensembles of several neural network architectures based on the Multi-Level Diversity Model

    NASA Astrophysics Data System (ADS)

    Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois

    2015-04-01

    Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members with different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi-Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, and parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis that ensembles of Neural Network (NN) structures minimise the generalization error. We used the MLDM to evaluate two different scenarios: (i) ensembles from a single NN architecture, and (ii) a super-ensemble built by combining sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmarks. Six architectures are evaluated: a FeedForward NN (FFNN) trained with the Levenberg-Marquardt algorithm (Hagan et al., 1996), an FFNN trained with SCE (Duan et al., 1993), a Recurrent NN trained with a complex method (Weins et al., 2008), a Dynamic NARX NN (Leontaritis and Billings, 1985), an Echo State Network (ESN), and a leak-integrator neuron network (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture separately performs an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009), using the mean square error as objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble was done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged streamflow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables; they were selected among the first three variables in 66, 45, and 28 of the 72 scenarios, respectively. A relationship between the aridity index (Arora, 2002) and NN performance showed that wet basins are more easily modelled than dry basins. The Nash-Sutcliffe (NS) efficiency criterion was used to evaluate the performance of the models. Test results showed that in 9 of the 12 basins, the mean sub-ensemble performance was better than that reported by Vos (2013). Furthermore, in 55 of 72 cases (6 NN structures x 12 basins) the mean sub-ensemble performance was better than the best individual performance, and in 10 basins the performance of the mean super-ensemble was better than that of the best individual super-ensemble member. Members of the ESN and L-ESN sub-ensembles were also found to have very similar and good performance values. Regarding the mean super-ensemble performance, we obtained an average gain in performance of 17%, and found that PSS preserves sub-ensemble members from different NN structures, indicating the pertinence of diversity in the super-ensemble. Moreover, around 100 predictors from the different structures proved sufficient to optimize the super-ensemble. Although sub-ensembles of FFNN-SCE showed unstable performance, FFNN-SCE members were picked up several times in the final predictor selection. References Anctil, F., M. Filion, and J. Tournebize (2009). "A neural network experiment on the simulation of daily nitrate-nitrogen and suspended sediment fluxes from a small agricultural catchment". In: Ecol. Model. 220.6, pp. 879-887. Arora, V. K. (2002). "The use of the aridity index to assess climate change effect on annual runoff". In: J. Hydrol. 265.1-4, pp. 164-177. Brochero, D., F. Anctil, and C. Gagné (2011). "Simplifying a hydrological ensemble prediction system with a backward greedy selection of members - Part 1: Optimization criteria". In: Hydrol. Earth Syst. Sci. 15.11, pp. 3307-3325. Duan, Q., J. Schaake, V. Andréassian, S. Franks, G. Goteti, H. Gupta, Y. Gusev, F. Habets, A. Hall, L. Hay, T. Hogue, M. Huang, G. Leavesley, X. Liang, O. Nasonova, J. Noilhan, L. Oudin, S. Sorooshian, T. Wagener, and E. Wood (2006). "Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops". In: J. Hydrol. 320.1-2, pp. 3-17. Duan, Q., V. Gupta, and S. Sorooshian (1993). "Shuffled complex evolution approach for effective and efficient global minimization". In: J. Optimiz. Theory App. 76.3, pp. 501-521. Hagan, M. T., H. B. Demuth, and M. Beale (1996). Neural Network Design. 1st ed. PWS Publishing Co., p. 730. Kuncheva, L. I. (2004). Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, p. 350. Leontaritis, I. and S. Billings (1985). "Input-output parametric models for non-linear systems - Part I: deterministic non-linear systems". In: International Journal of Control 41.2, pp. 303-328. Lukosevicius, M. and H. Jaeger (2009). "Reservoir computing approaches to recurrent neural network training". In: Computer Science Review 3.3, pp. 127-149. McKee, T., N. Doesken, and J. Kleist (1993). "The Relationship of Drought Frequency and Duration to Time Scales". In: Eighth Conference on Applied Climatology. Vos, N. J. de (2013). "Echo state networks as an alternative to traditional artificial neural networks in rainfall-runoff modelling". In: Hydrol. Earth Syst. Sci. 17.1, pp. 253-267. Weins, T., R. Burton, G. Schoenau, and D. Bitner (2008). "Recursive Generalized Neural Networks (RGNN) for the Modeling of a Load Sensing Pump". In: ASME Joint Conference on Fluid Power, Transmission and Control.
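    The Nash-Sutcliffe efficiency used above to score members and (super-)ensembles is simple to compute. The following sketch, with a synthetic observed series and noisy members standing in for the NN outputs, illustrates why an ensemble mean can outperform the best individual member; all values are invented.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
obs = 5.0 + np.sin(np.linspace(0, 12 * np.pi, 365))       # synthetic "observed" flow
members = obs + rng.normal(0.0, 0.8, size=(100, 365))     # 100 noisy ensemble members

best_member = max(nse(obs, m) for m in members)
ensemble_mean = nse(obs, members.mean(axis=0))            # averaging cancels member noise
print(f"best member NSE: {best_member:.3f}, ensemble-mean NSE: {ensemble_mean:.3f}")
```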

  19. Detection of dominant runoff generation processes for catchment classification

    NASA Astrophysics Data System (ADS)

    Gioia, A.; Manfreda, S.; Iacobellis, V.; Fiorentino, M.

    2009-04-01

    The identification of similar hydroclimatic regions, in order to reduce the uncertainty of flood prediction in ungauged basins, represents one of the most exciting challenges faced by hydrologists in recent years (e.g., the IAHS Decade on Predictions in Ungauged Basins (PUB) - Sivapalan et al. [2003]). In this context, the investigation of the dominant runoff generation mechanisms may provide a strategy for catchment classification and for the identification of hydrologically homogeneous groups of basins. In particular, the present study focuses on two classical mechanisms responsible for runoff production: saturation excess and infiltration excess. In principle, the occurrence of either mechanism may be detected in the same basin, depending on the climatic forcing. Here the dynamics of runoff generation are investigated over a set of basins in order to identify the dynamics responsible for the transition between the two mechanisms and to recognize homogeneous groups of basins. We exploit a basin characterization obtained by means of a theoretical flood probability distribution, applied to a broad number of arid and humid river basins in the Southern Italy region, with the aim of describing the effect of different runoff production mechanisms on the generation of ordinary and extraordinary flood events. Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S. and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003-2012: Shaping an exciting future for the hydrological sciences, Hydrol. Sci. J., 48(6), 857-880, 2003.
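    The two runoff-production mechanisms contrasted above can be stated compactly per time step; the sketch below gives the textbook formulations. The threshold values are purely illustrative, and the study's actual detection goes through a theoretical flood probability distribution rather than these point equations.

```python
def infiltration_excess(rain_intensity, infil_capacity):
    """Hortonian runoff: rainfall rate in excess of the infiltration capacity."""
    return max(0.0, rain_intensity - infil_capacity)

def saturation_excess(rainfall, soil_deficit):
    """Dunne runoff: rainfall in excess of the remaining soil storage deficit."""
    return max(0.0, rainfall - soil_deficit)

# Hypothetical event: 30 mm/h on a soil infiltrating at most 10 mm/h,
# falling on a soil column 15 mm short of saturation
print(infiltration_excess(30.0, 10.0), saturation_excess(30.0, 15.0))
```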

  20. Progression of natural attenuation processes at a crude oil spill site: II. Controls on spatial distribution of microbial populations

    NASA Astrophysics Data System (ADS)

    Bekins, Barbara A.; Cozzarelli, Isabelle M.; Godsy, E. Michael; Warren, Ean; Essaid, Hedeff I.; Tuccillo, Mary Ellen

    2001-12-01

    A multidisciplinary study of a crude-oil-contaminated aquifer shows that the distribution of microbial physiologic types is strongly controlled by the aquifer properties and the location of the crude oil. The microbial populations of four physiologic types were analyzed together with permeability, pore-water chemistry, nonaqueous oil content, and extractable sediment iron. Microbial data from three vertical profiles through the anaerobic portion of the contaminated aquifer clearly show areas that have progressed from iron reduction to methanogenesis. These locations contain lower numbers of iron reducers and increased numbers of fermenters, with detectable methanogens. Methanogenic conditions exist both in the area contaminated by nonaqueous oil and below the oil, where high hydrocarbon concentrations correspond to local increases in aquifer permeability. The results indicate that a high contaminant flux, either from local dissolution or from advective transport, plays a key role in determining which areas first become methanogenic. Other important factors besides flux include the sediment Fe(II) content and proximity to the water table. In locations near a seasonally oscillating water table, methanogenic conditions exist only below the lowest typical water table elevation. In the 20 years since the oil spill occurred, a laterally continuous methanogenic zone has developed along a narrow horizon extending from the source area to 50-60 m downgradient. A companion paper [J. Contam. Hydrol. 53, 369-386] documents how the growth of the methanogenic zone results in expansion of the aquifer volume contaminated with the highest concentrations of benzene, toluene, ethylbenzene, and xylenes.

  1. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability density functions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We demonstrate this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model with natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady-state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length from ridges to channels within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs of hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power-law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes. We also explore the variability in hillslope length scales as a function of hillslope diffusivity coefficients and critical shear stress in natural landscapes, and show that we can infer signatures of dominant geomorphic processes by analyzing the characteristic topographic length scales present in topography. References: Beven, K. and Kirkby, M. J.: A physically based variable contributing area model of basin hydrology, Hydrol. Sci. Bull., 24, 43-69, 1979. Howard, A. D. (1994). A detachment-limited model of drainage basin evolution. Water Resources Research, 30(7), 2261-2285. Passalacqua, P., Do Trung, T., Foufoula-Georgiou, E., Sapiro, G., & Dietrich, W. E. (2010). A geometric framework for channel network extraction from lidar: Nonlinear diffusion and geodesic paths. Journal of Geophysical Research: Earth Surface, 115(F1). Sangireddy, H., Passalacqua, P., Stark, C. P. (2012). Multi-resolution estimation of lidar-DTM surface flow metrics to identify characteristic topographic length scales, EP13C-0859, AGU Fall Meeting 2012. Stark, C. P., & Stark, G. J. (2001). A channelization model of landscape evolution. American Journal of Science, 301(4-5), 486-512. Tucker, G. E., Catani, F., Rinaldo, A., & Bras, R. L. (2001). Statistical analysis of drainage density from digital terrain data. Geomorphology, 36(3), 187-202.
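    As a pointer to how the topoindex pdfs discussed above can be built from DTM derivatives, here is a minimal sketch of the Beven-Kirkby index on a toy grid. The array values are invented; a real analysis would compute contributing area and slope from the lidar DTM at each MRA resolution.

```python
import numpy as np

def topographic_index(specific_area, slope, eps=1e-6):
    """Beven-Kirkby topographic index ln(a / tan(beta)) per grid cell."""
    return np.log(np.maximum(specific_area, eps) / np.maximum(slope, eps))

# Hypothetical DTM derivatives: upslope area per unit contour width (m) and slope (tan beta)
area = np.array([[5.0, 40.0], [200.0, 1500.0]])
slope = np.array([[0.30, 0.12], [0.05, 0.01]])

ti = topographic_index(area, slope)
# pdf of the index at this resolution, as used for detecting scale breaks in the MRA
hist, edges = np.histogram(ti, bins=4, density=True)
```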

  2. Conceptualizing Peatlands in a Physically-Based Spatially Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Downer, Charles; Wahl, Mark

    2017-04-01

    As part of a research effort focused on climate change effects on permafrost near Fairbanks, Alaska, it became apparent that peat soils, overlain by thick sphagnum moss, had a considerable effect on the overall hydrology. Peatlands represent a confounding mixture of vegetation, soils, and water that presents challenges for conceptualizing and parametrizing hydrologic models. We employed the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model in our analysis of the Caribou-Poker Creek Experimental Watershed (CPCRW). GSSHA is a physically based, spatially distributed watershed model developed by the U.S. Army to simulate important streamflow-generating processes (Downer and Ogden, 2004). The model enables simulation of surface water and groundwater interactions, as well as soil temperature and frozen ground effects on subsurface water movement. The test site is a 104 km2 basin located in the Yukon-Tanana Uplands of the Northern Plateaus Physiographic Province, centered on 65˚10' N latitude and 147˚30' W longitude. The area lies above the Chatanika River floodplain and is characterized by rounded hilltops with gentle slopes and alluvium-floored valleys having minimal relief (Wahrhaftig, 1965), underlain by a mica schist of the Birch Creek formation (Rieger et al., 1972). The region has a cold continental climate characterized by short warm summers and long cold winters. Observed stream flows indicated a significant groundwater contribution, with sustained base flows even during dry periods. A site visit revealed the presence of surface water flows, indicating a mixed basin that would require both surface and subsurface simulation capability to properly capture the response. Soils in the watershed are predominately silt loam underlain by shallow fractured bedrock. Throughout much of the basin, a thick layer of live sphagnum moss and fine peat covers the ground surface. A restrictive layer of permafrost is found on north-facing slopes. The combination of thick moss and peat soils presented a conundrum in terms of conceptualizing the hydrology and identifying reasonable parameter ranges for physical properties. Various combinations of overland roughness, surface retention, and subsurface flow were used to represent the peatlands. The process yielded some interesting results that may shed light on the dominant hydrologic processes associated with peatlands, as well as on which hydrologic conceptualizations, simulation tools, and approaches are applicable in modeling peatland hydrology. Downer, C.W., Ogden, F.L., 2004. GSSHA: Model to simulate diverse stream flow producing processes. J. Hydrol. Eng., 161-174. Rieger, S., Furbush, C.E., Schoephorster, D.B., Summerfield Jr., H., Geiger, L.C., 1972. Soils of the Caribou-Poker Creeks Research Watershed, Interior Alaska. Hanover, New Hampshire. Wahrhaftig, C., 1965. Physiographic Divisions of Alaska. Washington, DC.

  3. Stepwise calibration procedure for regional coupled hydrological-hydrogeological models

    NASA Astrophysics Data System (ADS)

    Labarthe, Baptiste; Abasq, Lena; de Fouquet, Chantal; Flipo, Nicolas

    2014-05-01

    Stream-aquifer interaction is a complex process depending on regional and local processes. Indeed, the groundwater component of the hydrosystem and large-scale heterogeneities control the regional flows towards the alluvial plains and the rivers. Locally, the distribution of streambed permeabilities controls the dynamics of stream-aquifer water fluxes within the alluvial plain, and therefore the near-river piezometric head distribution. In order to better understand water circulation and pollutant transport in watersheds, these multi-dimensional processes have to be integrated into a modelling platform. To this end, the concept of nested interfaces in continental hydrosystem modelling (where regional fluxes, simulated by large-scale models, are imposed at local stream-aquifer interfaces) was presented in Flipo et al. (2014). This concept has been implemented in the EauDyssée modelling platform for a large alluvial plain model (900 km2), part of an 11000 km2 multi-layer aquifer system located in the Seine basin (France). The hydrosystem modelling platform is composed of four spatially distributed modules (Surface, Sub-surface, River and Groundwater), corresponding to four components of the terrestrial water cycle. Considering the large number of parameters to be inferred simultaneously, the calibration of coupled models is highly computationally demanding and therefore hardly applicable to a real case study of 10000 km2. In order to improve the efficiency of the calibration, a stepwise calibration procedure is proposed. The stepwise methodology involves determining optimal parameters for all components of the coupled model, so as to provide near-optimal prior information for the global calibration. It starts with the calibration of the surface component parameters, which are optimised based on the comparison between simulated and observed discharges (or filtered discharges) at various locations. Once the surface parameters have been determined, the groundwater component is calibrated. This calibration is performed under a steady-state hypothesis (to limit the computation time), using recharge rates given by the surface component calibration and imposed-flux boundary conditions given by the regional model. The calibration uses pilot points, where the prior variogram is calculated from observed transmissivity values. This procedure uses PEST (http://www.pesthomepage.org) as the inverse modelling tool and EauDyssée as the direct model. During the stepwise calibration, each module, although the modules actually depend on each other, is run and calibrated independently; the contributions between the modules therefore have to be determined. For the surface module, the groundwater and runoff contributions have been determined by hydrograph separation. Among the automated base-flow separation methods, the one-parameter Chapman filter (Chapman, 1999) has been chosen. This filter decomposes the actual base-flow into the previous base-flow and the discharge gradient, weighted by functions of the recession coefficient. For the groundwater module, the recharge has been determined from the surface and sub-surface modules. References: Flipo, N., A. Mouhri, B. Labarthe, and S. Biancamaria (2014). Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces. Hydrol. Earth Syst. Sci. Discuss. 11, 451-500. Chapman, T.G. (1999). A comparison of algorithms for stream flow recession and base-flow separation. Hydrological Processes 13, 701-714.
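    The one-parameter base-flow filter mentioned above has a compact recursive form. The sketch below follows the one-parameter formulation of Chapman and Maxwell, one of the algorithms compared in Chapman (1999); the recession coefficient k and the discharge series are illustrative assumptions.

```python
import numpy as np

def chapman_baseflow(q, k=0.95):
    """One-parameter recursive base-flow filter (after Chapman & Maxwell).

    b[i] = k/(2-k) * b[i-1] + (1-k)/(2-k) * q[i], constrained to b <= q,
    where k is the recession coefficient. A sketch, not the authors' code.
    """
    q = np.asarray(q, dtype=float)
    b = np.empty_like(q)
    b[0] = q[0]
    a1, a2 = k / (2.0 - k), (1.0 - k) / (2.0 - k)
    for i in range(1, q.size):
        b[i] = min(a1 * b[i - 1] + a2 * q[i], q[i])
    return b

# Hypothetical hourly discharge (m3/s); quick flow is the residual
q = np.array([2.0, 2.1, 8.0, 15.0, 9.0, 5.0, 3.5, 2.8, 2.4])
quick_flow = q - chapman_baseflow(q)
```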

  4. Coupled charge migration and fluid mixing in reactive fronts

    NASA Astrophysics Data System (ADS)

    Ghosh, Uddipta; Bandopadhyay, Aditya; Jougnot, Damien; Le Borgne, Tanguy; Meheust, Yves

    2017-04-01

    Quantifying fluid mixing in subsurface environments and its consequences for biogeochemical reactions is of paramount importance owing to its role in processes such as contaminant migration, aquifer remediation, CO2 sequestration and clogging, to name a few (Dentz et al., 2011). The presence of strong velocity gradients in porous media is expected to lead to enhanced diffusive mixing and augmented reaction rates (Le Borgne et al., 2014). Accurate in situ imaging of subsurface reactive solute transport and mixing remains to date a challenging proposition: the opacity of the medium prevents optical imaging, and field methods based on tracer tests do not provide spatial information. Recently developed geophysical methods based on the temporal monitoring of electrical conductivity and polarization have shown promise for mapping and monitoring biogeochemical reactions in the subsurface, although it remains challenging to decipher the multiple sources of electrical signals (e.g. Knight et al., 2010). In this work, we explore the coupling between fluid mixing, reaction and charge migration in porous media to evaluate the potential of mapping reaction rates from electrical measurements. To this end, we develop a new theoretical framework based on a lamellar mixing model (Le Borgne et al., 2013) to quantify changes in electrical mobility induced by chemical reactions across mixing fronts. Electrical conductivity and induced polarization are strongly dependent on the concentrations of ionic species, which in turn depend on the local reaction rates. Hence, our results suggest that variations in real and complex electrical conductivity may be quantitatively related to the mixing and reaction dynamics. The presented theory thus provides a novel upscaling framework for quantifying the coupling between mixing, reaction and charge migration in heterogeneous porous media flows. References: Dentz et al., Mixing, spreading and reaction in heterogeneous media: A brief review. J. Contam. Hydrol. 120-121, 1 (2011). Le Borgne et al., Impact of fluid deformation on mixing-induced chemical reactions in heterogeneous flows. Geophys. Res. Lett. 41, 7898 (2014). Knight et al., Geophysics at the interface: Response of geophysical properties to solid-fluid, fluid-fluid, and solid-solid interfaces. Rev. Geophys. 48 (2010). Le Borgne et al., Stretching, coalescence and mixing in porous media. Phys. Rev. Lett. 110, 204501 (2013).

  5. Using TRMM and GPM precipitation radar for calibration of weather radars in the Philippines

    NASA Astrophysics Data System (ADS)

    Crisologo, Irene; Bookhagen, Bodo; Smith, Taylor; Heistermann, Maik

    2016-04-01

    Torrential and sustained rainfall from tropical cyclones, monsoons, and thunderstorms frequently impacts the Philippines. In order to predict, assess, and measure storm impacts, it is imperative to have a reliable and accurate monitoring system in place. In 2011, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) established a weather radar network of ten radar devices, eight of which are single-polarization S-band radars and two of which are dual-polarization C-band radars. Because of the low density of hydrometeorological monitoring networks in the Philippines, calibration of the weather radars becomes a challenging but important task. In this study, we explore the potential of scrutinizing the calibration of ground radars using observations from the Tropical Rainfall Measuring Mission (TRMM). For this purpose, we compare different TRMM level 1 and 2 orbital products from overpasses over the Philippines with the reflectivities observed by the Philippine ground radars. Differences in spatial resolution are addressed by computing adequate zonal statistics of the local radar bins located within the corresponding TRMM cell in space and time. The wradlib package (Heistermann et al., 2013; Heistermann et al., 2015) is used to process the data from the Subic S-band single-polarization weather radar. These data will be analyzed in conjunction with TRMM data for June to August 2012, three months of the wet season. This period includes the enhanced monsoon of 2012, locally called Habagat 2012, which brought sustained intense rainfall and massive floods to several parts of the country, including the heavily populated Metro Manila. References Heistermann, M., Jacobi, S., Pfaff, T. (2013): Technical Note: An open source library for processing weather radar data (wradlib). Hydrol. Earth Syst. Sci., 17, 863-871, doi:10.5194/hess-17-863-2013. Heistermann, M., S. Collis, M. J. Dixon, S. Giangrande, J. J. Helmus, B. Kelley, J. Koistinen, D. Michelson, M. Peura, T. Pfaff, D. B. Wolff (2015): The Emergence of Open Source Software for the Weather Radar Community. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-13-00240.1.
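    One way to implement the zonal-statistics comparison described above is to average ground-radar bins in linear reflectivity within each satellite footprint and difference the result in dB. This generic numpy sketch illustrates the idea with hypothetical inputs and deliberately avoids assuming any wradlib-specific calls; real matchups also need time and beam-geometry filtering.

```python
import numpy as np

def calibration_offset(ground_dbz, footprint_id, sat_dbz):
    """Estimate a ground-radar calibration offset against satellite overpasses.

    ground_dbz:   reflectivity (dBZ) of each ground-radar bin (hypothetical)
    footprint_id: index of the satellite footprint containing each bin
    sat_dbz:      satellite reflectivity (dBZ) per footprint
    Ground bins are averaged in linear units per footprint (a zonal mean),
    then compared in dB; the mean difference approximates a calibration bias.
    """
    z_lin = 10.0 ** (np.asarray(ground_dbz) / 10.0)
    nfp = len(sat_dbz)
    sums = np.bincount(footprint_id, weights=z_lin, minlength=nfp)
    counts = np.bincount(footprint_id, minlength=nfp)
    with np.errstate(divide="ignore", invalid="ignore"):
        zonal_dbz = 10.0 * np.log10(sums / counts)   # NaN where no bins fall
    return np.nanmean(zonal_dbz - np.asarray(sat_dbz))
```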

  6. Validation of catchment models for predicting land-use and climate change impacts. 1. Method

    NASA Astrophysics Data System (ADS)

    Ewen, J.; Parkin, G.

    1996-02-01

    Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new, generally applicable method is proposed. It involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality-assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).

  7. Morphodynamic modeling of erodible laminar channels.

    PubMed

    Devauchelle, Olivier; Josserand, Christophe; Lagrée, Pierre-Yves; Zaleski, Stéphane

    2007-11-01

    A two-dimensional model for the erosion generated by viscous free-surface flows, based on the shallow-water equations and the lubrication approximation, is presented. It has a family of self-similar solutions for straight erodible channels, with an aspect ratio that increases in time. It is also shown, through a simplified stability analysis, that a laminar river can generate various bar instabilities very similar to those observed in natural rivers. This theoretical similarity reflects the meandering and braiding tendencies of laminar rivers indicated by F. Métivier and P. Meunier [J. Hydrol. 271, 22 (2003)]. Finally, we propose a simple scenario for the transition between the patterns observed in experimental erodible channels.

  8. Memory of the Lake Rotorua catchment - time lag of the water in the catchment and delayed arrival of contaminants from past land use activities

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Daughney, Christopher J.; Stewart, Michael K.; McDonnell, Jeffrey J.

    2013-04-01

    The transit time distribution of streamflow is a fundamental descriptor of the flowpaths of water through a catchment and of the storage of water within it, controlling the catchment's response to land-use change, pollution, ecological degradation, and climate change. Significant time lags (catchment memory) in the responses of streams to these stressors and to their amelioration or restoration have been observed. The lag time can be quantified via the water transit time of the catchment discharge. Mean transit times can be on the order of years to decades (Stewart et al., 2012; Morgenstern et al., 2010). If the water passes through large groundwater reservoirs, the lag time is difficult to quantify and predict. A pulse-shaped tracer that moves with the water allows quantification of the mean transit time. Environmental tritium is the ideal tracer of the water cycle: tritium is part of the water molecule, is not affected by chemical reactions in the aquifer, and the bomb tritium from atmospheric nuclear weapons testing represents a pulse-shaped tracer input that allows very accurate measurement of the age distribution parameters of the water in the catchment discharge. Tritium time series data from all catchment discharges (streams and springs) into Lake Rotorua, New Zealand, allow accurate determination of the age distribution parameters. The Lake Rotorua catchment tritium data from streams and springs are unique, with high-quality tritium data available over more than four decades, encompassing the time when the bomb tritium moved through the groundwater system, and covering a very high number of streams and springs. Together with the well-defined tritium input into the Rotorua catchment, this data set allows the best understanding of the water dynamics through a large-scale catchment, including validation of complicated water mixing models. Mean transit times of the main streams into the lake range between 27 and 170 years. With such old water discharging into the lake, most of the water inflows do not yet fully reflect the nitrate loading from current land-use practices in their sub-catchments. These inflows are still 'diluted' by pristine old water, but over time the full nitrate load will arrive at the lake. With the age distribution parameters, it is possible to predict the increase in nitrate load to the lake via the groundwater discharges. All sub-catchments have different mean transit times, and these are not necessarily correlated with observable hydrogeologic properties such as hydraulic conductivity and catchment size. Without age tracer data, it is therefore difficult to predict mean transit times (lag times, memory) of water transfer through catchments. References: Stewart, M.K., Morgenstern, U., McDonnell, J.J., Pfister, L. (2012). The 'hidden streamflow' challenge in catchment hydrology: A call to action for streamwater transit time analysis. Hydrol. Process. 26, 2061-2066, Invited commentary. DOI: 10.1002/hyp.9262. Morgenstern, U., Stewart, M.K., and Stenger, R. (2010). Dating of streamwater using tritium in a post nuclear bomb pulse world: continuous variation of mean transit time with streamflow. Hydrol. Earth Syst. Sci., 14, 2289-2301.
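    Tritium-based transit time estimation of the kind outlined above rests on a lumped-parameter convolution of the tracer input with a transit time distribution and radioactive decay (half-life 12.32 years). The sketch below uses an exponential distribution and an invented bomb-pulse-like input; the actual study fits more elaborate mixing models to the Rotorua series.

```python
import numpy as np

TRITIUM_HALF_LIFE = 12.32  # years

def convolve_tritium(c_in, mean_transit_time, dt=1.0, max_age=200.0):
    """Lumped-parameter convolution of an annual tritium input series (TU).

    Exponential transit time distribution g(tau) = exp(-tau/T)/T combined
    with radioactive decay. A generic sketch, not the authors' mixing model.
    """
    lam = np.log(2.0) / TRITIUM_HALF_LIFE
    tau = np.arange(0.0, max_age, dt)
    g = np.exp(-tau / mean_transit_time) / mean_transit_time
    kernel = g * np.exp(-lam * tau) * dt
    return np.convolve(c_in, kernel)[: len(c_in)]  # aligned with the input years

# Hypothetical annual tritium input with a bomb-peak-like pulse around 1965
years = np.arange(1950, 2016)
c_in = 2.0 + 60.0 * np.exp(-0.5 * ((years - 1965) / 5.0) ** 2)
c_stream = convolve_tritium(c_in, mean_transit_time=40.0)
```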

  9. Does antecedent precipitation play a role for floods in (small) Swiss catchments?

    NASA Astrophysics Data System (ADS)

    Froidevaux, Paul; Schwanbeck, Jan; Weingartner, Rolf; Chevalier, Clément; Romppainen-Martius, Olivia

    2014-05-01

    River flooding is one of the most devastating natural hazards worldwide. In Switzerland, as in many other regions, the building of flood protection infrastructure is complicated by difficulties in assessing flood risk due to: - the large year-to-year variability in flood losses, with variations amounting to several orders of magnitude (see, e.g., Hilker et al., 2009); - the non-stationarity of flood risk at longer time scales: a pronounced decadal variability in flood risk has been observed by Schmocker-Fackel and Naef (2010), and Köplin et al. (2013) show that climate change will induce diverse and complex regional changes in flood risk. A better understanding of flood processes is therefore required in order to better predict changes in flood frequency. It has been hypothesized that flood frequency variations are linked to changes in the atmospheric circulation. Consequently, the whole chain of mechanisms, starting from atmospheric circulation patterns triggering severe precipitation and ending with extreme river discharge, must be considered. As a step in that direction, we characterize the precipitation events that triggered the observed annual maximum discharges at 120 discharge stations during the last 53 years in Switzerland. The precipitation dataset is a complex, temporally homogeneous interpolation of daily rain gauge data on a 1 by 1 km grid covering the Swiss territory (MeteoSwiss, 2011). We test the relationship between different catchment-averaged precipitation indices and flood occurrence, explicitly separating antecedent and event-associated precipitation. The preliminary results show that antecedent precipitation (weekly to monthly sums ending 3 days before the event) is not a significant flood predictor for most of the catchments. On the other hand, a very strong signal is found for the 1-3 day precipitation sums. A lesson for flood modeling in Swiss catchments is that a strong effort is required to represent the flood-associated weather events correctly over a 1-3 day period, particularly the precipitation amounts, whereas antecedent precipitation is not a necessary precondition for flood generation. In that sense, flood processes in Switzerland may contrast with extreme drought processes, for which longer-term precipitation statistics are expected to be important. Hilker, N., A. Badoux, and C. Hegg. 2009. The Swiss flood and landslide damage database 1972-2007. Natural Hazards and Earth System Sciences 9, 913-925. Schmocker-Fackel, P., and F. Naef. 2010. More frequent flooding? Changes in flood frequency in Switzerland since 1850. Journal of Hydrology 381, 1-8. Köplin, N., Schädler, B., Viviroli, D. and Weingartner, R. 2013. Seasonality and magnitude of floods in Switzerland under future climate change. Hydrol. Process., doi: 10.1002/hyp.9757. MeteoSwiss. 2011. Documentation of MeteoSwiss grid-data products. Daily precipitation (final analysis): RhiresD. Available online at http://www.meteosuisse.admin.ch/web/de/services/datenportal/gitterdaten/precip.html.
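    The separation of antecedent and event precipitation described above reduces to windowed sums over the catchment-averaged daily series. A small sketch follows; the 3-day gap and weekly-to-monthly windows follow the abstract, while the function name and everything else are illustrative assumptions.

```python
import numpy as np

def precip_indices(daily_p, event_idx, antecedent_days=30, gap_days=3):
    """Antecedent vs event precipitation for one annual-maximum flood.

    daily_p:   catchment-averaged daily precipitation series (mm)
    event_idx: index of the flood day in the series
    Returns the antecedent sum (window ending gap_days before the event)
    and the 1-, 2- and 3-day event sums ending on the flood day.
    """
    daily_p = np.asarray(daily_p, dtype=float)
    end = event_idx - gap_days
    antecedent = daily_p[max(0, end - antecedent_days):end].sum()
    event_sums = {d: daily_p[event_idx - d + 1: event_idx + 1].sum() for d in (1, 2, 3)}
    return antecedent, event_sums
```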

  10. Multiscaling Analysis applied to field Water Content through Distributed Fiber Optic Temperature sensing measurements

    NASA Astrophysics Data System (ADS)

    Benitez Buelga, Javier; Rodriguez-Sinobas, Leonor; Sanchez, Raul; Gil, Maria; Tarquis, Ana M.

    2014-05-01

    Soils can be seen as the result of spatial variation operating over several scales. This observation points to 'variability' as a key soil attribute that should be studied. Soil variability has often been considered to be composed of 'functional' (explained) variations plus random fluctuations or noise. However, the distinction between these two components is scale dependent, because increasing the scale of observation almost always reveals structure in the noise. Geostatistical methods and, more recently, multifractal/wavelet techniques coming from complexity science have been used to characterize the scaling and heterogeneity of soil properties. The multifractal formalism, first proposed by Mandelbrot (1982), is suitable for variables with self-similar distributions on a spatial domain (Kravchenko et al., 2002). Multifractal analysis can provide insight into the spatial variability of crop or soil parameters (Vereecken et al., 2007). This technique has been used to characterize the scaling property of a variable measured along a transect as a mass distribution of a statistical measure on a spatial domain of the studied field (Zeleke and Si, 2004). To do this, it divides the transect into a number of self-similar segments and identifies the differences among the subsets by using a wide range of statistical moments. Wavelets were developed in the 1980s for signal processing, and later introduced to soil science by Lark and Webster (1999). The wavelet transform decomposes a series, whether a time series (Whitcher, 1998; Percival and Walden, 2000) or, as in our case, a series of measurements made along a transect, into components (wavelet coefficients) which describe local variation in the series at different scale (or frequency) intervals, giving up only some resolution in space (Lark et al., 2003, 2004). Wavelet coefficients can be used to estimate scale-specific components of variation and correlation. This allows us to see which scales contribute most to signal variation, or at which scales signals are most correlated, giving insight into the dominant processes. An alternative to both of the above methods has been described recently: relative entropy and increments in relative entropy have been applied to soil images (Bird et al., 2006) and to soil transect data (Tarquis et al., 2008) to study scale effects localized in scale, providing information complementary to that about scale dependencies found across a range of scales. In this work we use them to describe the spatial scaling properties of a set of field water content data measured in a corn field, in a plot of 500 m2 and with a spatial resolution of 25 cm. The measurements are based on a fiber-optic cable (BruggSteal) buried in a zig-zag deployment at 30 cm depth. References Bird, N., M.C. Díaz, A. Saa, and A.M. Tarquis. 2006. A review of fractal and multifractal analysis of soil pore-scale images. J. Hydrol. 322:211-219. Kravchenko, A.N., R. Omonode, G.A. Bollero, and D.G. Bullock. 2002. Quantitative mapping of soil drainage classes using topographical data and soil electrical conductivity. Soil Sci. Soc. Am. J. 66:235-243. Lark, R.M., A.E. Milne, T.M. Addiscott, K.W.T. Goulding, C.P. Webster, and S. O'Flaherty. 2004. Scale- and location-dependent correlation of nitrous oxide emissions with soil properties: An analysis using wavelets. Eur. J. Soil Sci. 55:611-627. Lark, R.M., S.R. Kaffka, and D.L. Corwin. 2003. Multiresolution analysis of data on electrical conductivity of soil using wavelets. J. Hydrol. 272:276-290. Lark, R.M. and Webster, R. 1999. Analysis and elucidation of soil variation using wavelets. European J. of Soil Science, 50(2):185-206. Mandelbrot, B.B. 1982. The fractal geometry of nature. W.H. Freeman, New York. Percival, D.B., and A.T. Walden. 2000. Wavelet methods for time series analysis. Cambridge Univ. Press, Cambridge, UK. Tarquis, A.M., N.R. Bird, A.P. Whitmore, M.C. Cartagena, and Y. Pachepsky. 2008. Multiscale analysis of soil transect data. Vadose Zone J. 7:563-569. Vereecken, H., R. Kasteel, J. Vanderborght, and T. Harter. 2007. Upscaling hydraulic properties and soil water flow processes in heterogeneous soils: A review. Vadose Zone J. 6:1-28. Whitcher, B.J. 1998. Assessing nonstationary time series using wavelets. Ph.D. diss. Univ. of Washington, Seattle (Diss. Abstr. 9907961). Zeleke, T.B., and B.C. Si. 2004. Scaling properties of topographic indices and crop yield: Multifractal and joint multifractal approaches. Agron. J. 96:1082-1090.
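    As an illustration of the moment-based multifractal analysis referenced above, the following sketch computes the partition function of a transect measure over dyadic box sizes; log-log slopes of chi(q, epsilon) versus epsilon then give the mass exponents tau(q). This is a generic textbook formulation, not the authors' code, and the moment orders are arbitrary choices.

```python
import numpy as np

def partition_function(values, q_moments=(-2, -1, 0, 1, 2)):
    """Moment scaling of a transect measure for multifractal analysis.

    The transect is split into dyadic, self-similar segments; at each box
    size the q-th moments of the normalized mass are accumulated.
    Returns {epsilon: {q: chi(q, epsilon)}} with epsilon = box/transect length.
    """
    values = np.asarray(values, dtype=float)
    mass = values / values.sum()
    n = len(mass)
    results = {}
    size = 1
    while size <= n:
        boxes = mass[: n - n % size].reshape(-1, size).sum(axis=1)
        boxes = boxes[boxes > 0]          # avoid zero mass for negative q
        results[size / n] = {q: np.sum(boxes ** q) for q in q_moments}
        size *= 2
    return results
```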

  11. Improving a stage forecasting Muskingum model by relating local stage and remote discharge

    NASA Astrophysics Data System (ADS)

    Barbetta, S.; Moramarco, T.; Melone, F.; Brocca, L.

    2009-04-01

    Following the principle of parameter parsimony, simplified flood forecasting models based only on flood routing have been developed for flood-prone sites located downstream of a gauged station, at a distance allowing an appropriate forecasting lead time. In this context, the Muskingum model can be a useful tool. However, critical points in hydrological routing are the representation of the lateral inflow contribution and the knowledge of stage-discharge relationships. As regards the former, O'Donnell (O'Donnell, T., 1985. A direct three-parameter Muskingum procedure incorporating lateral inflow, Hydrol. Sci. J., 30[4/12], 479-496) proposed a three-parameter Muskingum procedure assuming the lateral inflows to be proportional to the contribution entering upstream. Using this approach, Franchini and Lamberti (Franchini, M. & Lamberti, P., 1994. A flood routing Muskingum type simulation and forecasting model based on level data alone, Water Resour. Res., 30[7], 2183-2196) presented a simple Muskingum-type model that provides forecast water levels at the downstream end by selecting a routing time interval, and hence a forecasting lead time, allowing the forecast stage to be expressed as a function of observed quantities only. Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., 2006. A real-time stage Muskingum forecasting model for a site without rating curve, Hydrol. Sci. J., 51[1], 66-82) enhanced the modeling scheme by incorporating a procedure for adapting the parameter linked to lateral inflows. This last model, called STAFOM (STAge FOrecasting Model), was also extended to a schematization of two connected river branches in order to significantly improve the forecasting lead time. The STAFOM model provided satisfactory results for most of the analysed flood events observed in different river reaches of the Upper-Middle Tiber River basin in Central Italy. However, the analysis highlighted that the stage forecast should be enhanced when sudden modifications occur in the upstream and downstream hydrographs recorded in real time. Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., 2005. Relating local stage and remote discharge with significant lateral inflow, J. Hydrol. Engng ASCE, 10[1], 58-69) showed that, for any flood condition at the ends of a river reach, a direct proportionality between the upstream and downstream mean velocities can be found. This insight was the basis for developing the Rating Curve Model (RCM), which also accommodates significant lateral inflow contributions, permitting the local hydraulic conditions to be related to those at a remote gauged section without using a flood routing procedure and without the need for a rating curve at the local site. Therefore, to improve the STAFOM performance, mainly for highly varying flood conditions, the model has been modified here by coupling it with a procedure based on the RCM approach. Several flood events that occurred along different equipped river reaches of the Upper Tiber River basin have been used as case studies. Results showed that the new model, named STAFOM-RCM, besides improving the stage forecast accuracy in terms of error on peak stage, Nash-Sutcliffe efficiency coefficient and coefficient of persistence, allowed a longer lead time to be used, thus avoiding the two-river-branch cascade schematization, where fluctuations in stage forecasting occur more frequently.
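    For context, the routing scheme underlying the models above has a compact recursive form. The sketch below implements standard Muskingum routing with lateral inflow taken as proportional to the upstream inflow, in the spirit of the O'Donnell assumption discussed above; K, X, dt and alpha are placeholder values, and this is not the STAFOM-RCM formulation itself.

```python
import numpy as np

def muskingum_route(inflow, K, X, dt, alpha=0.0):
    """Muskingum routing with lateral inflow proportional to upstream inflow.

    The routed input is (1 + alpha) * I; the output recursion uses the
    classical coefficients C0, C1, C2 derived from storage constant K (h),
    weighting factor X (-) and time step dt (h). A sketch only.
    """
    I = (1.0 + alpha) * np.asarray(inflow, dtype=float)
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom          # c0 + c1 + c2 = 1
    out = np.empty_like(I)
    out[0] = I[0]
    for t in range(1, len(I)):
        out[t] = c0 * I[t] + c1 * I[t - 1] + c2 * out[t - 1]
    return out

# Hypothetical upstream hydrograph (m3/s), hourly time step
q_down = muskingum_route([10, 25, 60, 45, 30, 20, 15], K=3.0, X=0.2, dt=1.0, alpha=0.1)
```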

  12. Tower-scale performance of four observation-based evapotranspiration algorithms within the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Miralles, Diego; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Qiaozhen; Fernandez, Diego

    2015-04-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have recently aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al., 2013). The WACMOS-ET project (http://wacmoset.estellus.eu) started in 2012 and constitutes an ESA contribution to the GEWEX initiative LandFlux. It focuses on advancing the development of ET estimates at global, regional and tower scales. WACMOS-ET aims at developing a Reference Input Data Set exploiting European Earth Observation assets and at deriving ET estimates produced by a set of four ET algorithms covering the period 2005-2007. The algorithms used are SEBS (Su et al., 2002), the Penman-Monteith algorithm from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008) and GLEAM (Miralles et al., 2011). The algorithms are run with Fluxnet tower observations, reanalysis data (ERA-Interim), and satellite forcings; they are cross-compared and validated against in-situ data. In this presentation, the performance of the different ET algorithms with respect to different temporal resolutions, hydrological regimes and land cover types (including grassland, cropland, shrubland, vegetation mosaic, savanna, woody savanna, needleleaf forest, deciduous forest and mixed forest) is evaluated at the tower scale in 24 pre-selected study regions on three continents (Europe, North America, and Australia). References: Fisher, J. B., Tu, K.P., and Baldocchi, D.D. Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. Remote Sens. Environ. 112, 901-919, 2008. Jiménez, C. et al. Global intercomparison of 12 land surface heat flux estimates. J. Geophys. Res. 116, D02102, 2011. Miralles, D.G. et al. Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci. 15, 453-469, 2011. Mu, Q., Zhao, M. & Running, S.W. Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sens. Environ. 115, 1781-1800, 2011. Mueller, B. et al. Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis. Hydrol. Earth Syst. Sci. 17, 3707-3720, 2013. Su, Z. The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes. Hydrol. Earth Syst. Sci. 6, 85-99, 2002.

  13. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Qiaozhen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of future climatic conditions and water availability. However, the requirement for global observational data of ET can be satisfied neither by our sparse global in-situ networks nor by the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields indirectly from satellite data, based on the combination of ET drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g. machine learning). However, and despite the efforts of initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise for a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal scales for the 2005-2007 reference period will be disclosed. The skill of these algorithms in closing the water balance over the continents will be assessed by comparison to runoff data. The consistency in forcing data will allow us to (a) evaluate the skill of these five algorithms in producing ET over particular ecosystems, (b) facilitate the attribution of the observed differences to either the algorithms or the driving data, and (c) set up a solid scientific basis for the development of global long-term benchmark ET products. Project progress can be followed on our website http://wacmoset.estellus.eu. REFERENCES Fisher, J. B., Tu, K.P., and Baldocchi, D.D. Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. Remote Sens. Environ. 112, 901-919, 2008. Jiménez, C. et al. Global intercomparison of 12 land surface heat flux estimates. J. Geophys. Res. 116, D02102, 2011. Jung, M. et al. Recent decline in the global land evapotranspiration trend due to limited moisture supply. Nature 467, 951-954, 2010. Miralles, D.G. et al. Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci. 15, 453-469, 2011. Mu, Q., Zhao, M. & Running, S.W. Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sens. Environ. 115, 1781-1800, 2011. Mueller, B. et al. Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis. Hydrol. Earth Syst. Sci. 17, 3707-3720, 2013. Su, Z.
The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes. Hydrol. Earth Syst. Sci. 6, 85-99, 2002.

  14. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly, spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to depend on the previous time step only (Markov property). Synthetic and real-world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of temporal correlation structure, Hydrol. Earth Syst. Sci. Discuss., 9, 3087-3127, doi:10.5194/hessd-9-3087-2012, 2012b.
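    The EnKF analysis step used in studies like this one has a standard stochastic (perturbed-observation) form. The sketch below shows a generic implementation with a linear observation operator that simply picks the gauged cells from the state vector; it is a textbook formulation, not the paper's HBV-96-specific setup, and all shapes and values are assumptions.

```python
import numpy as np

def enkf_update(X, H, y, obs_std, rng=np.random.default_rng(0)):
    """Stochastic Ensemble Kalman filter analysis step.

    X: (n_state, n_members) forecast ensemble of distributed model states
    H: (n_obs, n_state) linear observation operator (e.g. selects gauged cells)
    y: (n_obs,) observed discharges
    Returns the updated (analysis) ensemble.
    """
    n = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)                # ensemble anomalies
    S = H @ Xp                                            # anomalies in observation space
    R = np.eye(len(y)) * obs_std**2                       # observation error covariance
    # Kalman gain K = P H^T (H P H^T + R)^-1 with P = Xp Xp^T / (n - 1)
    K = Xp @ S.T @ np.linalg.inv(S @ S.T + (n - 1) * R)
    Y = y[:, None] + rng.normal(0.0, obs_std, size=(len(y), n))   # perturbed observations
    return X + K @ (Y - H @ X)
```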

  15. Recharge and Topographical Controls on Groundwater Circulation in Shallow Crystalline Rock Aquifers revealed by CFC-based Age Data

    NASA Astrophysics Data System (ADS)

    Kolbe, T.; Abbott, B. W.; Marçais, J.; Thomas, Z.; Aquilina, L.; Labasque, T.; Pinay, G.; De Dreuzy, J. R.

    2016-12-01

    Groundwater transit time and flow path are key factors controlling nitrogen retention and removal capacity at the catchment scale (Abbott et al., 2016), but the relative importance of hydrogeological and topographical factors in determining these parameters remains uncertain (Kolbe et al., 2016). To address this unknown, we used numerical modelling techniques calibrated with CFC groundwater age data to quantify transit time and flow path in an unconfined aquifer in Brittany, France. We assessed the relative importance of geological parameters (aquifer depth, porosity, arrangement of geological layers, and permeability profile), hydrology (recharge rate), and topography in determining characteristic flow distances (Leray et al., 2016). We found that groundwater flow was highly local (mean travel distance of 350 m) but also relatively old (mean CFC age of 40 years). Sensitivity analysis revealed that groundwater travel distances were not sensitive to the geological parameters within the constraints of the CFC age data. However, circulation was sensitive to topography in lowland areas, where the groundwater table was close to the land surface, and to recharge rate in upland areas, where water input modulated the free surface of the aquifer. We quantified these differences with a local groundwater ratio (rGW-LOCAL), defined as the mean groundwater travel distance divided by the equivalent surface distance the water would have traveled along the land surface. Lowland rGW-LOCAL was near 1, indicating primarily topographic controls. Upland rGW-LOCAL was 1.6, meaning the groundwater recharge area was substantially larger than the topographically defined catchment. The ratio was applied to other catchments in Brittany to test its relevance for comparing controls on groundwater circulation within and among catchments. REFERENCES Abbott et al., 2016, Using multi-tracer inference to move beyond single-catchment ecohydrology. Earth-Science Reviews. Kolbe et al., 2016, Coupling 3D groundwater modeling with CFC-based age dating to classify local groundwater circulation in an unconfined crystalline aquifer. J. Hydrol. Leray et al., 2016, Residence time distributions for hydrologic systems: Mechanistic foundations and steady-state analytical solutions. J. Hydrol.
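    The rGW-LOCAL diagnostic defined above is a simple ratio; a one-line sketch follows. Only the ~1 and 1.6 ratios come from the abstract, and the distances used in the example are hypothetical.

```python
def r_gw_local(mean_gw_travel_dist, mean_surface_dist):
    """Local groundwater ratio: mean groundwater travel distance divided by
    the equivalent surface travel distance. ~1 suggests topographic control;
    >1 suggests a recharge area larger than the surface catchment."""
    return mean_gw_travel_dist / mean_surface_dist

# Hypothetical distances (m) reproducing the lowland (~1) and upland (1.6) cases
print(r_gw_local(350.0, 350.0), r_gw_local(560.0, 350.0))
```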

  16. TDR water content inverse profiling in layered soils during infiltration and evaporation

    NASA Astrophysics Data System (ADS)

    Greco, R.; Guida, A.

    2009-04-01

    During the last three decades, time domain reflectometry (TDR) has become one of the most commonly used tools for soil water content measurements, both in the laboratory and in the field. Indeed, TDR provides easy and cheap water content estimates with relatively little disturbance to the investigated soil. TDR measurements of soil water content are based on the strong correlation between the relative dielectric permittivity of wet soil and its volumetric water content. Several expressions of the relationship between relative dielectric permittivity and volumetric water content have been proposed, either empirically stated (Topp et al., 1980) or based on semi-analytical approaches to dielectric mixing models (Roth et al., 1990; Whalley, 1993). So far, TDR field applications have suffered from the limitation that the technique estimates only the mean water content in the volume investigated by the probe. Where knowledge of non-homogeneous vertical water content profiles was needed, it was necessary to install either several vertical probes of different lengths or several horizontal probes placed in the soil at different depths, in both cases strongly increasing soil disturbance as well as the complexity of the measurements. Several studies have recently been dedicated to the development of inversion methods aimed at extracting more information from TDR waveforms, in order to estimate non-homogeneous moisture profiles along the axis of the metallic probe used for TDR measurements. A common feature of all these methods is that the electromagnetic transient through the wet soil along the probe is mathematically modelled, assuming that the unknown soil water content distribution is the one giving the best agreement between simulated and measured waveforms. In some cases the soil is modelled as a series of thin layers with different dielectric properties, and the waveform is obtained as the result of the superposition of multiple reflections arising from impedance discontinuities between the layers (Nguyen et al., 1997; Todoroff et al., 1998; Heimovaara, 2001; Moret et al., 2006). Other methods consider the dielectric properties of the soil as smoothly variable along the probe axis (Greco, 1999; Oswald et al., 2003; Greco, 2006). The aim of this study is to test the applicability to layered soils of the inverse method for the estimation of water content profiles along vertical TDR waveguides, originally applied in the laboratory to homogeneous soil samples with monotonic moisture distributions (Greco, 2006), and recently extended to field measurements with more general water content profiles (Greco and Guida, 2008). The influence of soil electrical conductivity, the uniqueness of the solution, the choice of parametrization, parameter identifiability, and the sensitivity of the method to variations of the chosen parameters are discussed. Finally, the results of the application of the inverse method to a series of infiltration and evaporation experiments carried out in a flume filled with three soil layers of different physical characteristics are presented. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University, by means of the PRIN 2006 program, within the research project entitled 'Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Greco, R., 1999. Measurement of water content profiles by single TDR experiments. In: Feyen, J., Wiyo, K. (Eds.), Modelling of Transport Processes in Soils. Wageningen Pers, Wageningen, the Netherlands, pp. 276-283. Greco, R., 2006. Soil water content inverse profiling from single TDR waveforms. J. Hydrol. 317, 325-339. Greco, R., Guida, A., 2008. Field measurements of topsoil moisture profiles by vertical TDR probes. J. Hydrol. 348, 442-451. Heimovaara, T.J., 2001. Frequency domain modelling of TDR waveforms in order to obtain frequency dependent dielectric properties of soil samples: a theoretical approach. In: TDR 2001 - Second International Symposium on Time Domain Reflectometry for Innovative Geotechnical Applications. Northwestern University, Evanston, Illinois, pp. 19-21. Moret, D., Arrue, J.L., Lopez, M.V., Gracia, R., 2006. A new TDR waveform analysis approach for soil moisture profiling using a single probe. J. Hydrol. 321, 163-172. Nguyen, B.L., Bruining, J., Slob, E.C., 1997. Saturation profiles from dielectric (frequency domain reflectometry) measurements in porous media. In: Proceedings of International Workshop on Characterization and Measurements of the Hydraulic Properties of Unsaturated Porous Media, Riverside, California, pp. 363-375. Oswald, B., Benedickter, H.R., Bächtold, W., Flühler, H., 2003. Spatially resolved water content profiles from inverted time domain reflectometry signals. Water Resour. Res. 39 (12), 1357. Todoroff, P., Lorion, R., Lan Sun Luk, J.-D., 1998. L'utilisation des algorithmes génétiques pour l'identification de profils hydriques de sol à partir de courbes réflectométriques. CR Acad. Sci. Paris, Sciences de la terre et des planètes 327, 607-610. Topp, G.C., Davis, J.L., Annan, A.P., 1980. Electromagnetic determination of soil water content: measurement in coaxial transmission lines. Water Resour. Res. 16, 574-582. Roth, K., Schulin, R., Flühler, H., Attinger, W., 1990. Calibration of time domain reflectometry for water content measurement using a composite dielectric approach. Water Resour. Res. 26, 2267-2273. Whalley, W.R., 1993. Considerations on the use of time domain reflectometry (TDR) for measuring soil water content. J. Soil Sci. 44, 1-9.
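
    The Topp et al. (1980) calibration cited above is the standard starting point for converting a TDR waveform into a mean water content. As a minimal sketch (probe length and travel time are illustrative values, not data from this study), the apparent permittivity follows from the two-way travel time of the pulse along the probe, and the empirical third-order polynomial then yields the volumetric water content:

    ```r
    # Apparent relative permittivity from the two-way travel time of the
    # TDR pulse, followed by the empirical Topp et al. (1980) polynomial.
    c0 <- 2.998e8                        # speed of light in vacuum [m/s]

    apparent_permittivity <- function(t_ns, L_m) {
      t_s <- t_ns * 1e-9                 # two-way travel time [s]
      ((c0 * t_s) / (2 * L_m))^2         # relative dielectric permittivity [-]
    }

    topp_theta <- function(eps) {
      # Empirical calibration valid for most mineral soils
      -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps^2 + 4.3e-6 * eps^3
    }

    eps   <- apparent_permittivity(t_ns = 2.4, L_m = 0.15)  # hypothetical probe
    theta <- topp_theta(eps)             # mean volumetric water content [m3/m3]
    ```

    Inverse profiling methods such as the one tested here go one step further: instead of a single mean value, they search for the water content distribution along the probe whose simulated waveform best matches the measured one.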

  17. A 1985-2015 data-driven global reconstruction of GRACE total water storage

    NASA Astrophysics Data System (ADS)

    Humphrey, Vincent; Gudmundsson, Lukas; Seneviratne, Sonia Isabelle

    2016-04-01

    After thirteen years of measurements, the Gravity Recovery and Climate Experiment (GRACE) mission has enabled an unprecedented view of total water storage (TWS) variability. However, the relatively short record length, irregular time steps and multiple data gaps since 2011 still represent important limitations to a wider use of this dataset within the hydrological and climatological community, especially for applications such as model evaluation or assimilation of GRACE into land surface models. To address this issue, we make use of the available GRACE record (2002-2015) to infer local statistical relationships between detrended monthly TWS anomalies and the main controlling atmospheric drivers (e.g. daily precipitation and temperature) at 1 degree resolution (Humphrey et al., in revision). Long-term and homogeneous monthly time series of detrended anomalies in total water storage are then reconstructed for the period 1985-2015. The quality of this reconstruction is evaluated in two ways. First, we perform a cross-validation experiment to assess the performance and robustness of the statistical model. Second, we compare with independent basin-scale estimates of TWS anomalies derived by means of a combined atmospheric and terrestrial water balance using atmospheric water vapor flux convergence and the change in atmospheric water vapor content (Mueller et al., 2011). The reconstructed time series are shown to provide robust data-driven estimates of global variations in water storage over large regions of the world. Example applications are provided for illustration, including an analysis of selected major drought events which occurred before the GRACE era. References Humphrey V, Gudmundsson L, Seneviratne SI (in revision) Assessing global water storage variability from GRACE: trends, seasonal cycle, sub-seasonal anomalies and extremes. Surv Geophys Mueller B, Hirschi M, Seneviratne SI (2011) New diagnostic estimates of variations in terrestrial water storage based on ERA-Interim data. Hydrol Process 25:996-1008
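
    The reconstruction principle described above can be illustrated with a toy grid-cell model: a statistical relation is fitted between TWS anomalies and atmospheric drivers over the GRACE era, then evaluated over the full forcing record. The two-predictor linear form and all variable names below are illustrative assumptions, not the authors' actual model:

    ```r
    # Toy grid-cell reconstruction: fit on the GRACE overlap period,
    # then predict over the full 1985-2015 forcing record.
    set.seed(42)
    n_full    <- 31 * 12                          # months in 1985-2015
    idx_grace <- (n_full - 14 * 12 + 1):n_full    # 2002-2015 overlap

    prcp <- rnorm(n_full, 60, 20)                 # monthly precipitation driver
    tair <- rnorm(n_full, 10, 5)                  # monthly temperature driver
    tws  <- 0.4 * prcp[idx_grace] - 1.2 * tair[idx_grace] +
            rnorm(length(idx_grace), 0, 5)        # synthetic GRACE anomalies

    fit <- lm(tws ~ prcp + tair,
              data = data.frame(tws  = tws,
                                prcp = prcp[idx_grace],
                                tair = tair[idx_grace]))

    tws_rec <- predict(fit, newdata = data.frame(prcp = prcp, tair = tair))
    ```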

  18. A latitudinal study of oxygen isotopes within horsehair

    NASA Astrophysics Data System (ADS)

    Thompson, E.; Bronk Ramsey, C.; McConnell, J. R.

    2016-12-01

    This study aims to explore the hypothesis that 'if oxygen isotope ratios deplete with decreasing temperature then a study of oxygen isotope ratios within horsehair from Oxfordshire to Iceland will show a latitudinal depletion gradient'. By looking at oxygen isotope values at different geographical positions, we can track the relationship with latitude and with different regional climate features. This will provide a firmer understanding of how to compare climate records from different locations. Additionally, a comparison of the horse breeds from this study with those analysed in previous studies will create an even better understanding of the intra-species variation in the δ18O values of horsehair. A total of 24 horses were sampled on the 7th of March at Thordale Stud in Shetland, the Icelandic Food and Veterinary Authority in Iceland, the Exmoor Pony Centre in Exmoor and the Pigeon House Equestrian Centre in Oxfordshire. By starting the sampling process from the most recent growth at the follicle, the sampling date becomes a chronological marker, temporally fixing the first sample within a sequential set of data points extending for one year or longer, depending on the length of each individual hair. The samples were analysed for oxygen isotope values using an IRMS coupled with a Sercon HTEA. Preliminary results show that a latitudinal gradient is evident on comparison between the locations, consistent with the findings of Darling and Talbot's study of fresh water isotopes in the British Isles (2003). These results support the hypothesis, showing that oxygen isotope ratios within horsehair from Oxfordshire to Iceland exhibit a latitudinal depletion gradient, consistent with a depletion of oxygen isotope ratios due to decreasing temperatures. Darling, W. and Talbot, J. (2003). The O and H stable isotope composition of freshwaters in the British Isles. 1. Rainfall. Hydrol. Earth Syst. Sci., 7(2), pp. 163-181.

  19. Using random forests to explore the effects of site attributes and soil properties on near-saturated and saturated hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Jorda, Helena; Koestel, John; Jarvis, Nicholas

    2014-05-01

    Knowledge of the near-saturated and saturated hydraulic conductivity of soil is fundamental for understanding important processes like groundwater contamination risks or runoff and soil erosion. Hydraulic conductivities are, however, difficult and time-consuming to determine by direct measurements, especially at the field scale or larger. So far, pedotransfer functions do not offer an especially reliable alternative, since published approaches exhibit poor prediction performance. In our study we aimed at building pedotransfer functions by growing random forests (a statistical learning approach) on 486 datasets from the meta-database of tension-disk infiltrometer measurements collected from the peer-reviewed literature and recently presented by Jarvis et al. (2013, Influence of soil, land use and climatic factors on the hydraulic conductivity of soil. Hydrol. Earth Syst. Sci. 17(12), 5185-5195). When some data from a specific source publication were allowed to enter the training set while others were used for validation, the results of a 10-fold cross-validation showed reasonable coefficients of determination of 0.53 for hydraulic conductivity at 10 cm tension, K10, and 0.41 for saturated conductivity, Ks. The estimated average annual temperature and precipitation at the site were the most important predictors for K10, while bulk density and estimated average annual temperature were most important for the prediction of Ks. The soil organic carbon content and the diameter of the disk infiltrometer were also important for the prediction of both K10 and Ks. However, coefficients of determination were around zero when all datasets of a specific source publication were excluded from the training set and used exclusively for validation. This may indicate experimenter bias, or that better predictors have to be found, or that a larger dataset has to be used to infer meaningful pedotransfer functions for saturated and near-saturated hydraulic conductivities. More research is in progress to further elucidate this question.
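
    A sketch of the approach with the randomForest package is shown below; the synthetic data frame merely stands in for the 486-record tension-disk infiltrometer meta-database, with predictor names mirroring those reported as important:

    ```r
    library(randomForest)

    set.seed(7)
    n <- 486
    d <- data.frame(
      temp_annual  = runif(n, -5, 25),     # estimated mean annual temperature
      prec_annual  = runif(n, 200, 2000),  # estimated mean annual precipitation
      bulk_density = runif(n, 0.9, 1.8),
      soc          = runif(n, 0.2, 6),     # soil organic carbon [%]
      disk_diam    = sample(c(10, 20, 25), n, replace = TRUE)
    )
    d$logK10 <- 0.02 * d$temp_annual + 5e-4 * d$prec_annual -
                0.8 * d$bulk_density + 0.1 * d$soc + rnorm(n, 0, 0.3)

    rf <- randomForest(logK10 ~ ., data = d, ntree = 500, importance = TRUE)
    print(rf)         # % variance explained, cf. the R2 of ~0.5 reported above
    importance(rf)    # permutation importance ranking of the predictors
    ```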

  20. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    The advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied at finer and finer grid resolutions over larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee a better understanding of the hydrological processes, and that it may not even be necessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output at different temporal scales as well. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected when aggregating the model output by spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer-resolution input. For this reason, the integration into this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
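
    The aggregation step at the heart of the RES analysis can be sketched as follows: several model runs driven by different soil realizations are block-averaged at increasingly coarse scales, and the between-run spread shrinks as the scale grows. The 64 x 64 grids are synthetic stand-ins for mHM output:

    ```r
    set.seed(3)
    base <- matrix(rnorm(64 * 64), 64, 64)          # shared large-scale signal
    runs <- lapply(1:5, function(i)                 # five "soil realizations"
      base + matrix(rnorm(64 * 64, 0, 0.5), 64, 64))

    block_mean <- function(m, f) {                  # block-average by factor f
      n <- nrow(m) / f
      out <- matrix(0, n, n)
      for (i in 1:n) for (j in 1:n)
        out[i, j] <- mean(m[((i - 1) * f + 1):(i * f),
                            ((j - 1) * f + 1):(j * f)])
      out
    }

    for (f in c(1, 4, 16, 64)) {
      agg    <- lapply(runs, block_mean, f = f)
      spread <- mean(apply(simplify2array(agg), c(1, 2), sd))
      cat(sprintf("aggregation factor %2d: between-run sd = %.3f\n", f, spread))
    }
    ```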

  1. Comparison of the performance and reliability of 18 lumped hydrological models driven by ECMWF rainfall ensemble forecasts: a case study on 29 French catchments

    NASA Astrophysics Data System (ADS)

    Velázquez, Juan Alberto; Anctil, François; Ramos, Maria-Helena; Perrin, Charles

    2010-05-01

    An ensemble forecasting system seeks to assess and communicate the uncertainty of hydrological predictions by proposing, at each time step, an ensemble of forecasts from which one can estimate the probability distribution of the predictand (the probabilistic forecast), in contrast with a single estimate of the flow, for which no distribution is obtainable (the deterministic forecast). In recent years, efforts towards the development of probabilistic hydrological prediction systems were made with the adoption of ensembles of numerical weather predictions (NWPs). The additional information provided by the different available Ensemble Prediction Systems (EPS) was evaluated in a hydrological context in various case studies (see the review by Cloke and Pappenberger, 2009). For example, the European ECMWF-EPS was explored in case studies by Roulin et al. (2005), Bartholmes et al. (2005), Jaun et al. (2008), and Renner et al. (2009). The Canadian EC-EPS was also evaluated by Velázquez et al. (2009). Most of these case studies investigate the ensemble predictions of a given hydrological model, set up over a limited number of catchments. Uncertainty from weather predictions is assessed through the use of meteorological ensembles. However, uncertainty from the tested hydrological model and the statistical robustness of the forecasting system when coping with different hydro-meteorological conditions are less frequently evaluated. The aim of this study is to evaluate and compare the performance and the reliability of 18 lumped hydrological models applied to a large number of catchments in an operational ensemble forecasting context. Some of these models were evaluated in a previous study (Perrin et al., 2001) for their ability to simulate streamflow. Results demonstrated that very simple models can achieve a level of performance almost as high (sometimes higher) as models with more parameters. In the present study, we focus on the ability of the hydrological models to provide reliable probabilistic forecasts of streamflow, based on ensemble weather predictions. The models were therefore adapted to run in a forecasting mode, i.e., to update initial conditions according to the last observed discharge at the time of the forecast, and to cope with ensemble weather scenarios. All models are lumped, i.e., the hydrological behavior is integrated over the spatial scale of the catchment, and run at daily time steps. The complexity of the tested models varies between 3 and 13 parameters. The models are tested on 29 French catchments. Daily streamflow time series extend over 17 months, from March 2005 to July 2006. Catchment areas range between 1470 km2 and 9390 km2, and represent a variety of hydrological and meteorological conditions. The 12 UTC 10-day ECMWF rainfall ensemble (51 members) was used, which led to daily streamflow forecasts for a 9-day lead time. In order to assess the performance and reliability of the hydrological ensemble predictions, we computed the Continuous Ranked Probability Score (CRPS) (Matheson and Winkler, 1976), as well as the reliability diagram (e.g. Wilks, 1995) and the rank histogram (Talagrand et al., 1999). Since the ECMWF deterministic forecasts are available as well, the performance of the hydrological forecasting systems was also evaluated by comparing the deterministic score (MAE) with the probabilistic score (CRPS). 
The results obtained for the 18 hydrological models and the 29 studied catchments are discussed in the perspective of improving the operational use of ensemble forecasting in hydrology. References Bartholmes, J. and Todini, E.: Coupling meteorological and hydrological models for flood forecasting, Hydrol. Earth Syst. Sci., 9, 333-346, 2005. Cloke, H. and Pappenberger, F.: Ensemble flood forecasting: a review, Journal of Hydrology, 375 (3-4), 613-626, 2009. Jaun, S., Ahrens, B., Walser, A., Ewen, T., and Schär, C.: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Nat. Hazards Earth Syst. Sci., 8, 281-291, 2008. Matheson, J. E. and Winkler, R. L.: Scoring rules for continuous probability distributions, Manage. Sci., 22, 1087-1096, 1976. Perrin, C., Michel, C. and Andréassian, V.: Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments, J. Hydrol., 242, 275-301, 2001. Renner, M., Werner, M. G. F., Rademacher, S., and Sprokkereef, E.: Verification of ensemble flow forecasts for the River Rhine, J. Hydrol., 376, 463-475, 2009. Roulin, E. and Vannitsem, S.: Skill of medium-range hydrological ensemble predictions, J. Hydrometeorol., 6, 729-744, 2005. Talagrand, O., Vautard, R., and Strauss, B.: Evaluation of probabilistic prediction systems, in: Proceedings, ECMWF Workshop on Predictability, Shinfield Park, Reading, Berkshire, ECMWF, 1-25, 1999. Velázquez, J.A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V., and Anctil, F.: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrol. Earth Syst. Sci., 13, 2221-2231, 2009. Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, Academic Press, San Diego, CA, 465 pp., 1995.
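
    The core probabilistic score used above has a compact sample estimator. A minimal sketch follows; the ensemble and observations are synthetic placeholders, not the study data:

    ```r
    # Sample estimator of the CRPS for one ensemble-observation pair:
    # mean |x_i - y| - 0.5 * mean |x_i - x_j|
    crps_sample <- function(ens, obs) {
      mean(abs(ens - obs)) - 0.5 * mean(abs(outer(ens, ens, "-")))
    }

    set.seed(11)
    n_days <- 100
    obs <- rlnorm(n_days, 3, 0.4)                          # observed discharge
    ens <- sapply(1:51, function(m) obs * rlnorm(n_days, 0, 0.3))  # 100 x 51

    crps  <- mean(sapply(seq_len(n_days),
                         function(t) crps_sample(ens[t, ], obs[t])))
    ranks <- sapply(seq_len(n_days), function(t) sum(ens[t, ] < obs[t]) + 1)
    hist(ranks, breaks = seq(0.5, 52.5, 1),
         main = "Rank histogram")          # approximately flat if reliable
    ```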

  2. A room temperature operating cryogenic cell for in vivo monitoring of dry snow metamorphism by X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Calonne, N.; Flin, F.; Lesaffre, B.; Dufour, A.; Roulle, J.; Puglièse, P.; Philip, A.; Lahoucine, F.; Rolland du Roscoat, S.; Geindreau, C.

    2013-12-01

    Three-dimensional (3D) images of snow offer the possibility of studying snow metamorphism at the grain scale by analysing the time evolution of its complex microstructure. Such images are also particularly useful for providing the physical effective properties of snow used in macroscopic models. In the last 15 years, several experiments have been developed in order to obtain 3D images of snow by X-ray microtomography. Up to now, two different approaches have been used: a static and an in vivo approach. The static method consists in imaging a snow sample whose structural evolution has been stopped by impregnation and/or very cold temperature conditions. The sample is placed in a cryogenic cell that can operate at the ambient temperature of the tomograph room (e.g. Brzoska et al., 1999, Coléou et al., 2001). The in vivo technique uses a non-impregnated sample which continues to undergo structural evolution and is placed in a cell that controls the temperature conditions at the boundaries of the sample. This kind of cell requires a cold environment, and the whole tomographic acquisition process takes place in a cold room (e.g. Schneebeli and Sokratov, 2004, Pinzer and Schneebeli, 2009). The second approach has the major advantage of providing the time evolution of the microstructure of the same snow sample, but requires a dedicated cold-room tomographic scanner, whereas the static method can be used with any tomographic scanner operating at ambient conditions. We developed a new in vivo cryogenic cell which benefits from the advantages of each of the above methods: it (1) makes it possible to follow the evolution of the same sample over time and (2) is usable with a wide range of tomographic scanners with large cabin sizes, which has many advantages in terms of speed, resolution, and availability of new technologies. The thermal insulation between the snow sample and the outside is ensured by a double-wall vacuum system with a thermal conductivity of about 0.0015 W m-1 K-1. An air pumping system is thus permanently active during the experiment. Two Peltier cells are used to regulate the temperature at the top and bottom of the snow sample, making it possible to impose the conditions of metamorphism (isothermal, temperature gradient). The snow sample consists of a cylinder of 1 cm radius and 1 cm height. During its positioning in the cryogenic cell, it is protected from the room conditions by a sealed and cold copper sample holder. The whole apparatus (cell, pumping system) is able to rotate 360° synchronously during the tomographic acquisition. After X-ray tomography and image processing, this cell provides a set of 3D images showing the time evolution of the microstructure of a snow sample during its metamorphism under well-defined imposed conditions. Preliminary results give promising outlooks for the study of snow and firn physical processes. Brzoska, J.-B. and 7 others. 1999. ESRF Newsletter, 32, 22-23. Coléou, C., B. Lesaffre, J.-B. Brzoska, W. Ludwig and E. Boller. 2001. Ann. Glaciol., 32, 75-81. Pinzer, B. and M. Schneebeli. 2009. Meas. Sci. Technol., 20, 095705. Schneebeli, M. and S. A. Sokratov. 2004. Hydrol. Process., 18, 3655-3665.

  3. [Cloning of new acylamidase gene from Rhodococcus erythropolis and its expression in Escherichia coli].

    PubMed

    Lavrov, K V; Ianenko, A S

    2013-10-01

    The gene for a new Rhodococcus erythropolis TA37 acylamidase, which possesses unique substrate specificity, has been cloned and expressed in E. coli. Substrates for this enzyme are not only simple amides, such as acetamide and propionamide, but also N-substituted amides, such as 4'-nitroacetanilide. The 1431-bp gene was expressed in E. coli BL21 (DE3) cells on the pET16b plasmid under the control of the promoter of the φ10 gene from the T7 phage. The molecular mass of the recombinant acylamidase in E. coli was 55 kDa, which corresponds to that of the native acylamidase from Rhodococcus erythropolis TA37. The recombinant acylamidase was able to hydrolyze N-substituted amides. A search of a nucleotide database and multiple alignment revealed that the acylamidase belongs to the amidase protein family PF01425, but its nucleotide and amino acid sequences differ significantly from those of the described amidases.

  4. Geomorphic Flood Area (GFA): a DEM-based tool for flood susceptibility mapping at large scales

    NASA Astrophysics Data System (ADS)

    Manfreda, S.; Samela, C.; Albano, R.; Sole, A.

    2017-12-01

    Flood hazard and risk mapping over large areas is a critical issue. Recently, many researchers have attempted global-scale mapping, encountering several difficulties, above all the lack of data and the implementation costs. In data-scarce environments, a preliminary and cost-effective floodplain delineation can be performed using geomorphic methods (e.g., Manfreda et al., 2014). We carried out several years of research on this topic, proposing a morphologic descriptor named the Geomorphic Flood Index (GFI) (Samela et al., 2017) and developing a Digital Elevation Model (DEM)-based procedure able to identify flood-susceptible areas. The procedure exhibited high accuracy in several test sites in Europe, the United States and Africa (Manfreda et al., 2015; Samela et al., 2016, 2017) and has recently been implemented in a QGIS plugin named the Geomorphic Flood Area (GFA) tool. The tool automatically computes the GFI and turns it into a linear binary classifier capable of detecting flood-prone areas. To train this classifier, an inundation map derived using hydraulic models for a small portion of the basin is required (the minimum is 2% of the river basin's area). In this way, the GFA tool makes it possible to extend the classification of the flood-prone areas across the entire basin. We are also defining a simplified procedure for the estimation of the river depth, which may be helpful in large-scale analyses to approximately evaluate the expected flood damages in the surrounding areas. References Manfreda, S., Nardi, F., Samela, C., Grimaldi, S., Taramasso, A. C., Roth, G., & Sole, A. (2014). Investigation on the use of geomorphic approaches for the delineation of flood prone areas. J. Hydrol., 517, 863-876. Manfreda, S., Samela, C., Gioia, A., Consoli, G., Iacobellis, V., Giuzio, L., & Sole, A. (2016). Flood-prone areas assessment using linear binary classifiers based on flood maps obtained from 1D and 2D hydraulic models. Nat. Hazards, 79(2), 735-754. Samela, C., Manfreda, S., Paola, F. D., Giugni, M., Sole, A., & Fiorentino, M. (2016). DEM-based approaches for the delineation of flood-prone areas in an ungauged basin in Africa. J. Hydrol. Eng., 06015010. Samela, C., Troy, T. J., & Manfreda, S. (2017a). Geomorphic classifiers for flood-prone areas delineation for data-scarce environments. Adv. Water Resour., 102, 13-28.
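
    The training step described above amounts to choosing a threshold on the index. A hedged sketch follows (synthetic GFI values stand in for the DEM-derived index; the actual GFA tool handles this internally):

    ```r
    # Choose the GFI threshold that maximizes the true-skill statistic
    # against a hydraulically derived training map, then classify the basin.
    set.seed(5)
    gfi   <- c(rnorm(200, 1.0, 0.8), rnorm(800, -1.5, 1.0))
    truth <- c(rep(1, 200), rep(0, 800))     # training inundation map (2% area)

    taus <- seq(min(gfi), max(gfi), length.out = 200)
    tss  <- sapply(taus, function(tau) {
      pred <- as.integer(gfi >= tau)
      tpr  <- sum(pred == 1 & truth == 1) / sum(truth == 1)  # hit rate
      fpr  <- sum(pred == 1 & truth == 0) / sum(truth == 0)  # false alarms
      tpr - fpr
    })
    tau_star    <- taus[which.max(tss)]
    flood_prone <- gfi >= tau_star     # classification extended to the basin
    ```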

  5. Contemporary suspended sediment yield of a partly glaciated catchment, Riffler Bach (Tyrol, Austria)

    NASA Astrophysics Data System (ADS)

    Weber, Martin; Baewert, Henning; Morche, David

    2015-04-01

    Due to glacier retreat since the LIA (Little Ice Age), proglacial areas in high mountain landscapes are growing. These systems are characterized by high geomorphological activity, especially in the fluvial subsystem. Despite the long tradition of geomorphological research in the European Alps, there is still a lack of understanding of the interactions between hydrology, sediment sources, sediment sinks and suspended sediment transport. As emphasized by Orwin et al. (2010), those problems can be solved by gathering data at a higher frequency and/or at a higher spatial resolution or density - both leading to a large amount of data. In 2012 a gauging station was installed at the outlet of the partly glaciated catchment of the Riffler Bach (Kaunertal valley, Tyrol). During the ablation seasons of 2012 and 2013, water stage was logged automatically every 15 minutes. In both seasons discharge was measured at different water levels to calculate a stage-discharge relation. Additionally, water samples were taken by an automatic water sampler. Within 16 sampling cycles, with sampling frequencies ranging from 1 to 24 hours, 389 water samples were collected. The samples were filtered to calculate the suspended sediment concentration (SSC) of each sample. Furthermore, the climate station Weißsee provided meteorological data at a 15-minute interval. Due to the high variability of suspended sediment transport in proglacial rivers, it is impossible to compute a robust annual Q-SSC relation. Hence, two other approaches were used to calculate the suspended sediment load (SSL) and the suspended sediment yield (SSY): A) Q-SSC relations for every single sampling cycle (e.g. Geilhausen et al. 2013); B) Q-SSC relations based on a classification of the dominant runoff-generating processes (e.g. Orwin and Smart 2004). The first approach uses commonly operated analysis methods that are well understood. While the hydro-climatic approach is better suited to explaining discharge generation and to locating sediment sources, both approaches underline the fact that SSC does not always depend on discharge alone but also on sediment availability. The comparison of both approaches shows that, even in well-investigated areas, the results are strongly determined by the choice of the analysis method. References Geilhausen, M., Morche, D., Otto, J.-C. and Schrott, L. (2013): Sediment discharge from the proglacial zone of a retreating Alpine glacier. Z. Geomorphol. Supplementary Issue 57 (2), 29-53. DOI: 10.1127/0372-8854/2012/S-00122 Orwin, J., Lamoureux, S.F., Warburton, J. and Beylich, A. (2010): A framework for characterizing fluvial sediment fluxes from source to sink in cold environments. Geogr. Ann. 92 A (2): 155-176. Orwin, J.F. and Smart, C.C. (2004): Short-term spatial and temporal patterns of suspended sediment transfer in proglacial channels, Small River Glacier, Canada. Hydrol. Process. 18, 1521-1542.
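
    Approach A above rests on the classical power-law sediment rating. A minimal sketch with invented numbers (not the Riffler Bach data): fit SSC = a·Q^b per sampling cycle in log-log space, then integrate the load over the continuous 15-minute discharge record:

    ```r
    set.seed(9)
    q_smp   <- runif(24, 0.5, 6)                          # sampled Q [m3/s]
    ssc_smp <- 80 * q_smp^1.6 * exp(rnorm(24, 0, 0.3))    # SSC [g/m3 = mg/L]

    fit <- lm(log(ssc_smp) ~ log(q_smp))                  # log-log rating
    a <- exp(coef(fit)[1]); b <- coef(fit)[2]

    q_cont   <- runif(96 * 30, 0.5, 6)        # 15-min record over 30 days
    ssc_cont <- a * q_cont^b                  # predicted SSC [g/m3]
    ssl_t    <- sum(ssc_cont * q_cont * 900) / 1e6   # 900-s steps, g -> t
    ```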

  6. On the assimilation of SWOT type data into 2D shallow-water models

    NASA Astrophysics Data System (ADS)

    Couderc, Frédéric; Dartus, Denis; Garambois, Pierre-André; Madec, Ronan; Monnier, Jérôme; Vila, Jean-Paul

    2013-04-01

    In river hydraulics, the assimilation of water level measurements at gauging stations is well controlled, while the assimilation of images is still challenging. In the present talk, we address both the richness of satellite-mapped information for constraining a 2D shallow-water model and the related difficulties. 2D shallow-water models may be necessary for small-scale modelling, in particular for low-water and floodplain flows. Since in both cases the dynamics of the wet-dry front is essential, one has to elaborate robust and accurate solvers. In this contribution we introduce a robust, stable, second-order finite volume scheme [CoMaMoViDaLa]. Comparisons of realistic test cases with more classical solvers highlight the importance of accurate floodplain modelling. A preliminary inverse study is presented for a floodplain flow case, [LaMo] [HoLaMoPu]. As a first step, a zeroth-order data processing model improves the observation operator and produces more reliable water levels derived from rough measurements [PuRa]. Then, both model and flow behaviours can be better understood thanks to variational sensitivities based on a gradient computation and adjoint equations. This can reveal several difficulties that a model designer has to tackle. Next, a 4D-Var data assimilation algorithm used with spatialized data leads to improved model calibration and can potentially identify river discharges. All the algorithms are implemented in the DassFlow software (Fortran, MPI, adjoint) [Da]. All these results and experiments (accurate wet-dry front dynamics, sensitivity analysis, identification of discharges and calibration of the model) are currently being performed with a view to using data from the future SWOT mission. [CoMaMoViDaLa] F. Couderc, R. Madec, J. Monnier, J.-P. Vila, D. Dartus, K. Larnier. "Sensitivity analysis and variational data assimilation for geophysical shallow water flows". Submitted. [Da] DassFlow - Data Assimilation for Free Surface Flows. Computational software http://www-gmm.insa-toulouse.fr/~monnier/DassFlow/ [HoLaMoPu] R. Hostache, X. Lai, J. Monnier, C. Puech. "Assimilation of spatially distributed water levels into a shallow-water flood model. Part II: using a remote sensing image of Mosel river". J. Hydrology (2010). [LaMo] X. Lai, J. Monnier. "Assimilation of spatially distributed water levels into a shallow-water flood model. Part I: mathematical method and test case". J. Hydrology (2009). [PuRa] C. Puech, D. Raclot. "Using geographic information systems and aerial photographs to determine water levels during floods". Hydrol. Process., 16, 1593-1602 (2002). [RoDa] H. Roux, D. Dartus. "Use of Parameter Optimization to Estimate a Flood Wave: Potential Applications to Remote Sensing of Rivers". J. Hydrology (2006).
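
    The variational calibration idea can be caricatured in a few lines: a quadratic misfit between simulated and observed water levels is minimized with respect to a model parameter. The linear-reservoir "model" below is only a stand-in for the 2D shallow-water solver, and DassFlow computes exact gradients with an adjoint code rather than the derivative-free search used here:

    ```r
    simulate_levels <- function(k, inflow) {
      h <- numeric(length(inflow)); s <- 0
      for (t in seq_along(inflow)) {
        s    <- s + inflow[t] - k * s     # linear reservoir storage update
        h[t] <- s^0.6                     # hypothetical stage-storage relation
      }
      h
    }

    set.seed(2)
    inflow <- pmax(0, sin(1:200 / 15)) * 10 + 1
    h_obs  <- simulate_levels(0.3, inflow) + rnorm(200, 0, 0.05)

    cost <- function(k) sum((simulate_levels(k, inflow) - h_obs)^2)
    opt  <- optimize(cost, interval = c(0.01, 0.99))   # recovers k near 0.3
    ```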

  7. Global, continental and regional water balance estimates from HYPE catchment modelling

    NASA Astrophysics Data System (ADS)

    Arheimer, Berit; Andersson, Jafet; Crochemore, Louise; Donnelly, Chantal; Gustafsson, David; Hasan, Abdoulghani; Isberg, Kristina; Pechlivanidis, Ilias; Pimentel, Rafael; Pineda, Luis

    2017-04-01

    In the past, catchment modelling mainly focused on simulating the lumped hydrological cycle at local to regional domains with high precision at a specific point of a river. Today, the level of maturity in hydrological process descriptions, input data and methods for parameter constraints makes it possible to apply these models also to multi-basins over large domains, still using the catchment modeller's approach with high demands on agreement with observed data. HYPE is a process-oriented, semi-distributed and open-source model concept that has been developed and used operationally in Sweden for a decade. Its finest calculation units are hydrological response units (HRUs) within a catchment, which are assumed to give the same rainfall-runoff response. HRUs are normally made up of similar land cover and management, combined with soil type or elevation. Water divides are retrieved from topography and calculations are integrated for catchments, which can be of different spatial resolution and are coupled along the river network. In each catchment, HYPE calculates the water balance of a given time step separately for various hydrological storages, such as glaciers, active soil, groundwater, river channels, wetlands, floodplains, and lakes. The model is calibrated in a step-wise manner (following the water pathways) against various additional data sources, including in-situ observations, Earth Observation products, soft information and expert judgements (Arheimer et al., 2012; Donnelly et al., 2016; Pechlivanidis and Arheimer, 2015). Both the HYPE code and the model set-ups (i.e. input data and parameter values) are frequently released in new versions as they are continuously improved and updated. This presentation will show the results of aggregated water-balance components over large domains, such as the Arctic basin, the European continent, the Indian subcontinent and the Niger River basin. These can easily be compared to results from other kinds of large-scale modelling approaches. The presentation will also show model performance against observed data from river gauges and other data sources at local and regional scales. Finally, the results will be compared to a first model run of a world-wide HYPE covering all earth surfaces except Antarctica. The World-Wide HYPE has a calculation and evaluation resolution of on average <1000 km2. References: Arheimer, B. et al. 2012. Water and nutrient simulations using the HYPE …. Hydrology Research 43(4):315-329. Donnelly, C. et al., 2016. Using flow signatures ….. Hydr. Sciences Journal 61(2):255-273, doi: 10.1080/02626667.2015.1027710 Pechlivanidis, I. G. and Arheimer, B. 2015. Large-scale hydrological modelling …, Hydrol. Earth Syst. Sci., 19, 4559-4579, doi:10.5194/hess-19-4559-2015

  8. Looking for Similarities Between Lowland (Flash) Floods

    NASA Astrophysics Data System (ADS)

    Brauer, C.; Teuling, R.; Torfs, P.; Hobbelt, L.; Jansen, F.; Melsen, L.; Uijlenhoet, R.

    2012-12-01

    On 26 August 2010 the eastern part of The Netherlands and the bordering part of Germany were struck by a series of rainfall events. Over an area of 740 km2, more than 120 mm of rainfall were observed in 24 h. We investigated the unprecedented flash flood triggered by this exceptionally heavy rainfall event (return period > 1000 years) in the 6.5 km2 Hupsel Brook catchment, which has been the experimental watershed employed by Wageningen University since the 1960s. This study improved our understanding of the dynamics of such lowland flash floods (Brauer et al., 2011). These observations, however, only show how our experimental catchment behaved, and the results cannot be extrapolated directly to different floods in other (neighboring) lowland catchments. Therefore, it is necessary to use the information collected in one well-monitored catchment in combination with data from other, less well monitored catchments to find common signatures which could describe the runoff response during a lowland flood as a function of catchment characteristics. Because of the large spatial extent of the rainfall event in August 2010, many brooks and rivers in the Netherlands and Germany flooded. With data from several catchments we investigated the influence of rainfall and catchment characteristics (such as slope, size and land use) on the reaction of discharge to rainfall. We also investigated the runoff response in these catchments during previous floods by analyzing the relation between storage and discharge and the recession curve. In addition to the flood in August 2010, two other floods have occurred in The Netherlands in recent years. The three floods occurred in different parts of the country, after different types of rainfall events and with different initial conditions. We selected several catchments for each flood to compare their responses and to find out whether these cases are fundamentally different or whether they were produced by the same underlying processes and can be treated in a similar manner. Brauer, C. C., Teuling, A. J., Overeem, A., van der Velde, Y., Hazenberg, P., Warmerdam, P. M. M. and Uijlenhoet, R.: Anatomy of extraordinary rainfall and flash flood in a Dutch lowland catchment, Hydrol. Earth Syst. Sci., 15, 1991-2005, 2011.
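
    The recession analysis mentioned above is commonly done by fitting a linear-reservoir recession Q(t) = Q0·exp(-t/k) to the falling limb, so that the recession constant k can be compared between catchments. A sketch with a synthetic hydrograph:

    ```r
    set.seed(4)
    t_d <- 0:20                                  # days since the flood peak
    q   <- 12 * exp(-t_d / 6) * exp(rnorm(21, 0, 0.05))  # falling limb [m3/s]

    fit    <- lm(log(q) ~ t_d)                   # linear in log space
    k_days <- -1 / coef(fit)[2]                  # recession constant [days]
    q0     <- exp(coef(fit)[1])                  # discharge at the peak
    ```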

  9. Investigation of Water Dynamics and the Effect of Evapotranspiration on Grain Yield of Rainfed Wheat and Barley under a Mediterranean Environment: A Modelling Approach

    PubMed Central

    Zhang, Kefeng; Bosch-Serra, Angela D.; Boixadera, Jaume; Thompson, Andrew J.

    2015-01-01

    Agro-hydrological models have increasingly become useful and powerful tools for optimizing water and fertilizer application, and for studying the environmental consequences. Accurate prediction of water dynamics in such models is essential for models to produce reasonable results. In this study, detailed simulations were performed for the water dynamics of rainfed winter wheat and barley grown under a Mediterranean climate over a 10-year period. The model employed (Yang et al., 2009. J. Hydrol., 370, 177-190) uses easily available agronomic data, and takes into consideration all key soil and plant processes controlling water dynamics in the soil-crop system, including the dynamics of root growth. The water requirement for crop growth was calculated according to FAO56, and the soil hydraulic properties were estimated using pedo-transfer functions (PTFs) based on soil physical properties and soil organic matter content. Results show that the simulated values of soil water content at depths of 15, 45 and 75 cm agreed well with the measurements, with a root mean squared error of 0.027 cm3 cm-3 and a model agreement index of 0.875. The simulated seasonal evapotranspiration (ET) ranged from 208 to 388 mm, and grain yield was found to correlate linearly with the simulated seasonal ET within the studied ET range. The simulated rates of grain yield increase were 17.3 and 23.7 kg ha-1 for every mm of water evapotranspired for wheat and barley, respectively. The good agreement of soil water content between measurement and simulation, and the simulated relationships between grain yield and seasonal ET supported by data in the literature, indicate that the model performed well in modelling water dynamics for the studied soil-crop system, and therefore has the potential to be applied reliably and widely in precision agriculture. Finally, a two-stage approach using inverse modelling techniques to further improve model performance is discussed. PMID:26098946
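
    Since the abstract reports only slopes, the yield-ET relation can be restated as yield increments per unit of ET; over the simulated ET range this amounts to:

    ```r
    # Yield difference across the simulated seasonal ET range (208-388 mm),
    # using the reported slopes; intercepts are not reported, so these are
    # increments attributable to ET, not absolute yields.
    delta_et <- 388 - 208                            # [mm]
    delta_et * c(wheat = 17.3, barley = 23.7)        # [kg/ha]: 3114 and 4266
    ```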

  10. Investigation of Water Dynamics and the Effect of Evapotranspiration on Grain Yield of Rainfed Wheat and Barley under a Mediterranean Environment: A Modelling Approach.

    PubMed

    Zhang, Kefeng; Bosch-Serra, Angela D; Boixadera, Jaume; Thompson, Andrew J

    2015-01-01

    Agro-hydrological models have increasingly become useful and powerful tools for optimizing water and fertilizer application, and for studying the environmental consequences. Accurate prediction of water dynamics in such models is essential for models to produce reasonable results. In this study, detailed simulations were performed for the water dynamics of rainfed winter wheat and barley grown under a Mediterranean climate over a 10-year period. The model employed (Yang et al., 2009. J. Hydrol., 370, 177-190) uses easily available agronomic data, and takes into consideration all key soil and plant processes controlling water dynamics in the soil-crop system, including the dynamics of root growth. The water requirement for crop growth was calculated according to FAO56, and the soil hydraulic properties were estimated using pedo-transfer functions (PTFs) based on soil physical properties and soil organic matter content. Results show that the simulated values of soil water content at depths of 15, 45 and 75 cm agreed well with the measurements, with a root mean squared error of 0.027 cm(3) cm(-3) and a model agreement index of 0.875. The simulated seasonal evapotranspiration (ET) ranged from 208 to 388 mm, and grain yield was found to correlate linearly with the simulated seasonal ET within the studied ET range. The simulated rates of grain yield increase were 17.3 and 23.7 kg ha(-1) for every mm of water evapotranspired for wheat and barley, respectively. The good agreement of soil water content between measurement and simulation, and the simulated relationships between grain yield and seasonal ET supported by data in the literature, indicate that the model performed well in modelling water dynamics for the studied soil-crop system, and therefore has the potential to be applied reliably and widely in precision agriculture. Finally, a two-stage approach using inverse modelling techniques to further improve model performance is discussed.

  11. Estimation of snow line elevation changes from MODIS snow cover data

    NASA Astrophysics Data System (ADS)

    Parajka, Juraj; Bezak, Nejc; Burkhart, John; Holko, Ladislav; Hundecha, Yeshewa; Krajči, Pavel; Mangini, Walter; Molnar, Peter; Sensoy, Aynur; Riboust, Phillippe; Rizzi, Jonathan; Thirel, Guillaume; Arheimer, Berit

    2017-04-01

    This contribution evaluates changes in snowline elevation during snowmelt runoff events in selected basins in Austria, France, Norway, Slovakia, Slovenia, Sweden, Switzerland and Turkey. The main objectives are to investigate the spatial and temporal differences in regional snowline elevation (RSLE) across Europe and to discuss the factors which control its change. The analysis is performed in two steps. In the first step, the regional snowline elevation is derived from daily MODIS snow cover data (MOD10A1) using the methodology of Krajčí et al. (2014). In the second step, the changes in RSLE are analysed for selected flood events in the period 2000-2015. The snowmelt runoff events are extracted from the Catalogue of identified flood peaks from the GRDC dataset (FLOOD TYPE experiment) available at http://www.water-switch-on.eu/sip-webclient/byod/#/resource/12056. The results will be discussed in terms of: (a) the availability and potential of MODIS snow cover data for identifying RSLE changes during snowmelt runoff events, (b) the spatial and temporal patterns of RSLE changes across Europe and (c) the factors controlling the RSLE change. The analysis is performed as an experiment in the Virtual Water Science Laboratory of the SWITCH-ON Project (http://www.water-switch-on.eu/). All data, tools and results of the analysis will be open and accessible through the Spatial Information Platform of the Project (http://www.water-switch-on.eu/sip-webclient/byod/). We believe that such a strategy will help to improve and advance comparative research and cooperation between different partners in hydrology (Ceola et al., 2015). References Ceola, S., Arheimer, B., Baratti, E., Blöschl, G., Capell, R., Castellarin, A., Freer, J., Han, D., Hrachowitz, M., Hundecha, Y., Hutton, C., Lindström, G., Montanari, A., Nijzink, R., Parajka, J., Toth, E., Viglione, A., and Wagener, T.: Virtual laboratories: new opportunities for collaborative water science, Hydrol. Earth Syst. Sci., 19, 2101-2117, doi:10.5194/hess-19-2101-2015, 2015. Krajčí, P., Holko, L., Perdigão, R.A.P., Parajka, J., 2014. Estimation of regional snowline elevation (RSLE) from MODIS images for seasonally snow covered mountain basins. Journal of Hydrology, 519, 1769-1778.
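
    In the spirit of Krajčí et al. (2014), the RSLE can be found as the elevation that minimizes the number of misclassified pixels: snow-covered pixels below the line plus snow-free pixels above it. A sketch with a synthetic snow mask:

    ```r
    set.seed(8)
    elev <- runif(5000, 200, 3000)                         # pixel elevations [m]
    snow <- as.integer(elev + rnorm(5000, 0, 150) > 1800)  # noisy snow mask

    candidates <- seq(200, 3000, by = 10)
    miscl <- sapply(candidates, function(h)
      sum(snow == 1 & elev < h) + sum(snow == 0 & elev >= h))
    rsle <- candidates[which.min(miscl)]                   # close to 1800 m here
    ```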

  12. Monthly water balance model for climate change analysis in agriculture with R

    NASA Astrophysics Data System (ADS)

    Kalicz, Péter; Herceg, András; Gribovszki, Zoltán

    2015-04-01

    For Hungary, regional climate model projections suggest a warmer climate and changes in the annual precipitation distribution. These changes force the whole agrarian sector to reconsider traditional cropping technologies. The situation is more serious in forestry because some forest populations are at their xeric distributional limits (Gálos et al., 2014). Additionally, a decision can have an impact for more than one hundred years. To support stakeholders, a project is developing a GIS (Geographic Information System) based decision support system. Hydrology plays a significant role in this system because water is often one of the most important limiting factors in Hungary. A modified Thornthwaite-type monthly water balance model was chosen to produce hydrological estimates for the GIS modules. This model is calibrated with the available data between 2000 and 2008. Besides other meteorological data, we mainly used an actual evapotranspiration map in the calibration phase, which was derived with the Complementary-relationship-based evapotranspiration mapping (CREMAP; Szilágyi and Kovács, 2011) technique. The calibration process is pixel based and has several stochastic steps. We tried to find a flexible solution for the model implementation which is easy to automate and can be integrated into GIS systems. The open-source R programming language was selected, which satisfies these demands well. The result of this development is summarized as an R package. This publication has been supported by the AGRARKLIMA.2 VKSZ_12-1-2013-0034 project. References Gálos, B., Antal, V., Czimber, K., Mátyás, Cs. (2014) Forest ecosystems, sewage works and droughts - possibilities for climate change adaptation. In: Santamarta, J.C., Hernandez-Gutiérrez, L.E., Arraiza, M.P. (eds) 2014. Natural Hazards and Climate Change/Riesgos Naturales y Cambio Climático. Madrid: Colegio de Ingenieros de Montes. ISBN 978-84-617-1060-7, D.L. TF 565-2014, 91-104 pp. Szilágyi, J., Kovács, Á. (2011) A calibration-free evapotranspiration mapping technique for spatially-distributed regional-scale hydrologic modeling. J. Hydrol. Hydromech., 59, 2, 118-130.
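
    A minimal Thornthwaite-type monthly balance in R, in the spirit of the model described above: a single soil-moisture store of capacity smax, actual ET limited by available water, and overflow as runoff. All parameter values and forcings are illustrative:

    ```r
    monthly_balance <- function(prec, pet, smax = 150) {
      n <- length(prec)
      s <- aet <- runoff <- numeric(n)
      s_prev <- smax / 2                       # initial storage
      for (m in 1:n) {
        avail     <- s_prev + prec[m]          # water available this month
        aet[m]    <- min(pet[m], avail)        # ET limited by available water
        s[m]      <- min(avail - aet[m], smax) # store fills up to capacity
        runoff[m] <- max(avail - aet[m] - smax, 0)
        s_prev    <- s[m]
      }
      data.frame(prec, pet, aet, storage = s, runoff)
    }

    prec <- c(35, 32, 30, 45, 60, 70, 50, 50, 40, 35, 50, 45)  # [mm/month]
    pet  <- c(5, 10, 25, 50, 85, 110, 120, 105, 65, 35, 12, 6) # [mm/month]
    wb   <- monthly_balance(prec, pet)
    ```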

  13. On the Maas problem of seawater intrusion combated by infiltration

    NASA Astrophysics Data System (ADS)

    Kacimov, A. R.

    2008-09-01

    The problem of Maas [Maas, K. 2007. Influence of climate change on a Ghijben-Herzberg lens. J. Hydrol. 347, 223-228] for infiltration inflow into a porous flat-roofed fresh water lens floating on the interface of an ascending Darcian saline water flow is shown to match exactly the Polubarinova-Kochina [Polubarinova-Kochina, P.Ya., 1977. Theory of Ground Water Movement. Nauka, Moscow (in Russian)] problem for flow in a lens capped by a cambered phreatic surface with uniform accretion. The Maas complex potential in the domain of heavy saline water seeping beneath the lens corresponds to that of an ideal fluid flow past an elliptical cylinder, which makes it possible to convert this potential into ascending-descending seepage flows with floating (but stagnant) DNAPL-LNAPL volumes. Similar matching is possible for the velocity potential of an axisymmetric flow past an ellipsoid and the hydrostatic pressure of a stagnant NAPL body stored in a semi-ellipsoidal pond.
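
    For reference, the classical balance behind such lens problems is the Ghijben-Herzberg relation: hydrostatic equilibrium between a fresh water column (density ρf) and the displaced saline water (density ρs) gives the interface depth z below sea level in terms of the water-table head h above it:

    ```latex
    \[
      \rho_s\, g\, z = \rho_f\, g\, (z + h)
      \quad\Longrightarrow\quad
      z = \frac{\rho_f}{\rho_s - \rho_f}\, h \;\approx\; 40\, h
      \qquad (\rho_f = 1000,\ \rho_s = 1025~\mathrm{kg\,m^{-3}})
    \]
    ```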

  14. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  15. Comparing a simple methodology to evaluate hydrodynamic parameters with rainfall simulation experiments

    NASA Astrophysics Data System (ADS)

    Di Prima, Simone; Bagarello, Vincenzo; Bautista, Inmaculada; Burguet, Maria; Cerdà, Artemi; Iovino, Massimo; Prosdocimi, Massimo

    2016-04-01

    Studying soil hydraulic properties is necessary for interpreting and simulating many hydrological processes of environmental and economic importance, such as the partitioning of rainfall into infiltration and runoff. The saturated hydraulic conductivity, Ks, exerts a dominant influence on the partitioning of rainfall into vertical and lateral flow paths. Therefore, estimates of Ks are essential for describing and modeling hydrological processes (Zimmermann et al., 2013). According to several investigations, Ks data collected by ponded infiltration tests can be expected to be unusable for interpreting field hydrological processes, and particularly infiltration. In fact, infiltration measured by ponding gives us information about the soil's maximum or potential infiltration rate (Cerdà, 1996). Moreover, especially for the hydrodynamic parameters, many replicated measurements have to be carried out to characterize an area of interest, since these parameters are known to vary widely both in space and time (Logsdon and Jaynes, 1996; Prieksat et al., 1994). Therefore, the technique to be applied at the near-point scale should be simple and rapid. Bagarello et al. (2014) and Alagna et al. (2015) suggested that the Ks values determined by an infiltration experiment carried out by applying water at a relatively large distance from the soil surface could be more appropriate than those obtained with a low height of water pouring for explaining surface runoff generation phenomena during intense rainfall events. These authors used the Beerkan Estimation of Soil Transfer parameters (BEST) procedure for complete soil hydraulic characterization (Lassabatère et al., 2006) to analyze the field infiltration experiments. This methodology, combining low and high heights of water pouring, seems appropriate to test the effect of intense and prolonged rainfall events on the hydraulic characteristics of the surface soil layer. In fact, an intense and prolonged rainfall event has a perturbing effect on the soil surface and can reasonably be better represented by the high runs than by the low runs (Alagna et al., 2015). Obviously, this methodology is also simpler than an approach involving soil characterization both before and after natural or simulated rainfall, since it needs less equipment and field work. On the other hand, rainfall simulation experiments are more realistic and accurate, but also more sophisticated and costly (Cerdà, 1997). Rainfall simulation is often used to measure the infiltration process (e.g., Bhardwaj and Singh, 1992; Cerdà, 1999, 1997, 1996; Cerdà and Doerr, 2007; Iserloh et al., 2013; Liu et al., 2011; Tricker, 1979), and it has become an important method for assessing soil erosion and soil hydrological processes (Iserloh et al., 2013). Its application allows a quick, specific and reproducible assessment of the meaning and impact of several factors, such as slope, soil type (infiltration, permeability), soil moisture, the splash effect of raindrops (aggregate stability), surface structure, vegetation cover and vegetation structure (Bowyer-Bower and Burt, 1989). The objectives of this investigation are: (i) to compare infiltration rates measured by applying water at a relatively large distance from the soil surface with those obtained by rainfall simulation experiments, and (ii) to verify whether the Ks values determined with the BEST procedure are in line with the occurrence of runoff measured with a more robust methodology. 
Acknowledgements The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603498 (RECARE project). References Alagna, V., Bagarello, V., Di Prima, S., Giordano, G., Iovino, M., 2015. Testing infiltration run effects on the estimated hydrodynamic parameters of a sandy-loam soil. Submitted to Geoderma. Bagarello, V., Castellini, M., Di Prima, S., Iovino, M., 2014. Soil hydraulic properties determined by infiltration experiments and different heights of water pouring. Geoderma 213, 492-501. doi:10.1016/j.geoderma.2013.08.032 Bhardwaj, A., Singh, R., 1992. Development of a portable rainfall simulator infiltrometer for infiltration, runoff and erosion studies. Agricultural Water Management 22, 235-248. doi:10.1016/0378-3774(92)90028-U Bouwer, H., 1966. Rapid field measurement of air entry value and hydraulic conductivity of soil as significant parameters in flow system analysis. Water Resour. Res. 2, 729-738. doi:10.1029/WR002i004p00729 Bowyer-Bower, T.A.S., Burt, T.P., 1989. Rainfall simulators for investigating soil response to rainfall. Soil Technology 2, 1-16. doi:10.1016/S0933-3630(89)80002-9 Cerdà, A., 1999. Simuladores de lluvia y su aplicación a la Geomorfologia: estado de la cuestión. Cuadernos de investigación geográfica 45-84. Cerdà, A., 1997. Seasonal changes of the infiltration rates in a Mediterranean scrubland on limestone. Journal of Hydrology 198, 209-225. doi:10.1016/S0022-1694(96)03295-7 Cerdà, A., 1996. Seasonal variability of infiltration rates under contrasting slope conditions in southeast Spain. Geoderma 69, 217-232. doi:10.1016/0016-7061(95)00062-3 Cerdà, A., Doerr, S.H., 2007. Soil wettability, runoff and erodibility of major dry-Mediterranean land use types on calcareous soils. Hydrol. Process. 21, 2325-2336. doi:10.1002/hyp.6755 Iserloh, T., Ries, J.B., Arnáez, J., Boix-Fayos, C., Butzen, V., Cerdà, A., Echeverría, M.T., Fernández-Gálvez, J., Fister, W., Geißler, C., Gómez, J.A., Gómez-Macpherson, H., Kuhn, N.J., Lázaro, R., León, F.J., Martínez-Mena, M., Martínez-Murillo, J.F., Marzen, M., Mingorance, M.D., Ortigosa, L., Peters, P., Regüés, D., Ruiz-Sinoga, J.D., Scholten, T., Seeger, M., Solé-Benet, A., Wengel, R., Wirtz, S., 2013. European small portable rainfall simulators: A comparison of rainfall characteristics. CATENA 110, 100-112. doi:10.1016/j.catena.2013.05.013 Lassabatère, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments - BEST. Soil Science Society of America Journal 70, 521. doi:10.2136/sssaj2005.0026 Liu, H., Lei, T.W., Zhao, J., Yuan, C.P., Fan, Y.T., Qu, L.Q., 2011. Effects of rainfall intensity and antecedent soil water content on soil infiltrability under rainfall conditions using the run off-on-out method. Journal of Hydrology 396, 24-32. doi:10.1016/j.jhydrol.2010.10.028 Logsdon, S.D., Jaynes, D.B., 1996. Spatial Variability of Hydraulic Conductivity in a Cultivated Field at Different Times. Soil Science Society of America Journal 60, 703. doi:10.2136/sssaj1996.03615995006000030003x Prieksat, M.A., Kaspar, T.C., Ankeny, M.D., 1994. Positional and Temporal Changes in Ponded Infiltration in a Corn Field. Soil Science Society of America Journal 58, 181. doi:10.2136/sssaj1994.03615995005800010026x Tricker, A.S., 1979. The design of a portable rainfall simulator infiltrometer. Journal of Hydrology 41, 143-147. 
doi:10.1016/0022-1694(79)90111-2 van De Giesen, N.C., Stomph, T.J., de Ridder, N., 2000. Scale effects of Hortonian overland flow and rainfall-runoff dynamics in a West African catena landscape. Hydrol. Process. 14, 165-175. doi:10.1002/(SICI)1099-1085(200001)14:1<165::AID-HYP920>3.0.CO;2-1 Zimmermann, A., Schinn, D.S., Francke, T., Elsenbeer, H., Zimmermann, B., 2013. Uncovering patterns of near-surface saturated hydraulic conductivity in an overland flow-controlled landscape. Geoderma 195-196, 1-11. doi:10.1016/j.geoderma.2012.11.002
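
    The transient analysis underlying BEST-type methods referenced above fits a two-term relation I(t) = S·√t + b·t to cumulative Beerkan infiltration data, where S is the sorptivity and b is related to Ks. A hedged sketch with invented data; the constant linking b to Ks below is purely illustrative, as BEST derives it from measured particle-size and infiltration data:

    ```r
    set.seed(6)
    t_s  <- seq(30, 1800, by = 30)                  # time since start [s]
    I_mm <- 0.8 * sqrt(t_s) + 0.004 * t_s + rnorm(60, 0, 0.4)  # cumulative [mm]

    fit <- lm(I_mm ~ 0 + sqrt(t_s) + t_s)           # no intercept: I(0) = 0
    S   <- coef(fit)[1]                             # sorptivity [mm s^-0.5]
    b   <- coef(fit)[2]                             # slope of the linear term
    Ks  <- b / 0.467     # hypothetical proportionality constant, see lead-in
    ```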

  16. Extreme flood estimation by the SCHADEX method in a snow-driven catchment: application to Atnasjø (Norway)

    NASA Astrophysics Data System (ADS)

    Paquet, Emmanuel; Lawrence, Deborah

    2013-04-01

    The SCHADEX method for extreme flood estimation was developed by Paquet et al. (2006, 2013), and since 2008 it has been the reference method used by Électricité de France (EDF) for dam spillway design. SCHADEX is a so-called "semi-continuous" stochastic simulation method in that flood events are simulated on an event basis and are superimposed on a continuous simulation of the catchment saturation hazard using rainfall-runoff modelling. The MORDOR hydrological model (Garçon, 1999) has thus far been used for the rainfall-runoff modelling. MORDOR is a conceptual, lumped, reservoir model with daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 15 years, in particular for inflow forecasts for French mountainous catchments. SCHADEX has now also been applied to the Atnasjø catchment (463 km²), a well-documented inland catchment in south-central Norway dominated by snowmelt flooding during spring and early summer. To support this application, a weather pattern classification based on extreme rainfall was first established for Norway (Fleig, 2012). This classification scheme was then used to build a Multi-Exponential Weather Pattern (MEWP) distribution, as introduced by Garavaglia et al. (2010) for extreme rainfall estimation. The MORDOR model was then calibrated against daily discharge data for Atnasjø. Finally, a SCHADEX simulation was run to build a daily discharge distribution, with a number of simulations sufficient for assessing the extreme quantiles. Detailed results are used to illustrate how SCHADEX handles the complex and interacting hydrological processes driving flood generation in this snow-driven catchment. Seasonal and monthly distributions, statistics for several thousand simulated events reaching the 1000-year return level, and an assessment of the role of snowmelt in extreme floods are presented. This study illustrates the complexity of extreme flood estimation in snow-driven catchments, and the need for a good representation of snow accumulation and melt processes in simulations for design flood estimation. In particular, the SCHADEX method is able to represent the range of possible catchment conditions (covering both soil moisture and snowmelt) in which extreme flood events can occur. This study is part of a collaboration between NVE and EDF, initiated within the FloodFreq COST Action (http://www.cost-floodfreq.eu/). References: Fleig, A., Scientific Report of the Short Term Scientific Mission of Anne Fleig visiting Électricité de France, FloodFreq COST Action - STSM report, 2012. Garavaglia, F., Gailhard, J., Paquet, E., Lang, M., Garçon, R., and Bernardara, P., Introducing a rainfall compound distribution model based on weather patterns sub-sampling, Hydrol. Earth Syst. Sci., 14, 951-964, doi:10.5194/hess-14-951-2010, 2010. Garçon, R., Modèle global pluie-débit pour la prévision et la prédétermination des crues, La Houille Blanche, 7-8, 88-95, doi:10.1051/lhb/1999088, 1999. Paquet, E., Gailhard, J. and Garçon, R. (2006), Evolution of the GRADEX method: improvement by atmospheric circulation classification and hydrological modeling, La Houille Blanche, 5, 80-90, doi:10.1051/lhb/2006091. Paquet, E., Garavaglia, F., Garçon, R. and Gailhard, J. (2012), The SCHADEX method: a semi-continuous rainfall-runoff simulation for extreme flood estimation, Journal of Hydrology, under revision.
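
    The MEWP distribution referenced above weights per-weather-pattern rainfall laws by the occurrence frequency of each pattern. As a minimal sketch of that mixture idea (the method of Garavaglia et al. (2010) fits exponential laws to supra-threshold rainfall per season and per pattern, which is simplified away here; all frequencies and scales below are hypothetical):

    import numpy as np

    def mewp_cdf(rainfall, pattern_probs, pattern_scales):
        """Compound rainfall CDF as a probability-weighted mixture of
        per-weather-pattern exponential distributions (the MEWP idea)."""
        rainfall = np.asarray(rainfall, dtype=float)
        cdf = np.zeros_like(rainfall)
        for p, scale in zip(pattern_probs, pattern_scales):
            cdf += p * (1.0 - np.exp(-rainfall / scale))
        return cdf

    # Hypothetical pattern frequencies and fitted exponential scales (mm/day):
    probs = [0.6, 0.3, 0.1]
    scales = [8.0, 15.0, 40.0]
    # Exceedance probability of a 100 mm/day rainfall under this mixture:
    print(1.0 - mewp_cdf(100.0, probs, scales))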

  17. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open-source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize and explore it, and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an on-line calculation of the objective functions for variable time windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods such as hydrological years or seasonal sections. Many hydrologists use long-term water balances as a pivotal tool in model evaluation. These allow inferences about different systematic model shortcomings and are an efficient way of communicating them in practice (Schulz et al., 2015). The evaluation and construction of such water balances is implemented in the presented package. During the (manual) calibration of a model, or in the scope of model development, many model runs and iterations are necessary. Thus, users are often interested in comparing different model results visually in order to learn about the model and to analyse the effect of parameter changes on the output. A method to illuminate these differences and the evolution of changes is also included. References: • Gupta, H.V.; Wagener, T.; Liu, Y. (2008): Reconciling theory with observations: elements of a diagnostic approach to model evaluation, Hydrol. Process. 22, doi: 10.1002/hyp.6989. • Klemeš, V. (1986): Operational testing of hydrological simulation models, Hydrolog. Sci. J., doi: 10.1080/02626668609491024. • Kling, H.; Stanzel, P.; Fuchs, M.; Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956. • Schulz, K.; Herrnegger, M.; Wesemann, J.; Klotz, D.; Senoner, T. (2015): Kalibrierung COSERO - Mur für Pro Vis, Verbund Trading GmbH (Abteilung STG), final report, Institute of Water Management, Hydrology and Hydraulic Engineering, University of Natural Resources and Applied Life Sciences, Vienna, Austria, 217 pp.
    • Zambrano-Bigiarini, M.; Bellin, A. (2012): Comparing goodness-of-fit measures for calibration of models focused on extreme events. European Geosciences Union (EGU) General Assembly, Geophysical Research Abstracts 14, EGU2012-11549-1.
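
    The two objective functions named in the abstract have standard definitions; visCOS itself is an R package, so the following Python sketch merely illustrates the criteria as usually defined (the KGE form follows Gupta et al., 2009; the data are invented):

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means no better
        than the mean of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kge(obs, sim):
        """Kling-Gupta Efficiency: combines correlation r, variability
        ratio alpha and bias ratio beta; 1 is a perfect fit."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()
        beta = sim.mean() / obs.mean()
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    # Illustrative evaluation over one sub-period (e.g. a hydrological year):
    obs = np.array([1.2, 1.0, 3.4, 5.6, 2.1, 1.5])
    sim = np.array([1.0, 1.1, 3.0, 6.0, 2.5, 1.4])
    print(nse(obs, sim), kge(obs, sim))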

  18. Simulating a Lowland Flash Flood in a Long-term Experimental Watershed with 7 Standard Hydrological Models

    NASA Astrophysics Data System (ADS)

    Torfs, P.; Brauer, C.; Teuling, R.; Kloosterman, P.; Willems, G.; Verkooijen, B.; Uijlenhoet, R.

    2012-12-01

    On 26 August 2010 the 6.5 km2 Hupsel Brook catchment in The Netherlands, which has been the experimental watershed employed by Wageningen University since the 1960s, was struck by an exceptionally heavy rainfall event (return period > 1000 years). We investigated the unprecedented flash flood triggered by this event, and this study improved our understanding of the dynamics of such lowland flash floods (Brauer et al., 2011). During this extreme event some thresholds became apparent that do not play a role during average conditions and are not incorporated in most rainfall-runoff models. This may lead to errors when these models are used to forecast runoff responses to rainfall events that are extreme today, but likely to become less exceptional as the climate changes. The aim of this research project was to find out to what extent different types of rainfall-runoff models are able to simulate this extreme event and, if not, which processes, thresholds or parameters are lacking to describe the event accurately. Five of the 7 employed models treat the catchment as a lumped system. This group includes the well-known HBV and Sacramento models. The Wageningen Model, which has been developed in our group, has a structure similar to HBV and the Sacramento Model. The SWAP (Soil, Water, Atmosphere, Plant) Model is a physically based model of a single soil column, but has been used here as a representation of the whole catchment. The LGSI (Lowland Groundwater Surface water Interaction) Model uses probability distributions to account for spatial variability in groundwater depth and the resulting flow routes in the catchment. We analyzed not only how accurately each model simulated the discharge, but also whether groundwater and soil moisture dynamics and the resulting flow processes were captured adequately. The sixth model is a spatially distributed model called SIMGRO. It is based on a MODFLOW groundwater model, extended with an unsaturated zone based on the previously mentioned SWAP model and a surface water network. This model has a very detailed groundwater-surface water interface and should therefore be particularly suitable to study the effect of the backwater feedbacks we observed during the flood. In addition, the effect of spatially varying soil characteristics on the runoff response has been studied. The final model is SOBEK, which was originally developed as a hydraulic model consisting of a surface water network with nodes and links. To some of the nodes, upstream areas with associated rainfall-runoff models have been assigned. This model is especially useful to study the effect of hydraulic structures, such as culverts, and of stream bed vegetation on dampening the flood peak. Brauer, C. C., Teuling, A. J., Overeem, A., van der Velde, Y., Hazenberg, P., Warmerdam, P. M. M. and Uijlenhoet, R.: Anatomy of extraordinary rainfall and flash flood in a Dutch lowland catchment, Hydrol. Earth Syst. Sci., 15, 1991-2005, 2011.

  19. Solute transport in crystalline rocks at Äspö--I: geological basis and model calibration.

    PubMed

    Mazurek, Martin; Jakob, Andreas; Bossart, Paul

    2003-03-01

    Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of a few centimetres only. It is concluded that neither the uncertainty regarding the length of individual fractures nor the detailed geometry of the network along the flowpath between injection and extraction boreholes is critical, because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information, and therefore considering limited matrix diffusion into a thin fault gouge horizon, resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption Kd values are more than one order of magnitude smaller compared to fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced. Copyright 2002 Elsevier Science B.V.

  20. Linking hydro-climate to the sediment archive: a combined monitoring and calibration study from a varved lake in central Turkey

    NASA Astrophysics Data System (ADS)

    Roberts, C. Neil; Dean, Jonathan R.; Eastwood, Warren J.; Jones, Matthew D.; Allcock, Samantha L.; Leng, Melanie J.; Metcalfe, Sarah E.; Woodbridge, Jessie; Yiǧitbaşıoǧlu, Hakan

    2016-04-01

    Hydro-climatic reconstructions from lake sediment proxies require an understanding of modern formation processes and calibration over multiple years. Here we use Nar Gölü, a non-outlet, monomictic maar lake in central Turkey, as a field site for such a natural experiment. Fieldwork since 1997 has included observations and measurements of lake water and sediment trap samples, and automated data logging (Jones et al., 2005; Woodbridge and Roberts, 2010; Dean et al., 2015). We compare these data to isotopic, chemical and biotic proxies preserved in the lake's annually varved sediments. Nar Gölü underwent a 3 m lake-level fall between 2000 and 2010, and δ18O in both water and carbonates is correlated with this lake-level fall, responding to the change in water balance. Over the same period, sedimentary diatom assemblages responded via changes in habitat availability and mixing regime, while diatom-inferred conductivity showed a rise in salinity, although with a non-linear response to hydro-climatic forcing. There were also non-linear shifts in carbonate mineralogy and elemental chemistry. Building on the relationship between lake water balance and the sediment isotope record, we calibrated sedimentary δ18O against local meteorological records to derive a P/E drought index for central Anatolia. Application of this to the longer sediment core isotope record from Nar Gölü (Jones et al., 2006) highlights major drought events over the last 600 years (Yiǧitbaşıoǧlu et al., 2015). Although this lacustrine record offers an archive of annually dated, decadally averaged hydro-climatic change, there were also times of non-linear lake response to climate. Robust reconstruction therefore requires understanding of physical processes as well as application of statistical correlations. Dean, J.R., Eastwood, W.J., Roberts, N., Jones, M.D., Yiǧitbaşıoǧlu, H., Allcock, S.L., Woodbridge, J., Metcalfe, S.E. and Leng, M.J. (2015) Tracking the hydro-climatic signal from lake to sediment: a field study from central Turkey, J. Hydrol. 529, 608-621. Jones, M.D., Leng, M.J., Roberts, C.N., Türkeş, M. and Moyeed, R. (2005) A coupled calibration and modelling approach to the understanding of dry-land lake oxygen isotope records. J. Paleolimnol. 34: 391-411. Jones, M.D., Roberts, N., Leng, M.J. and Türkeş, M. (2006) A high-resolution late Holocene lake isotope record from Turkey and links to North Atlantic and monsoon climate. Geology 34 (5), 361-364. Woodbridge, J. and Roberts, N. (2010) Linking neo- and palaeolimnology: a case study using crater lake diatoms from central Turkey. J. Paleolimnol. 44: 855-871. Yiǧitbaşıoǧlu, H., Dean, J.R., Eastwood, W.J., Roberts, N., Jones, M.D. and Leng, M.J. (2015) A 600 year-long drought index for central Anatolia. Journal of Black Sea/Mediterranean Environment, Special Issue: 84-88.
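
    The calibration step described above, fitting a transfer function between sedimentary δ18O and a meteorological P/E index over the monitored years and then applying it down-core, can be sketched as follows (a linear transfer function and all numbers are assumed purely for illustration; the abstract itself stresses intervals of non-linear response, which is why such a calibration must be checked):

    import numpy as np

    # Hypothetical paired series: annual carbonate delta-18O from the varved
    # sediment and a meteorological P/E (precipitation/evaporation) index.
    d18o = np.array([-1.2, -0.8, -0.5, 0.1, 0.4, 0.9])          # per mil
    pe_index = np.array([1.30, 1.10, 0.95, 0.80, 0.70, 0.55])   # illustrative

    # Least-squares linear calibration over the monitored years...
    slope, intercept = np.polyfit(d18o, pe_index, 1)

    # ...then applied down-core to reconstruct past P/E from sedimentary d18O.
    d18o_core = np.array([0.6, -0.3, 1.1])
    print(slope * d18o_core + intercept)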

  1. Comment on "Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods" [J. Hydrol., 546, 437-449, 10.1016/j.jhydrol.2017.01.025]

    NASA Astrophysics Data System (ADS)

    Barati, Reza

    2017-07-01

    Perumal et al. (2017) compared the performances of the variable parameter McCarthy-Muskingum (VPMM) model of Perumal and Price (2013) and the nonlinear Muskingum (NLM) model of Gill (1978) using hypothetical inflow hydrographs in an artificial channel. As input parameters, the first model needs the initial condition, the upstream boundary condition, Manning's roughness coefficient, the length of the routing reach, the cross-sections of the river reach and the bed slope, while the latter requires the initial condition, the upstream boundary condition and the hydrologic parameters (three parameters which can be calibrated using flood hydrographs of the upstream and downstream sections). The VPMM model was examined with available Manning's roughness values, whereas the NLM model was tested in both calibration and validation steps. As a final conclusion, Perumal et al. (2017) claimed that the NLM model should be retired from the literature of the Muskingum model. While the authors' intention is laudable, this comment examines some important issues in the subject matter of the original study.
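
    For context, the NLM model discussed uses the three-parameter nonlinear storage relation S = K[xI + (1-x)O]^m commonly attributed to Gill (1978); a minimal explicit-integration sketch (the parameter values and inflow hydrograph are hypothetical, and operational implementations use calibrated parameters and more robust integration schemes):

    import numpy as np

    def nlm_route(inflow, K, x, m, dt=1.0):
        """Route an inflow hydrograph with the three-parameter nonlinear
        Muskingum model (storage S = K * (x*I + (1-x)*O)**m), using a
        simple explicit Euler integration of dS/dt = I - O."""
        outflow = np.zeros_like(inflow, dtype=float)
        outflow[0] = inflow[0]                  # assume initial steady state
        S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
        for t in range(1, len(inflow)):
            S += dt * (inflow[t - 1] - outflow[t - 1])
            # invert the storage relation for the outflow at the new time level
            outflow[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
        return outflow

    # Hypothetical parameters and a triangular inflow hydrograph (m3/s):
    inflow = np.array([10, 30, 60, 90, 70, 50, 35, 25, 18, 13, 10], dtype=float)
    print(nlm_route(inflow, K=0.5, x=0.2, m=1.8, dt=1.0))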

  2. Coexpression of β-D-galactosidase and L-arabinose isomerase in the production of D-tagatose: a functional sweetener.

    PubMed

    Zhan, Yijing; Xu, Zheng; Li, Sha; Liu, Xiaoliu; Xu, Lu; Feng, Xiaohai; Xu, Hong

    2014-03-19

    The functional sweetener d-tagatose is commonly transformed from galactose by l-arabinose isomerase. To make use of lactose, a much cheaper starting material, hydrolysis and isomerization need to take place collaboratively. Therefore, a single-step method involving β-d-galactosidase was explored for d-tagatose production. The two vital genes, the β-d-galactosidase gene (lacZ) and an l-arabinose isomerase mutant gene (araA'), were extracted separately from Escherichia coli strains and incorporated into E. coli simultaneously. This gave us E. coli-ZY, a recombinant production strain capable of coexpressing the two key enzymes. The resulting cells exhibited maximum d-tagatose-producing activity at 34 °C and pH 6.5 and in the presence of borate, 10 mM Fe(2+), and 1 mM Mn(2+). Further monitoring showed that the recombinant cells could hydrolyze more than 95% of the lactose and convert 43% of the d-galactose into d-tagatose. This research has verified the feasibility of single-step d-tagatose fermentation, thereby laying the foundation for the industrial use of lactose.

  3. Corrigendum to "Diurnal dynamics of minor and trace elements in stream water draining Dongkemadi Glacier on the Tibetan Plateau and its environmental implications" [J. Hydrol. 541 (2016) 1104-1118]

    NASA Astrophysics Data System (ADS)

    Li, Xiangying; He, Xiaobo; Kang, Shichang; Mika, Sillanpää; Ding, Yongjian; Han, Tianding; Wu, Qingbai; Yu, Zhongbo

    2017-12-01

    The authors regret: At the Dongkemadi Glacier (DG) basin, the daily and annual meltwater discharge at gauging section S1 should be corrected. Namely, the annual discharge should be 2.74 × 10⁷ m³ over the period 1 June to 30 September 2013. Thus, variation in solute exports is controlled by changes in discharge and specific solute concentration (Fig. 9), and the estimated solute export, cation denudation rate (CDR) and discharge-normalized CDR are 417 tons, 185 Σ*meq⁺ m⁻² and 189 Σ*meq⁺ m⁻³ (with an annual specific discharge of 0.98 m), respectively, in 2013 (Table 4). In comparison, the CDR at the DG basin is within the range of previously published CDRs (94-4200 Σ*meq⁺ m⁻²) from glacial catchments (Hodson et al., 2010). The discharge-normalized CDR is lower than the rates from most glacial catchments, but is higher than those from Mittivakkat (Greenland), South Cascade (North America) and Lewis River (Arctic) (Yde et al., 2004, 2014; Hodson et al., 2000, 2010).
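
    The corrected discharge-normalized CDR follows directly from the corrected CDR and the annual specific discharge; a quick consistency check of the figures quoted above:

    # Discharge-normalized cation denudation rate = CDR / specific discharge.
    cdr = 185.0                 # Σ*meq+ m-2 (corrected annual CDR)
    specific_discharge = 0.98   # m (annual specific discharge)
    print(cdr / specific_discharge)   # ≈ 189 Σ*meq+ m-3, as stated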

  4. Cloning, Expression and Characterization of a Novel Thermophilic Polygalacturonase from Caldicellulosiruptor bescii DSM 6725

    PubMed Central

    Chen, Yanyan; Sun, Dejun; Zhou, Yulai; Liu, Liping; Han, Weiwei; Zheng, Baisong; Wang, Zhi; Zhang, Zuoming

    2014-01-01

    We cloned the gene ACM61449 from the anaerobic, thermophilic Caldicellulosiruptor bescii and expressed it in Escherichia coli Origami (DE3). After purification through thermal treatment and Ni-NTA agarose column extraction, we characterized the properties of the recombinant protein (CbPelA). The optimal temperature and pH of the protein were 72 °C and 5.2, respectively. CbPelA demonstrated high thermal stability, with a half-life of 14 h at 70 °C. CbPelA also showed very high activity on polygalacturonic acid (PGA), and released monogalacturonic acid as its sole product. The Vmax and Km of CbPelA were 384.6 U·mg−1 and 0.31 mg·mL−1, respectively. CbPelA was also able to hydrolyze methylated pectin (48% and 10% relative activity on 20%-34% and 85% methylated pectin, respectively). The high thermo-activity and methylated-pectin hydrolysis activity of CbPelA suggest that it has potential applications in the food and textile industries. PMID:24705464
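
    With the reported kinetic constants, the Michaelis-Menten rate law gives the expected specific activity at any substrate concentration; a one-line check (the 2 mg/mL PGA level is hypothetical):

    # Michaelis-Menten rate for CbPelA on polygalacturonic acid, using the
    # reported constants Vmax = 384.6 U/mg and Km = 0.31 mg/mL.
    def mm_rate(substrate_mg_per_ml, vmax=384.6, km=0.31):
        return vmax * substrate_mg_per_ml / (km + substrate_mg_per_ml)

    # At a hypothetical 2 mg/mL PGA the enzyme already runs near saturation:
    print(mm_rate(2.0))   # ~333 U/mg, i.e. ~87% of Vmax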

  5. Hydrodynamic characterisation of a heterogeneous aquifer system under a semi-arid climate

    NASA Astrophysics Data System (ADS)

    Drias, T.; Toubal, A. Ch

    2009-04-01

    The study area is part of the Mellegue catchment (north-east Algeria) and is characterised by a semi-arid climate. The water-bearing system is formed by Plio-Quaternary alluvium resting on a marly substratum of Eocene age. A geostatistical treatment of the hydrodynamic parameters (hydraulic head, transmissivity) allowed the study of their spatial distribution by block kriging and the identification of zones with high water-bearing potential. In this respect, the zone of Ain Chabro, situated in the south of the plain, shows the best transmissivity values. The use of a two-dimensional finite-difference model in steady state allowed us to establish the overall water balance of the aquifer and to refine the transmissivity field, whose values range between 10⁻⁴ and 10⁻² m²/s. The method associating the probabilistic kriging approach with the deterministic model facilitated model calibration and clarified the infiltration value. Key words: hydrodynamics, geostatistics, modelling, Chabro, Tébessa.
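
    As an illustration of the kriging step (a minimal point ordinary-kriging sketch with an assumed isotropic spherical semivariogram; the cited study used block kriging on real field data, and all coordinates, values and variogram parameters below are hypothetical):

    import numpy as np

    def ordinary_kriging(xy, values, target, sill=1.0, rng=5000.0):
        """Minimal ordinary kriging of one target point, assuming an
        isotropic spherical semivariogram with given sill and range."""
        def gamma(h):
            h = np.minimum(h / rng, 1.0)
            return sill * (1.5 * h - 0.5 * h ** 3)

        n = len(values)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        # Kriging system: [Gamma 1; 1' 0] [w; mu] = [gamma0; 1]
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
        w = np.linalg.solve(A, b)[:n]
        return w @ values

    # Hypothetical transmissivity observations; krige the log10 values:
    xy = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1500.0], [1200.0, 900.0]])
    logT = np.log10([2e-4, 8e-4, 5e-3, 1e-2])
    print(10 ** ordinary_kriging(xy, logT, np.array([600.0, 600.0])))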

  6. Land surface controls on afternoon precipitation diagnosed from observational data: Uncertainties, confounding factors and the possible role of interception storage

    NASA Astrophysics Data System (ADS)

    Guillod, B. P.; Orlowsky, B.; Seneviratne, S. I.

    2013-12-01

    The feedback between soil moisture and precipitation has long been a topic of interest due to its potential for improving seasonal forecasts. The generally proposed feedbacks assume a control of soil moisture on the flux partitioning (i.e. the evaporative fraction, EF) at the land surface, which then influences precipitation. Our study (Guillod et al., in prep.) addresses the poorly understood link between EF and precipitation by investigating the impact of before-noon EF on the frequency of afternoon precipitation over the contiguous US. We analyze remote sensing data products (EF from GLEAM, Miralles et al. 2011; radar precipitation from NEXRAD), FLUXNET station data, and the North American Regional Reanalysis (NARR). While most datasets agree on the existence of a region of positive relationship between EF and precipitation in the Eastern US (e.g. Findell et al. 2011), observation-based estimates indicate a stronger relationship in the Western US, which is not found in NARR. Investigating these differences, we find that much of these relationships can be explained by precipitation persistence alone, with ambiguous results on the additional role of EF. Regional analyses reveal contrasting mechanisms over different regions, which fit well with the known distribution of vegetation cover and soil moisture-climate regimes. Over the Eastern US, our analyses suggest that the EF-precipitation feedback, if present, takes place on a short day-to-day time scale, where interception evaporation drives the relationship rather than soil moisture, due to the high forest cover and the wet regime. Over the Western US, the impact of EF on convection triggering is additionally linked to soil moisture variations, owing to the soil moisture-limited climate regime. References: Findell, K. L., et al., 2011: Probability of afternoon precipitation in eastern United States and Mexico enhanced by high evaporation. Nature Geosci., 4 (7), 434-439, doi:10.1038/ngeo1174, URL http://www.nature.com/doifinder/10.1038/ngeo1174. Guillod, B. P., et al.: Land surface controls on afternoon precipitation diagnosed from observational data: Uncertainties, confounding factors and the possible role of interception storage, manuscript in preparation. Miralles, D. G., et al., 2011: Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci., 15 (2), 453-469, doi:10.5194/hess-15-453-2011, URL http://www.hydrol-earth-syst-sci.net/15/453/2011/. Acknowledgement: We thank a number of people for their comments and contributions: Diego Miralles, Kirsten Findell, Adriaan Teuling, Nina Buchmann, Philippe Ciais, Bart van den Hurk, Pierre Gentine, Benjamin Lintner, Markus Reichstein, Han Dolman and the PIs of the FLUXNET sites used, as well as the FLUXNET community.
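
    For reference, the evaporative fraction analysed throughout this study is the share of the turbulent heat flux that goes into evaporation; a one-line sketch with illustrative flux-tower values:

    # Evaporative fraction: latent heat flux over the sum of latent and
    # sensible heat fluxes at the land surface.
    def evaporative_fraction(latent_w_m2, sensible_w_m2):
        return latent_w_m2 / (latent_w_m2 + sensible_w_m2)

    # Illustrative before-noon fluxes (W m-2):
    print(evaporative_fraction(latent_w_m2=250.0, sensible_w_m2=150.0))  # 0.625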

  7. Intercomparison of DEM-based approaches for the identification of flood-prone areas in different geomorphologic and climatic conditions

    NASA Astrophysics Data System (ADS)

    Samela, Caterina; Nardi, Fernando; Grimaldi, Salvatore; De Paola, Francesco; Sole, Aurelia; Manfreda, Salvatore

    2014-05-01

    Floods represent the most critical natural hazard for many countries, and their frequency appears to be increasing in recent times. The legal obligations of public administrations and the growing interest of private companies (e.g., insurance companies) in identifying areas exposed to flood risk are driving the need to develop new tools for risk classification over large areas. Nowadays, among the numerous hydrologic and hydraulic methods regularly used for practical applications, 2-D hydraulic modeling represents the most accurate approach for deriving detailed inundation maps. Nevertheless, the data requirements of these modeling approaches are certainly onerous, limiting their applicability over large areas. On this issue, the terrain morphology may provide an extraordinary amount of information useful to detect areas that are particularly prone to serious flooding. In the present work, we compare the reliability of different DEM-derived quantitative morphologic descriptors in characterizing the relationships between geomorphic attributes and flood exposure. The tests are carried out using techniques of pattern classification, such as linear binary classifiers (Degiorgis et al., 2012), whose ability is evaluated through performance measures. Simple and composed morphologic features are taken into account. The morphological features are: the upslope contributing area (A), the local slope (S), the length of the path that hydrologically connects the location under exam to the nearest element of the drainage network (D), the difference in elevation between the cell under exam and the final point of the same path (H), and the curvature (∇²H). In addition to the mentioned features, the study takes into consideration a number of composed indices, such as the modified topographic index (Manfreda et al., 2011), the downslope index (DI) proposed by Hjerdt et al. (2004), the ratio between the elevation difference H and the distance to the network D, and other indices. Each binary classifier is applied in several catchments in order to verify the reproducibility of the procedures in different geomorphologic, climatic and hydrologic conditions. The study explores the use of these procedures in gauged river basins located in Italy and in an ungauged basin located in Africa. References: Degiorgis, M., G. Gnecco, S. Gorni, G. Roth, M. Sanguineti, A.C. Taramasso, 2012. Classifiers for the detection of flood-prone areas using remote sensed elevation data, J. Hydrol., 470-471, 302-315. Hjerdt, K.N., J.J. McDonnell, J. Seibert, A. Rodhe, 2004. A new topographic index to quantify downslope controls on local drainage, Water Resour. Res., 40, W05602. Manfreda, S., M. Di Leo, A. Sole, 2011. Detection of flood prone areas using digital elevation models, J. Hydrol. Eng., 16(10), 781-790.
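
    A minimal sketch of the single-feature linear binary classifier used in this kind of study (in the spirit of Degiorgis et al., 2012: threshold one DEM-derived feature and score it against a reference flood map; the feature values, threshold and flood map below are hypothetical):

    import numpy as np

    def classify_and_score(feature, flooded, threshold):
        """Cells with the feature below the threshold are predicted
        flood-prone; returns the true and false positive rates used to
        score (and calibrate the threshold of) the classifier."""
        predicted = feature < threshold
        tpr = np.mean(predicted[flooded])     # hit rate on flooded cells
        fpr = np.mean(predicted[~flooded])    # false alarms on dry cells
        return tpr, fpr

    # Hypothetical data: H = elevation above the nearest stream (m), and a
    # reference flood map from hydraulic modelling (True = inundated).
    H = np.array([0.5, 1.2, 0.3, 4.0, 6.5, 2.8, 1.5, 9.0])
    flood_map = np.array([True, True, True, False, False, True, False, False])
    print(classify_and_score(H, flood_map, threshold=2.0))   # (0.75, 0.25)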

  8. Towards Flange-to-Flange Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Williams, Robert

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbopump geometry through numerical simulation will be of significant value toward design. This will help to improve the safety of future space missions. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the MPI and MLP versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbopump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. The performance of the two time integration schemes for time-accurate computations is investigated. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive. The current geometry for the LOX boost turbopump has various rotating and stationary components, such as the inducer, stators, kicker and hydraulic turbine, where the flow is extremely unsteady. Figure 1 shows the geometry and computed surface pressure of the inducer. The inducer and the hydraulic turbine rotate at different rotational speeds.

  9. The problem with simple lumped parameter models: Evidence from tritium mean transit times

    NASA Astrophysics Data System (ADS)

    Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr

    2017-04-01

    Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and the errors resulting from disregarding the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium the error applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole, with parameters representing averages for the system). With well-chosen compound LPMs (which are combinations of simple LPMs), on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically and geologically validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. (2016): Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
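
    The nonlinearity described above can be made concrete with a two-subsystem toy example (piston-flow ages are used here instead of the gamma models of the abstract, purely for clarity; all numbers are illustrative): tritium decays exponentially with age, so the concentration of a young/old mixture maps, under a single lumped model, to an apparent MTT far younger than the true mean age.

    import numpy as np

    half_life = 12.32                    # tritium half-life in years
    lam = np.log(2.0) / half_life

    c_in = 2.0                           # assumed constant input (TU)
    age_young, age_old = 5.0, 100.0      # two subsystem MTTs (piston flow)

    # 50:50 mixture of the two subsystems at the catchment outlet:
    c_mix = 0.5 * c_in * (np.exp(-lam * age_young) + np.exp(-lam * age_old))

    true_mean_age = 0.5 * (age_young + age_old)     # 52.5 years
    apparent_age = -np.log(c_mix / c_in) / lam      # what one simple LPM infers
    print(true_mean_age, apparent_age)              # ~52.5 vs ~17.2 years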

  10. The Irma-sponge Program: Methodologies For Sustainable Flood Risk Management Along The Rhine and Meuse Rivers

    NASA Astrophysics Data System (ADS)

    Hooijer, A.; van Os, A. G.

    Recent flood events and socio-economic developments have increased the awareness of the need for improved flood risk management along the Rhine and Meuse Rivers. In response to this, the IRMA-SPONGE program incorporated 13 research projects in which over 30 organisations from all 6 river basin countries co-operated. The program is financed partly by the European INTERREG Rhine-Meuse Activities (IRMA). The main aim of IRMA-SPONGE is defined as: "The development of methodologies and tools to assess the impact of flood risk reduction measures and of land-use and climate change scenarios. This to support the spatial planning process in establishing alternative strategies for an optimal realisation of the hydraulic, economical and ecological functions of the Rhine and Meuse River Basins." Further important objectives are to promote transboundary co-operation in flood risk management by both scientific and management organisations, and to promote public participation in flood management issues. The projects in the program are grouped in three clusters, looking at measures from different scientific angles. The results of the projects in each cluster have been evaluated to define recommendations for flood risk management; some of these outcomes call for a change to current practices, e.g.: 1. (Flood Risk and Hydrology cluster): hydrological changes due to climate change exceed those due to further land use change, and are significant enough to necessitate a change in flood risk management strategies if the currently claimed protection levels are to be sustained. 2. (Flood Protection and Ecology cluster): to not only provide flood protection but also enhance the ecological quality of rivers and floodplains, new flood risk management concepts ought to integrate ecological knowledge from start to finish, with a clear perspective on the type of nature desired and the spatial and time scales considered. 3. (Flood Risk Management and Spatial Planning cluster): extreme floods cannot be prevented by taking mainly upstream measures; significant and space-consuming local measures will therefore be needed in the lower Rhine and Meuse deltas. However, there is also a need for improved flood risk management upstream, which calls for better spatial planning procedures. More detailed information on the IRMA-SPONGE program can be found on our website: www.irma-sponge.org.

  11. Simulations and field observations of root water uptake in plots with different soil water availability.

    NASA Astrophysics Data System (ADS)

    Cai, Gaochao; Vanderborght, Jan; Couvreur, Valentin; Javaux, Mathieu; Vereecken, Harry

    2015-04-01

    Root water uptake is a key process in the hydrological cycle and is vital for water management in agronomy. In most models of root water uptake, the spatial and temporal soil water status and the plant root distributions are required for water flow simulations. However, dynamic root growth and root distributions are difficult and time-consuming to measure with standard approaches. Furthermore, root water uptake cannot be measured directly in the field. Therefore, it is necessary to incorporate monitoring data of soil water content and potential and of root distributions within a modeling framework to explore the interaction between soil water availability and root water uptake. However, most models lack a physically based concept to describe water uptake from soil profiles with vertical variations in soil water availability. In this contribution, we present an experimental setup in which root development, soil water content and soil water potential are monitored non-invasively in two field plots with different soil textures and for three treatments with different soil water availability: natural rain, sheltered and irrigated treatments. Root development is monitored using 7-m long, horizontally installed minirhizotron tubes at six depths, with three replicates per treatment. The monitoring data are interpreted using a one-dimensional upscaled version of a root water uptake model that describes flow in the coupled soil-root architecture, considering water potential gradients in the system and the hydraulic conductances of the soil and root system (Couvreur et al., 2012). This model approach links the total root water uptake to an effective soil water potential in the root zone. The local root water uptake is a function of the difference between the local soil water potential and the effective root zone water potential, so that compensatory uptake in heterogeneous soil water potential profiles is simulated. The root system conductance is derived from inverse modelling using measurements of soil water potentials, water contents, and root distributions. The results showed that this modelling approach reproduced soil water dynamics well in the different plots and treatments. Root water uptake was reduced when the effective soil water potential decreased to around -70 to -100 kPa in the root zone. Couvreur, V., Vanderborght, J., and Javaux, M.: A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach, Hydrol. Earth Syst. Sci., 16, 2957-2971, doi:10.5194/hess-16-2957-2012, 2012.
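
    A sketch of the uptake formulation described above, as we read Couvreur et al. (2012): the total uptake is driven by an effective root-zone water potential, and a compensatory term shifts uptake towards wetter layers (all parameter values below are hypothetical and the units merely indicative):

    import numpy as np

    def couvreur_uptake(psi_soil, suf, krs, psi_collar, kcomp):
        """Macroscopic root water uptake with compensation: the standard
        (uniform) uptake is distributed by the standard uptake fractions
        (SUF), and a compensatory term redistributes uptake towards
        layers wetter than the effective root-zone potential."""
        psi_eff = np.sum(suf * psi_soil)          # effective root-zone potential
        q_total = krs * (psi_eff - psi_collar)    # total uptake
        # compensatory term sums to zero, so total uptake is conserved
        return suf * q_total + kcomp * suf * (psi_soil - psi_eff)

    # Hypothetical profile: four layers, drier at the top.
    psi_soil = np.array([-90.0, -60.0, -30.0, -10.0])   # kPa (illustrative)
    suf = np.array([0.4, 0.3, 0.2, 0.1])                # standard uptake fractions
    print(couvreur_uptake(psi_soil, suf, krs=0.02, psi_collar=-300.0, kcomp=0.02))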

  12. Antireflective graded index silica coating, method for making

    DOEpatents

    Yoldas, Bulent E.; Partlow, Deborah P.

    1985-01-01

    Antireflective silica coating for vitreous material is substantially non-reflecting over a wide band of radiations. This is achieved by providing the coating with a graded degree of porosity which grades the index of refraction between that of air and that of the vitreous material of the substrate. To prepare the coating, there is first prepared a silicon-alkoxide-based coating solution of particular polymer structure produced by a controlled proportion of water to alkoxide and a controlled concentration of alkoxide in solution, along with a small amount of catalyst. The primary solvent is alcohol, and the solution is polymerized and hydrolyzed under controlled conditions prior to use. The prepared solution is applied as a film to the vitreous substrate and rapidly dried. It is thereafter heated under controlled conditions to volatilize the hydroxyl radicals and organics therefrom and then to produce a suitable pore morphology in the residual porous silica layer. The silica layer is then etched in order to enlarge the pores in a graded fashion, with the largest of the pores remaining being sufficiently small that radiations to be passed through the substrate are not significantly scattered. For use with quartz substrates, extremely durable coatings which display only 0.1% reflectivity have been prepared.

  13. Long-term effect of pH on short-chain fatty acids accumulation and microbial community in sludge fermentation systems.

    PubMed

    Yuan, Yue; Wang, Shuying; Liu, Ye; Li, Baikun; Wang, Bo; Peng, Yongzhen

    2015-12-01

    The long-term effects of pH (4, 10, and uncontrolled) on short-chain fatty acid (SCFA) accumulation, microbial community and sludge reduction were investigated in waste activated sludge (WAS) fermentors for over 90 days. The average SCFA accumulation was 1721.4 (at pH 10), 114.2 (at pH 4), and 58.1 (at uncontrolled pH) mg chemical oxygen demand (COD)/L. About 31.65 mg COD/L was produced at pH 10, accounting for 20% of the influent COD. Illumina MiSeq sequencing revealed that Alcaligenes (hydrolytic bacteria) and Erysipelothrix (acidogenic bacteria) were enriched at pH 10, while fewer acidogenic bacteria existed at pH 4 than at pH 10, and no acidogenic bacteria were detected at the uncontrolled pH. The ratios of archaea to bacteria were 1:41, 1:16, and 1:9 at pH 10, pH 4, and uncontrolled pH, respectively. This study elucidated the effects of pH on WAS fermentation, and established the correlation of microbial structure with SCFA accumulation and sludge reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Analysis of the convective timescale during the major floods in the NE Iberian Peninsula since 1871

    NASA Astrophysics Data System (ADS)

    Pino, David; Reynés, Artur; Mazon, Jordi; Carles Balasch, Josep; Lluis Ruiz-Bellet, Josep; Tuset, Jordi; Barriendos, Mariano; Castelltort, Xavier

    2016-04-01

    Floods are the most severe natural hazard in the western Mediterranean basin. They cause most of the damage and most of the victims. Some of the selected floods caused more than one hundred casualties each, as well as extensive damage to infrastructure. In a previous work (Balasch et al., 2015), using the PREDIFLOOD database (Barriendos et al., 2014), we studied the atmospheric conditions that occurred during some of the most important floods in the north-east of the Iberian Peninsula in the last centuries: 1874, 1875, 1894, 1897, 1898, 1901, 1907, 1913, 1919, 1932, 1937, 1940, 1962, 1963, 1977, 1994, 1996, and 2000. We analyzed the atmospheric synoptic situations at the time of each flood from the data provided by the NOAA 20th Century Reanalysis, and we compared them to the rainfall spatial distributions obtained with hydrological modeling. In this work we extend the previous investigation by analyzing the evolution of a convective index proposed by Done et al. (2006) and modified by Molini et al. (2011). This index, called the convective timescale, is obtained from the evolution of CAPE and is used to separate equilibrium from non-equilibrium convection. In the former, CAPE generated by large-scale processes is balanced by its consumption due to convection. In the latter, CAPE is created by large-scale processes over a long time and is rapidly consumed during outbreaks of convection. The two situations produce totally different evolutions of CAPE, with low and approximately constant values in the first case and large, variable values in the second. Additionally, the rainfall rate can be estimated from this index. We use data provided by the NOAA 20th Century Reanalysis to calculate the convective timescale and to analyze its evolution and horizontal distribution. We study the correspondence between the convective timescale, the season when the flood occurred, the duration of the rainfall, and the specific peak flow rate of the flood. Finally, for the most recent episodes, the rainfall rate estimated from the convective timescale is compared with the observations. Balasch, J. C., Ruiz-Bellet, J. L., Tuset, J., Barriendos, M., Mazón, J., Pino, D. and Castelltort, X.: Transdisciplinary and multiscale reconstruction of the major flash floods in NE Iberian Peninsula. EGU General Assembly, 2015. Barriendos, M., Ruiz-Bellet, J. L., Tuset, J., Mazon, J., Balasch, J. C., Pino, D., Ayala, J. L.: The "Prediflood" database of historical floods in Catalonia (NE Iberian Peninsula) AD 1035-2013, and its potential applications in flood analysis, Hydrol. Earth Syst. Sci., 18, 4807-4823, 2014. Done, J. M., Craig, G. C., Gray, S. L., Clark, P. A., and Gray, M. E. B.: Mesoscale simulations of organized convection: Importance of convective equilibrium, Q. J. Roy. Meteor. Soc., 132, 737-756, 2006. Molini, L., Parodi, A., Rebora, N. and Craig, G. C.: Classifying severe rainfall events over Italy by hydrometeorological and dynamical criteria, Q. J. Roy. Meteor. Soc., 137, 148-154, 2011.
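
    Done et al. (2006) and Molini et al. (2011) give specific formulations of the index; its definitional core, CAPE divided by the rate at which CAPE is removed, can be sketched as follows (the published formulations estimate the removal rate from precipitation-driven latent heating rather than from raw finite differences, and the CAPE series below is invented):

    import numpy as np

    def convective_timescale(cape, dt_hours=3.0):
        """Definitional sketch: tau_c = CAPE / |dCAPE/dt|, evaluated with
        centred finite differences on a (reanalysis) CAPE time series.
        Short tau_c -> equilibrium convection (CAPE consumed as fast as it
        is produced); long tau_c -> non-equilibrium convection, with CAPE
        stored over a long time and then rapidly released."""
        dcape_dt = np.gradient(cape, dt_hours)        # J kg-1 h-1
        with np.errstate(divide="ignore", invalid="ignore"):
            tau = cape / np.abs(dcape_dt)             # hours
        return tau

    # Illustrative 3-hourly CAPE evolution (J/kg) around a convective outbreak:
    cape = np.array([200.0, 600.0, 1400.0, 2200.0, 900.0, 300.0, 150.0])
    print(convective_timescale(cape))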

  15. How important is the spatiotemporal structure of a rainfall field when generating a streamflow hydrograph? An investigation using Reverse Hydrology

    NASA Astrophysics Data System (ADS)

    Kretzschmar, Ann; Tych, Wlodek; Beven, Keith; Chappell, Nick

    2017-04-01

    Flooding is the most widely occurring natural disaster, affecting thousands of lives and businesses worldwide each year, and the size and frequency of flood events are predicted to increase with climate change. The main input variable for models used in flood prediction is rainfall. Estimating the rainfall input is often based on a sparse network of raingauges, which may or may not be representative of the salient rainfall characteristics responsible for generating storm hydrographs. A method based on Reverse Hydrology (Kretzschmar et al., 2014, Environ. Modell. Softw.) has been developed and is being tested using the intensively instrumented Brue catchment (Southwest England) to explore the spatiotemporal structure of the rainfall field (using 23 rain gauges over the 135.2 km² basin). We compare how well the rainfall measured at individual gauges, or averaged over the basin, represents the rainfall inferred from the streamflow signal. How important is it to get the detail of the spatiotemporal rainfall structure right? Rainfall is transformed by catchment processes as it moves to streams, so exact duplication of the structure may not be necessary. The 'true' rainfall estimated using 23 gauges over 135.2 km² is likely to be a good estimate of the overall catchment rainfall; however, the integration process 'smears' the rainfall patterns in time, i.e. it reduces the number of rain events and lengthens them as they travel across the catchment. This may have little impact on the simulation of stream hydrographs when events are extensive across the catchment (e.g., frontal rainfall events) but may be significant for high-intensity, localised convective events. The Reverse Hydrology approach uses the streamflow record to infer a rainfall sequence with a lower time resolution than the original input time series. The inferred rainfall series is, however, able to simulate streamflow as well as the observed, high-resolution rainfall does (Kretzschmar et al., 2015, Hydrol. Res.). Most gauged catchments in the UK of a similar size would only have data available from 1 to 3 raingauges. The high density of the Brue raingauge network allows a good estimate of the 'true' catchment rainfall to be made and compared with data from an individual raingauge, as if that were the only data available. In addition, the rainfall from each raingauge is compared with rainfall inferred from streamflow using data from the selected individual raingauge, and also inferred from the full catchment network. The stochastic structure of the rainfall from all of these datasets is compared using a combination of traditional statistical measures, i.e., the first 4 moments of the rainfall totals and their residuals, plus the number, length and distribution of wet and dry periods, rainfall intensity characteristics, and their ability to generate the observed stream hydrograph. Reverse Hydrology, which utilises information present in both the input rainfall and the output hydrograph, has provided a method of investigating the quality of the information each gauge adds to the catchment average (Kretzschmar et al., 2016, Procedia Eng.). Further, it has been used to ascertain how important reproducing the detailed rainfall structure really is when used for flow prediction.
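
    A sketch of the kind of comparison described above, computing some of the listed statistics (first four moments plus wet-spell counts and lengths) for two illustrative series; the wet threshold and all data are hypothetical:

    import numpy as np
    from scipy import stats

    def rainfall_stats(rain, wet_threshold=0.1):
        """Summary statistics used to compare gauge, catchment-average and
        streamflow-inferred rainfall series."""
        wet = rain > wet_threshold
        # run-length encode the wet/dry sequence to find wet spells
        changes = np.flatnonzero(np.diff(wet.astype(int)))
        starts = np.r_[0, changes + 1]
        lengths = np.diff(np.r_[starts, wet.size])
        wet_spells = lengths[wet[starts]]
        return {
            "mean": rain.mean(),
            "variance": rain.var(),
            "skewness": stats.skew(rain),
            "kurtosis": stats.kurtosis(rain),
            "n_wet_spells": len(wet_spells),
            "mean_wet_length": wet_spells.mean() if len(wet_spells) else 0.0,
        }

    # Compare a single hypothetical gauge against a catchment-average series:
    gauge = np.array([0.0, 0.0, 5.2, 12.1, 0.0, 0.3, 0.0, 8.4, 2.2, 0.0])
    catch = np.array([0.1, 0.4, 4.0, 9.5, 1.2, 0.6, 0.2, 6.1, 3.0, 0.4])
    print(rainfall_stats(gauge))
    print(rainfall_stats(catch))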

  16. Development and assessment of an efficient vadose zone module solving the 1D Richards' equation and including root extraction by plants

    NASA Astrophysics Data System (ADS)

    Varado, N.; Braud, I.; Ross, P. J.

    2006-05-01

    Starting from the non-iterative numerical method proposed by [Ross, P.J., 2003. Modeling soil water and solute transport—fast, simplified numerical solutions. Agronomy Journal 95, 1352-1361] for solving the 1D Richards' equation, an unsaturated zone module for large-scale hydrological models is developed by the inclusion of a root extraction module and a formulation of interception. Two root water uptake modules, first proposed by [Lai, C.-T. and Katul, G., 2000. The dynamic role of root-water uptake in coupling potential to actual transpiration. Adv. Water Res. 23: 427-439; Li, K.Y., De Jong, R. and Boisvert, J.B., 2001. An exponential root-water-uptake model with water stress compensation. J. Hydrol. 252: 189-204], were included as the sink term in the Richards' equation. They express root extraction as a linear function of potential transpiration and take into account water stress and a compensation mechanism allowing water to be extracted from wetter layers. The vadose zone module is tested in a systematic way with synthetic data sets covering a wide range of soil characteristics, climate forcing, and vegetation cover. A detailed SVAT model providing an accurate solution of the coupled heat and water transfer in the soil and the surface energy balance is used as a reference. The accuracy of the numerical solution using only the SVAT soil module, and the loss of accuracy when using a potential evapotranspiration instead of solving the energy budget, are both investigated. The vadose zone module is very accurate, with errors of less than a few percent for cumulative transpiration. Soil evaporation is simulated less accurately, with a systematic underestimation of soil evaporation amounts. The [Lai, C.-T. and Katul, G., 2000] module is not well suited to sandy soils, due to a weakness in the formulation of its compensation term. When using a potential evapotranspiration instead of the surface energy balance, we found a difference in the partitioning of energy between the soil and the vegetation: a Beer-Lambert law is not able to take into account the complex interactions at the soil-vegetation-atmosphere interface. However, under field conditions, the accuracy of the vadose zone module is satisfactory provided that a correct crop coefficient can be defined. In conclusion, the numerical method proposed by [Ross, P.J., 2003] coupled with the [Li, K.Y., De Jong, R. and Boisvert, J.B., 2001] root extraction module provides an efficient and accurate solution for inclusion as a physically based infiltration-evapotranspiration module in larger-scale watershed models.
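
    The Beer-Lambert partitioning discussed above splits the evaporative demand between canopy and soil with an exponential extinction in the leaf area index; a minimal sketch (the extinction coefficient k = 0.5 is a typical assumed value, not one taken from the paper):

    import numpy as np

    def partition_pet(pet, lai, k=0.5):
        """Beer-Lambert partition of potential evapotranspiration: the
        canopy intercepts a fraction 1 - exp(-k * LAI) of the demand,
        the remainder reaching the soil surface."""
        veg_fraction = 1.0 - np.exp(-k * lai)
        potential_transpiration = pet * veg_fraction
        potential_soil_evaporation = pet * (1.0 - veg_fraction)
        return potential_transpiration, potential_soil_evaporation

    # Illustrative: 5 mm/day of demand over a canopy with LAI = 3.
    print(partition_pet(5.0, 3.0))   # ≈ (3.88, 1.12) mm/day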

  17. L-carnosine modulates respiratory burst and reactive oxygen species production in neutrophil biochemistry and function: may oral dosage form of non-hydrolized dipeptide L-carnosine complement anti-infective anti-influenza flu treatment, prevention and self-care as an alternative to the conventional vaccination?

    PubMed

    Babizhayev, Mark A; Deyev, Anatoliy I; Yegorov, Yegor E

    2014-05-01

    Influenza A is a viral disease of global dimension, presenting with high morbidity and mortality in annual epidemics, and in pandemics which are of infrequent occurrence but which have very high attack rates. Influenza vaccines of the future must be directed toward use of conserved group-specific viral antigens, such as are present in transitional proteins which are exposed during the fusion of virus to the host cell. Influenza studies revealed a continuing battle for survival between host and parasite, in which the host population updates the specificity of its pool of humoral immunity by contact with, and response to infection with, the most recent viruses which possess altered antigenic specificity in their hemagglutinin (HA) ligand. It is well known that the HA protein is found on the surface of the influenza virus particle and is responsible for binding to receptors on host cells and initiating infection. Polymorphonuclear neutrophils (PMN) have been reported to be involved in the initial host response to influenza A virus (IAV). Early after IAV infection, neutrophils infiltrate the airway, probably due to the release of chemokines that attract PMN. Clearly, severe IAV infection is characterized by increased neutrophil influx into the lung or upper respiratory tract. Carnosine (β-alanyl-L-histidine) and anserine (N-β-alanyl-1-methyl-L-histidine) are found in the skeletal muscle of most vertebrates, including those used for food; for example, 100 g of chicken breast contains 400 mg (17.6 mmol/L) of carnosine and 1020 mg (33.6 mmol/L) of anserine. Carnosine-stimulated respiratory burst in neutrophils is a universal biological mechanism of influenza virus destruction. Our own studies revealed previously unappreciated functional effects of carnosine and related histidine-containing compounds as a natural biological prevention of, and barrier against, influenza virus infection; they expand public understanding of the antiviral properties of imidazole-containing dipeptide-based compounds, and suggest important interactions between neutrophils and carnosine-related compounds in the host response to viruses and bacteria. Carnosine and anserine were also found to reduce apoptosis of human neutrophils. In this way these histidine-containing compounds can modulate influenza virus release from neutrophils and reduce virus dissemination through the body of the organism. This review points to the possibility of therapeutic control of influenza viral infections through the modulation of PMN apoptosis by oral non-hydrolized forms of carnosine and related histidine-containing compounds, which may be involved at least in part in the pathophysiology of the disease in animals and humans. The data presented in this article, overall, may have implications for global influenza surveillance and for planning pandemic influenza therapeutic prevention with oral forms of non-hydrolized natural L-carnosine as a suitable alternative to conventional vaccination for various flu ailments.

  18. Composite gel polymer electrolyte for lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Naderi, Roya

    Composite gel polymer electrolyte (CGPE) films were fabricated, consisting of poly(vinylidene fluoride-hexafluoropropylene) (PVdF-HFP) as the membrane, DMF and PC as solvent and plasticizing agent, a mixture of charge-modified TiO2 and SiO2 nanoparticles as ionic conductors, and LiClO4 + LiPF6 as lithium salts. Following the work done by Li et al., the CGPE was coated on an O2-plasma-treated trilayer polypropylene-polyethylene-polypropylene membrane separator using a solution casting technique, in order to improve the adhesion of the gel polymer electrolyte to the separator membrane and its ionic conductivity, the latter by decreasing the bulk resistance. In the acidic CGPE, the mixture of acid-treated TiO2 and neutral SiO2 nanoparticles played the role of the charge-modified nanofillers with enhanced hydroxyl groups. Likewise, the mixture of neutral TiO2 nanoparticles with basic SiO2, prepared through the hydrolysis of tetraethyl orthosilicate (TEOS), provided a more basic environment due to residues of the NH4OH (ammonium hydroxide) catalyst. The O2-plasma-treated separator was coated with a solution of PVdF-HFP : modified nanofillers : organic solvents at a mixture ratio of 0.1:0.01:1. After the evaporation of the organic solvents, the dried coated separator was soaked in PC-LiClO4 + LiPF6 in EC:DMC:DEC (4:2:4 in volume) solution (300% wt. of PVdF-HFP) to form the final CGPE. Lim et al. have reported an enhanced ionic conductivity of 9.78 × 10⁻⁵ S cm⁻¹ in an acidic composite polystyrene-Al2O3 solid electrolyte system, compared to that of basic and neutral systems, in which the ionic conduction proceeds by an ion-hopping process at the solid interface rather than by a segmental movement of ions through the plasticized polymer chain. Half-cells with a graphite anode and Li metal as the reference electrode were then assembled, and the electrochemical measurements and morphology examinations were successfully carried out. The half-cells demonstrated a considerable change in their electrochemical performance upon the enhancement of the acidic properties of the CGPE, reaching a reversible specific capacity of 314 mAh g⁻¹ for the acidic CGPE vs. 247 mAh g⁻¹ for the basic CGPE at C/20 after 33 cycles. The CGPE exhibited submicron pore sizes, while the ionic conductivities were on the order of 10⁻³ and 10⁻⁵ S cm⁻¹ with and without the modified nanofillers, respectively.

  19. Structural and Functional Connectivity from Unmanned-Aerial System Data

    NASA Astrophysics Data System (ADS)

    Masselink, Rens; Heckmann, Tobias; Casalí, Javier; Giménez, Rafael; Cerdá, Artemi; Keesstra, Saskia

    2017-04-01

    Over the past decade there has been an increase in both connectivity research and research involving unmanned aerial systems (UASs). In some studies, UASs were successfully used for the assessment of connectivity, but not yet to their full potential. We present several ways to use data obtained from UASs to measure variables related to connectivity, and use these to assess both structural and functional connectivity. These assessments of connectivity can aid us in obtaining a better understanding of the dynamics of e.g. sediment and nutrient transport. We identify three sources of data obtained from a consumer camera mounted on a fixed-wing UAS, which can be used separately or combined: visual and near-infrared imagery, point clouds, and digital elevation models (DEMs). Imagery (or: orthophotos) can be used for (automatic) mapping of connectivity features like rills, gullies and soil and water conservation measures, using supervised or unsupervised classification methods with e.g. Object-Based Image Analysis. Furthermore, patterns of soil moisture in the top layers can be extracted from visual and near-infrared imagery. Point clouds can be analysed for vegetation height and density, and soil surface roughness. Lastly, DEMs can be used in combination with imagery for a number of tasks, including raster-based (e.g. DEM derivatives) and object-based (e.g. feature detection) analysis: flow routing algorithms can be used to analyse potential pathways of surface runoff and sediment transport (a minimal flow-routing sketch follows below). This allows for the assessment of structural connectivity through indices that are based, for example, on morphometric and other properties of surfaces, contributing areas, and pathways. In addition, erosion and deposition can be measured by calculating elevation changes from repeat surveys. From these "intermediate" variables like roughness, vegetation density and soil moisture, structural and functional connectivity can be assessed by combining them into a dynamic index of connectivity, using them in connectivity modelling (Masselink et al., 2016b), or combining them with measured data of water and sediment fluxes (Masselink et al., 2016a). References: Masselink, R.J.H., Heckmann, T., Temme, A.J.A.M., Anders, N.S., Gooren, H.P.A., Keesstra, S.D., 2016a. A network theory approach for a better understanding of overland flow connectivity. Hydrol. Process. doi:10.1002/hyp.10993. Masselink, R.J.H., Keesstra, S.D., Temme, A.J.A.M., Seeger, M., Giménez, R., Casalí, J., 2016b. Modelling Discharge and Sediment Yield at Catchment Scale Using Connectivity Components. Land Degrad. Dev. 27, 933-945. doi:10.1002/ldr.2512.
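
    The flow-routing step mentioned above is commonly implemented with the D8 scheme, in which each DEM cell drains to its steepest downslope neighbour. A minimal sketch, assuming a regular-grid DEM held in a NumPy array (not the authors' actual toolchain):

        import numpy as np

        def d8_flow_direction(dem):
            """Return, per cell, the index (0-7) of the steepest
            downslope neighbour, or -1 for pits and border cells."""
            offsets = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]
            dist = np.array([np.hypot(dy, dx) for dy, dx in offsets])
            fdir = np.full(dem.shape, -1, dtype=int)
            for i in range(1, dem.shape[0] - 1):
                for j in range(1, dem.shape[1] - 1):
                    drops = np.array([dem[i, j] - dem[i+dy, j+dx]
                                      for dy, dx in offsets]) / dist
                    k = int(drops.argmax())
                    if drops[k] > 0:          # only route truly downslope
                        fdir[i, j] = k
            return fdir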

  20. Towards Reproducibility in Computational Hydrology

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, to evolve (or reject) hypotheses and models of how environmental systems function, and to move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even when they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al. 2016 [1], we argue that a cultural change is required in the computational hydrological community in order to advance, and make more robust, the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as the example application area, we believe that our conclusions are of value to the wider environmental and geoscience community as far as the use of code and models for scientific advancement is concerned. References: [1] Hutton, C., T. Wagener, J. Freer, D. Han, C. Duffy, and B. Arheimer (2016), Most computational hydrology is not reproducible, so is it really science?, Water Resour. Res., 52, 7548-7555, doi:10.1002/2016WR019285. [2] Ceola, S., et al. (2015), Virtual laboratories: New opportunities for collaborative water science, Hydrol. Earth Syst. Sci. Discuss., 11(12), 13443-13478, doi:10.5194/hessd-11-13443-2014.
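
    One small, concrete step toward points (1)-(3) is for each experiment script to record its own provenance alongside its outputs. A minimal sketch under assumed conventions (the file name and fields are illustrative, not a community standard):

        import hashlib, json, platform, sys
        from pathlib import Path

        def write_provenance(data_file, out_file="provenance.json", seed=42):
            """Record the minimum needed to re-run this experiment:
            interpreter, platform, input-data checksum, and RNG seed."""
            record = {
                "python": sys.version,
                "platform": platform.platform(),
                "data_sha256": hashlib.sha256(Path(data_file).read_bytes()).hexdigest(),
                "seed": seed,
            }
            Path(out_file).write_text(json.dumps(record, indent=2))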

  1. Insights about data assimilation frameworks for integrating GRACE with hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Van Dijk, Albert I. J. M.; Döll, Petra; Schuh, Wolf-Dieter

    2016-04-01

    Improving the understanding of changes in the water cycle represents a challenging objective that requires merging information from various disciplines. Debates exist on selecting an appropriate assimilation technique to integrate GRACE-derived terrestrial water storage changes (TWSC) into hydrological models, in order to downscale and disaggregate GRACE TWSC, overcome model limitations, and improve monitoring and forecast skills. Yet the effect of the specific data assimilation technique, in conjunction with ill-conditioning, colored noise, the resolution mismatch between GRACE and the model, and other complications, is still unclear. Due to their simplicity, ensemble Kalman filters and smoothers (EnKF/S) are often applied. In this study, we show that modification of the filter approach might open new avenues to improve the integration process. In particular, we discuss an improved calibration and data assimilation (C/DA) framework (Schumacher et al., 2016), which is based on the EnKF and was extended by the square root analysis scheme (SQRA) and the singular evolutive interpolated Kalman (SEIK) filter. In addition, we discuss an off-line data blending approach (Van Dijk et al., 2014) that offers the chance to merge multi-model ensembles with GRACE observations. The investigations include: (i) a theoretical comparison, focusing on similarities and differences in the conceptual formulation of the filter algorithms; (ii) a practical comparison, in which the approaches were applied to an ensemble of runs of the WaterGAP Global Hydrology Model (WGHM); and (iii) an impact assessment of the GRACE error structure on C/DA results. First, a synthetic experiment over the Mississippi River Basin (USA) was used to gain insight into the C/DA set-up before applying it to real data. The results indicated promising performance of the alternative methods; for example, applying the SEIK algorithm improved the correlation coefficient and root mean square error (RMSE) of TWSC by 0.1 and 6 mm, respectively, with respect to the EnKF. We successfully transferred our framework to the Murray-Darling Basin (Australia), one of the largest and driest river basins in the world. Finally, we provide recommendations on an optimal C/DA strategy for real GRACE data integration. Schumacher M, Kusche J, Döll P (2016): A Systematic Impact Assessment of GRACE Error Correlation on Data Assimilation in Hydrological Models. J Geod. Van Dijk AIJM, Renzullo LJ, Wada Y, Tregoning P (2014): A global water cycle reanalysis (2003-2012) merging satellite gravimetry and altimetry observations with a hydrological multi-model ensemble. Hydrol Earth Syst Sci.
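
    At the core of the filter variants compared here is the ensemble analysis step, which nudges each model state member toward the (perturbed) GRACE observations. A minimal sketch of a textbook stochastic EnKF update with a linear observation operator (dimensions and names are illustrative; the extended SQRA/SEIK schemes differ in how they form the analysis):

        import numpy as np

        def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
            """One stochastic EnKF analysis step (perturbed observations).
            X: (n, m) ensemble of model states (e.g. gridded TWSC)
            y: (p,) observation vector (e.g. GRACE TWSC anomalies)
            H: (p, n) observation operator, R: (p, p) obs. error covariance"""
            n, m = X.shape
            A  = X - X.mean(axis=1, keepdims=True)        # state anomalies
            HX = H @ X
            HA = HX - HX.mean(axis=1, keepdims=True)      # obs-space anomalies
            PHt = A @ HA.T / (m - 1)                      # P H^T
            S   = HA @ HA.T / (m - 1) + R                 # innovation covariance
            K   = PHt @ np.linalg.inv(S)                  # Kalman gain
            Y   = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
            return X + K @ (Y - HX)                       # analysis ensemble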

  2. Laurel leaf extracts for honeybee pest and disease management: antimicrobial, microsporicidal, and acaricidal activity.

    PubMed

    Damiani, Natalia; Fernández, Natalia J; Porrini, Martín P; Gende, Liesel B; Álvarez, Estefanía; Buffa, Franco; Brasesco, Constanza; Maggi, Matías D; Marcangeli, Jorge A; Eguaras, Martín J

    2014-02-01

    A diverse set of parasites and pathogens affects the productivity and survival of Apis mellifera honeybees. In beekeeping, traditional control by antibiotics and synthetic molecules has caused problems with contamination and resistant pathogens. In this research, different Laurus nobilis extracts are tested against the main honeybee pests from an integrated point of view. In vivo effects on bee survival are also evaluated. The ethanol extract showed minimal inhibitory concentration (MIC) values of 208 to 416 μg/mL, having the best antimicrobial effect on Paenibacillus larvae among all substances tested. Similarly, this leaf extract showed significant antiparasitic activity on Varroa destructor, killing 50% of mites 24 h after a 30-s exposure, and on Nosema ceranae, inhibiting spore development in the midgut of adult bees ingesting 1 × 10(4) μg/mL of extract solution. Neither the ethanol extract nor the volatile extracts (essential oil, hydrolate, and its main component) caused lethal effects on adult honeybees. Thus, the absence of topical and oral toxicity of the ethanol extract to bees, together with the strong antimicrobial, microsporicidal, and miticidal effects registered in this study, places this laurel extract as a promising integrated treatment for bee diseases and stimulates the search for other bioactive phytochemicals from plants.

  3. Inhibition of α-glucosidase activity by ethanolic extract of Melia azedarach L. leaves

    NASA Astrophysics Data System (ADS)

    Sulistiyani; Safithri, Mega; Puspita Sari, Yoana

    2016-01-01

    Development of α-glucosidase inhibitors derived from natural products is an opportunity for more economical management of diabetes prevention. The objective of this study was to test the activity of α-glucosidase with or without potential inhibitor compounds. In the in vitro assay, α-glucosidase hydrolizes p-nitrophenyl-α-D-glucopyranoside to glucose and yellow p-nitrophenol, which can be determined spectrophotometrically at 400 nm. The ability of the ethanolic leaf extract of Melia azedarach L. to act as an α-glucosidase inhibitor was compared with that of commercial acarbose (Glucobay®). Acarbose showed strong inhibitory activity against α-glucosidase, with an IC50 value of 2.154 µg/mL. The crude ethanolic leaf extract of M. azedarach, however, showed weaker inhibitory activity, with an IC50 value of 3,444.114 µg/mL. The total phenolic content of the M. azedarach ethanolic leaf extract was 17.94 µg GAE/mg extract, and the total flavonoid content was 9.55 µg QE/mg extract. Based on the published range of IC50 values of extracts reported as α-glucosidase inhibitors, which lie between 0.66 and 10,000 ppm, our results suggest that the extract of M. azedarach leaves is a potential candidate for the development of an anti-hyperglycemic formulation.
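
    IC50 values such as those above are typically estimated by fitting a dose-response curve to percent-inhibition measurements. A minimal sketch with a four-parameter logistic (the data points below are invented for illustration and are not the study's assay data):

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic4(conc, bottom, top, ic50, hill):
            """Four-parameter logistic: % inhibition vs. concentration."""
            return bottom + (top - bottom) / (1 + (ic50 / conc) ** hill)

        # illustrative dose-response data (µg/mL, % inhibition)
        conc  = np.array([0.5, 1, 2, 4, 8, 16])
        inhib = np.array([18, 31, 48, 64, 78, 88])
        popt, _ = curve_fit(logistic4, conc, inhib, p0=[0, 100, 2, 1])
        print(f"estimated IC50 = {popt[2]:.2f} ug/mL")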

  4. Resilience of the Nexus of Competitive Water Consumption between Human Society and Environment Development: Regime Shifts and Early Warning Signals

    NASA Astrophysics Data System (ADS)

    Li, Z.; Liu, P.; Feng, M.; Zhang, J.

    2017-12-01

    Based on the modeling of the water supply, power generation and environment (WPE) nexus by Feng et al. (2016), a refined theoretical model of competitive water consumption between human society and the environment is presented in this study. It examines how technology advancement, and pollution mitigation driven by growing social environmental awareness, can establish and maintain the coexistence of both higher social water consumption and an improved environmental condition. By coupling environmental and social dynamics, both of which are represented by water consumption quantity, this study shows that a sustainable state of the social-environmental system is possible when the benefit of technology offsets the side effect (pollution) of social development on the environment. Additionally, regime shifts could be triggered by a gradually increased pollution rate, climate change-induced reduction of natural resources, and a breakdown of social environmental awareness. Therefore, in order to foresee pending abrupt regime shifts of the system, early warning signals, including increasing variance and autocorrelation, were examined while the system undergoes stochastic disturbance. Feng, M. et al., 2016. Modeling the nexus across water supply, power generation and environment systems using the system dynamics approach: Hehuang Region, China. J. Hydrol., 543: 344-359.
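
    The two early-warning indicators examined, rising variance and rising lag-1 autocorrelation, can be computed from a simulated or observed state variable with simple rolling-window statistics. A minimal sketch (the window length is an arbitrary choice here, not a value from the study):

        import pandas as pd

        def early_warning_signals(x, window=50):
            """Rolling variance and lag-1 autocorrelation of a state
            variable; sustained increases in both are generic early
            warnings of an approaching regime shift."""
            s = pd.Series(x)
            var = s.rolling(window).var()
            ar1 = s.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)
            return var, ar1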

  5. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are used at their best when their outputs are compared not only to the outlet discharge, but also to internal observed variables, so that they can be used as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA model (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), which is applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture (9 locations) data are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010 with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints on the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparing it to other studies found in the literature. In the next steps (steps 3 to 6), focus was placed on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied correlations between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model represents the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement. Institut National Polytechnique de Grenoble, 269 pp (in French).
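
    The "classical performance criteria" of step 2 typically include the Nash-Sutcliffe efficiency; the abstract does not name the exact criteria used, so the following is an illustrative sketch of that standard computation:

        import numpy as np

        def nse(q_obs, q_sim):
            """Nash-Sutcliffe efficiency between observed and simulated
            discharge series (1 = perfect, 0 = no better than the mean)."""
            q_obs, q_sim = np.asarray(q_obs), np.asarray(q_sim)
            return 1 - np.sum((q_sim - q_obs)**2) / np.sum((q_obs - q_obs.mean())**2)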

  6. Movement of Water Through the Chalk Unsaturated zone

    NASA Astrophysics Data System (ADS)

    Butler, A.; Ireson, A.; Wheater, H.; Mathias, S.; Finch, J.

    2006-12-01

    Despite many decades of study, quantification of water movement through the Chalk unsaturated zone has proved difficult, due to its particular properties. Chalk comprises a fine-grained porous matrix intersected by a fracture network. In much of the unsaturated zone, for most of the time, matric potentials remain between -20 and -0.5 m. Thus the matrix is largely saturated by capillary action, and the fractures are largely de-watered. Therefore, debate has often focussed on the importance of the fractures, as compared with the matrix, for the movement of water. Recently, Mathias et al. (J. Hydrol., in press) and Brouyère (J. Contam. Hydrol., 82:195-219, 2006) have independently proposed an Equivalent Continuum Model (ECM) for the Chalk. This assumes that the fractures can be treated as a porous medium and that the fracture and matrix domains can be treated as a single domain, i.e. an equivalent continuum. This requires that the fractures and matrix are in pressure equilibrium; whilst the theoretical basis for this assumption is reasonable, it has not been demonstrated empirically. In addition, Mathias et al. have demonstrated the importance of the near-surface weathered and soil zones of the Chalk for attenuating flow. As part of a national research initiative into groundwater-dominated catchments, an extensive field monitoring programme has been implemented at two Chalk catchments in Berkshire (UK). This includes comprehensive soil moisture measurements (water content and matric potential), an extensive network of piezometers and observation wells measuring water table response, and the direct measurement of actual evaporation as well as standard meteorological variables, including rainfall. Using the Kosugi (WRR, 32:2697-2703, 1996) relationships for soil water retention and hydraulic conductivity, a methodology for characterising vertical variation in hydraulic properties, from competent chalk at depth through weathered rock to surface soil, has been developed using data from one of the above catchments. The model was defined by nine parameters, five of which were identified a priori from observed soil moisture characteristic curves at various elevations, the remaining four by calibration of the numerical model to detailed time series datasets. Effects of parameter identifiability were explored using Monte Carlo analysis. Using a performance criterion based on fitting to matric potentials at a range of depths (from 20 cm to 4 m) over a calendar year, the set of acceptable results appears to support the ECM representation and indicates that fractures in the near-surface competent and weathered rock play an important role in the storage and release of groundwater recharge, whereas the rock matrix is crucial for its transmission to a water table tens of metres below. This conclusion has helped to resolve the debate on the respective roles of fractures and matrix in unsaturated water movement in the Chalk. Furthermore, the model simulations indicate that groundwater recharge can occur continually throughout the year. This helps to explain the apparently enhanced groundwater yields calculated during drought conditions compared with results obtained from pumping tests. It also indicates that current recharge models for the Chalk may need to be revised.
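
    The Kosugi (1996) relationships referred to above describe retention via a lognormal pore-size distribution. A minimal sketch of the retention and Mualem-form conductivity functions under the standard formulation (parameter values would come from the paper's calibration, which is not reproduced here):

        import numpy as np
        from scipy.special import erfc, erfcinv

        def kosugi_saturation(h, h_m, sigma):
            """Kosugi (1996) lognormal retention model: effective
            saturation Se as a function of suction head h (> 0);
            h_m is the median capillary head, sigma the spread."""
            return 0.5 * erfc(np.log(h / h_m) / (np.sqrt(2) * sigma))

        def kosugi_conductivity(se, k_s, sigma):
            """Hydraulic conductivity (Mualem form) for the same model,
            with k_s the saturated conductivity and 0 < se < 1."""
            return k_s * np.sqrt(se) * (0.5 * erfc(erfcinv(2 * se) + sigma / np.sqrt(2)))**2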

  7. Coupling rainfall observations and satellite soil moisture for predicting event soil loss in Central Italy

    NASA Astrophysics Data System (ADS)

    Todisco, Francesca; Brocca, Luca; Termite, Loris Francesco; Wagner, Wolfgang

    2015-04-01

    The accuracy of water soil loss prediction depends on the ability of the model to account for the effects of the physical phenomena causing the output, and on the accuracy with which the parameters have been determined. Process-based models require considerable effort to obtain appropriate parameter values, and their failure to produce better results than the USLE/RUSLE model encourages the use of the USLE/RUSLE model in roles for which it was not designed. In particular, it is widely used in watershed models, even at the event temporal scale. At the hillslope scale, spatial variability in soil and vegetation results in spatial variations in soil moisture, and consequently in runoff, within the area for which soil loss estimation is required, so the modelling approach used to produce those estimates needs to be sensitive to those spatial variations in runoff. Some models include explicit consideration of runoff in determining the erosive stresses, but this increases the uncertainty of the prediction due to the difficulty of parameterising the models, also because direct measurements of surface runoff are rare. The same remarks also apply to the USLE/RUSLE variants that include direct consideration of runoff in the erosivity factor (i.e. USLE-M by Kinnell and Risse, 1998, and USLE-MM by Bagarello et al., 2008). Moreover, most rainfall-runoff models are based on knowledge of the pre-event soil moisture, which is a fundamental variable in the rainfall-runoff transformation. In addition, soil moisture is a readily available datum: direct pre-event measurements can easily be obtained using in situ sensors or satellite observations at larger spatial scales, and the antecedent water content can also be derived with soil moisture simulation models. The attempt made in this study is to use the pre-event soil moisture to account for the spatial variation in runoff within the area for which the soil loss estimates are required. More specifically, the analysis focused on evaluating the effectiveness of coupling modelled or satellite-derived soil moisture with USLE-derived models in predicting event unit soil loss at the plot scale in a silty-clay soil in Central Italy. To this end, the database of the Masse experimental station was used, which provides, for a given erosive event (an event yielding a measurable soil loss), simultaneous measurements of the total runoff amount, Qe (mm), and soil loss per unit area, Ae (Mg ha-1), at the plot scale, together with the rainfall data required to derive the erosivity factor Re according to Wischmeier and Smith (1978), with a MIT = 6 h (Bagarello et al., 2013; Todisco et al., 2012). For the purpose of this investigation, only data collected on the λ = 22 m long plots were considered: 63 erosive events in the period 2008-2013, 18 of which occurred during the dry period (June to September) and the other 45 in the complementary (wet) period. The models tested are the USLE/RUSLE and some USLE-derived formulations in which the event erosivity factor, Re, is corrected by the antecedent soil moisture, θ, and raised to an exponent α > 0 (α = 1: linear model; α ≠ 1: power model). Both satellite-retrieved soil moisture (θ = θsat) and estimates (θ = θest) from the Soil Water Balance model (Brocca et al., 2011) were tested. The results were compared with those obtained by the USLE/RUSLE, USLE-M and USLE-MM models coupled with a parsimonious rainfall-runoff model, MILc (Brocca et al., 2011), for the prediction of runoff volume (which, in these models, is the term used to correct the erosivity factor Re). The results showed that: including direct consideration of antecedent soil moisture and runoff in the event rainfall-runoff factor of the RUSLE/USLE enhanced the capacity of the model to account for variations in event soil loss when soil moisture and runoff volume are measured or predicted reasonably well; the accuracy of the original USLE/RUSLE model was always the lowest; the accuracy in estimating event soil loss of models whose erosivity factor includes the estimated runoff is always surpassed by at least one model that uses the antecedent soil moisture θ in the erosivity index; and the power models at Masse generally work better than the linear ones. The most accurate models are those with the estimated antecedent soil moisture, θest, when the whole database is used, and those with the satellite-retrieved soil moisture, θsat, when only the wet-period events are considered. It was also verified that much of the inaccuracy of the tested models is due to summer rainfall events, probably because of the particular characteristics that the soil assumes in the dry period (superficial crusts causing higher runoff): in these cases, high soil losses are observed in association with low values of soil moisture, while the simulated runoff assumes low values too, since it is based on the antecedent wetness conditions. Thus, the analyses were repeated excluding the summer events. As expected, the performance of all the models increases, but the use of θ still provides the best results. The results of the analysis open interesting scenarios for the use of USLE-derived models for unit event soil loss estimation at large scale. In particular, the use of soil moisture to correct the rainfall erosivity factor acquires great practical importance, since it is a relatively simple measurable quantity, and moreover because remote-sensing soil moisture data are widely available and useful in large-scale erosion assessment. Bagarello, V., Di Piazza, G.V., Ferro, V., Giordano, G., 2008. Predicting unit soil loss in Sicily, south Italy. Hydrol. Process. 22, 586-595. Bagarello, V., Ferro, V., Giordano, G., Mannocchi, F., Todisco, F., Vergni, L., 2013. Predicting event soil loss from bare plots at two Italian sites. Catena 109, 96-102. Brocca, L., Melone, F., Moramarco, T., 2011. Distributed rainfall-runoff modeling for flood frequency estimation and flood forecasting. Hydrol. Process. 25, 2801-2813. Kinnell, P.I.A., Risse, L.M., 1998. USLE-M: empirical modeling rainfall erosion through runoff and sediment concentration. Soil Sci. Soc. Am. J. 62, 1667-1672. Todisco, F., Vergni, L., Mannocchi, F., Bomba, C., 2012. Calibration of the soil loss measurement at the Masse experimental station. Catena 91, 4-9. Wischmeier, W.H., Smith, D.D., 1978. Predicting rainfall-erosion losses - A guide to conservation planning. Agriculture Handbook 537, United States Department of Agriculture.
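
    The structure of the moisture-corrected formulations tested, an event erosivity term scaled by antecedent soil moisture and raised to a power α, can be sketched as follows. The coefficient a stands in for the remaining USLE factors, and all values are calibration results given in the cited papers, not here:

        def event_soil_loss(re, theta, a, alpha):
            """Moisture-corrected USLE-type event model: the event
            erosivity Re is multiplied by antecedent soil moisture
            theta and raised to alpha (alpha = 1: linear model,
            alpha != 1: power model). All values calibration-dependent."""
            return a * (re * theta) ** alpha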

  8. Soil water balance as affected by throughfall in gorse ( Ulex europaeus, L.) shrubland after burning

    NASA Astrophysics Data System (ADS)

    Soto, Benedicto; Diaz-Fierros, Francisco

    1997-08-01

    The role of fire in the hydrological behaviour of gorse shrubland is studied from the point of view of its effects on vegetation cover and throughfall. In the first year after fire, throughfall represents about 88% of gross rainfall, whereas in unburnt areas it is 58%. Four years after fire, the throughfall coefficients are similar in burnt and unburnt plots (about 60%). The throughfall is not linearly related to vegetation cover, because an increase in cover does not involve a proportional reduction in throughfall. The throughfall predicted by the two-parameter exponential model of Calder (1986, J. Hydrol., 88: 201-211) provides a good fit to the observed throughfall, and the γ value of the model reflects the evolution of the throughfall rate. The soil moisture distribution is modified by fire owing to the increase of evaporation in the surface soil and the decrease of transpiration from deep soil layers. Nevertheless, the use of the old root system by sprouting vegetation leads to a soil water profile in which, 20 months after the fire, the soil water is similar in burnt and unburnt areas. Overall, soil moisture is higher in burnt plots than in unburnt plots. Surface runoff increases after a fire but does not entirely account for the increase in throughfall. Therefore the removal of vegetation cover in gorse scrub by fire mainly affects the subsurface water flows.
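
    Calder's two-parameter model expresses interception as a saturating exponential function of gross rainfall, so throughfall is what remains. A minimal sketch under that reading (parameter names follow the usual γ, δ notation; fitted values are in the cited paper, not here):

        import numpy as np

        def calder_throughfall(p_gross, gamma, delta):
            """Calder (1986) two-parameter exponential model:
            interception I = gamma * (1 - exp(-delta * P));
            throughfall = gross rainfall minus interception."""
            interception = gamma * (1.0 - np.exp(-delta * p_gross))
            return p_gross - interception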

  9. CHANGES IN THE SHAPE AND SIZE OF BACTERIUM COLI AND BACILLUS MEGATHERIUM UNDER THE INFLUENCE OF BACTERIOPHAGE—A MOTION PHOTOMICROGRAPHIC ANALYSIS OF THE MECHANISM OF LYSIS

    PubMed Central

    Bayne-Jones, Stanhope; Sandholzer, Leslie A.

    1933-01-01

    This paper contains the records of a motion photomicrographic investigation of the lysis of Bact. coli and B. megatherium by bacteriophage. The bacteria mixed with bacteriophage were grown on moist nutrient agar in small culture chambers on the stage of a microscope in an incubator maintained at 37°C. The apparatus used permitted continuous inspection of the preparations. Photographs were made at the rates of 2 and 30 per minute and at the rate of 8 per second during the terminal stage of lysis of Bact. coli. The accurately timed films were studied by rapid projection and by the projection of single frames. Measurements of dimensions of cells, calculations of volumes, information on generations, generation times and duration spans are presented in the tables. Similar information on normal cultures grown and photographed in the same way is furnished for comparison. Groups of serial photographs are reproduced in the plates to illustrate the special features observed. These observations seem to us to warrant the following conclusions: 1. Enlargement or swelling of the cells of Bact. coli usually, but not always, precedes lysis. Some of the enlargement is an expression of increase of cell substance and is not altogether due to imbibition of water. Cells of early generations of Bact. coli enlarge to greater absolute and relative proportions than cells of later generations. Enlargement does not occur before lysis in B. megatherium. 2. The terminal stage of lysis of Bact. coli is explosive, occupying ½ to ⅞ second. The terminal stage of lysis of B. megatherium is a slow disintegrative process, extending over 2–10 minutes. 3. Bacteriophage inhibits fission of some cells, but does not stop the reproduction of other cells in contact with it. The genealogical records of six generations of cells of Bact. coli and of two generations of cells of B. megatherium indicate that bacteriophage may be transmitted through parents to the offspring which ultimately undergo lysis. 4. Bacteriophage spreads by contact through a group of cells and also along paths determined by genetical relationships. 5. A large amount of cellular debris remains after the lysis of the cells in both of these species of bacteria. This residue of material is in the form of irregularly shaped masses and granules. This material is not in solution at the time of lysis and appears not to be digested or hydrolized. 6. Theories of the mechanism of lysis are discussed. It is suggested that reduction of surface tension of the cells may be an important factor in the mechanism of lysis. PMID:19870131

  10. The Tiberias Basin salt deposits and their effects on lake salinity

    NASA Astrophysics Data System (ADS)

    Inbar, Nimrod; Rosenthal, Eliahu; Möller, Peter; Yellin-Dror, Annat; Guttman, Josef; Siebert, Christian; Magri, Fabien

    2015-04-01

    Lake Tiberias is situated in one of the pull-apart basins comprising the Dead Sea transform. The Tiberias basin extends along the northern boundary of the Lower Jordan Rift Valley (LJRV), which is known for its massive salt deposits, mostly at its southern end, in the Dead Sea basin. Nevertheless, prior to the drilling of the Zemah-1 wildcat, drilled close to the southern shores of Lake Tiberias, the Tiberias Basin was considered rather shallow and free of salt deposits (Starinsky, 1974). In 1983, the Zemah-1 wildcat penetrated a 2.8 km thick sequence of sedimentary and magmatic rocks, of which 980 m are salt deposits (Marcus et al., 1984). Recent studies, including the geophysical investigations presented here, lay out the mechanisms of salt deposition in the Tiberias basin and estimate its abundance. Supported by seismic data, our interpreted cross-sections display relatively thick salt deposits distributed over the entire basin. Since the early days of hydrological research in the area, saline springs have been known to exist in Lake Tiberias' surroundings. Water temperatures in some of the springs indicate their origin to be at depths of 2-3 km (Simon and Mero, 1992). In the last decade, several studies suggested that the salinity of the springs may be attributed, at least partially, to the Zemah-1 salt deposits. The chemical justification was attributed to post-halite minerals which were thought to be present among those deposits. This hypothesis was never verified. Moreover, Möller et al. (2011) presented a calculation contradicting this theory. In addition to the geophysical investigations, numerical models of thermally driven flow examine the possible fluid dynamics developing near salt deposits below the lake and their interactions with springs along the lakeshore (Magri et al., 2015). It is shown that leached halite is too heavy to reach the surface. However, salt diffusing from a shallow salt crest may locally reach the western side of the lakeshore. References: Magri, F., N. Inbar, C. Siebert, E. Rosenthal, J. Guttman and P. Möller (2015), Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin, Journal of Hydrology, Volume 520, Pages 342-355. Marcus, E., Y. Slager, S. Ben-Zaken, and I. Y. Indik (1984), Zemah 1, Geological Completion Report, Rep. 84/11, 108 pp, Oil Exploration (Investments) LTD, Tel Aviv. Möller, P., C. Siebert, S. Geyer, N. Inbar, E. Rosenthal, A. Flexer, and M. Zilberbrand (2011), Relationships of Brines in the Kinnarot Basin, Jordan-Dead Sea Rift Valley, Geofluids (doi: 10.1111/j.1468-8123.2011.00353.x). Simon, E., and F. Mero (1992), The salinization mechanism of Lake Kinneret, J. Hydrol., 138, 327-343. Starinsky, A. (1974), Relationship between Ca-Chloride Brines and Sedimentary Rocks in Israel, PhD thesis, 84 pp, Hebrew University, Jerusalem.

  11. Multi-temporal thermal analyses for submarine groundwater discharge (SGD) detection over large spatial scales in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hennig, Hanna; Mallast, Ulf; Merz, Ralf

    2015-04-01

    Submarine groundwater discharge (SGD) sites act as important pathways for nutrients and contaminants that deteriorate marine ecosystems. In the Mediterranean, it is estimated that 75% of the freshwater input is contributed from karst aquifers. Thermal remote sensing can be used for a pre-screening of potential SGD sites in order to optimize field surveys. Although different platforms (ground-, air- and spaceborne) may serve for thermal remote sensing, the most cost-effective are spaceborne platforms (satellites), which likewise cover the largest spatial scale (>100 km per image). Therefore an automatized and objective approach using thermal satellite images from Landsat 7 and Landsat 8 was used to localize potential SGD sites on a large spatial scale. The method of Mallast et al. (2014), which uses descriptive statistical parameters, especially range and standard deviation, was adapted to the Mediterranean Sea. Since the method was developed for the Dead Sea, where satellite images with cloud cover are rare and no sea level change occurs through tidal cycles, it was essential to adapt it to a region where tidal cycles occur and cloud cover is more frequent. These adaptations include: (1) an automatic and adaptive coastline detection; (2) the inclusion and processing of cloud-covered scenes to enlarge the data basis; (3) the implementation of tidal data in order to analyze low-tide images, as SGD is enhanced during these phases; and (4) a test of the applicability of Landsat 8 images, which will provide data in the future once Landsat 7 stops working. As previously shown, the range method gives more accurate results than the standard deviation. However, its result depends exclusively on two scenes (minimum and maximum) and is largely influenced by outliers. To counteract this drawback, we developed a new approach. Since it is assumed that sea surface temperature (SST) is stabilized by groundwater at SGD sites, the slope of a bootstrapped linear model fitted to the sorted SST per pixel should be less steep than the slope of the surrounding area, resulting in less influence from outliers and an equal weighting of all integrated scenes. Both methods could be used to detect SGD sites in the Mediterranean regardless of the discharge characteristics (diffuse or focused); exceptions are sites with deep emergences. Better results were obtained in bays compared to more exposed sites. Since the range of the SST is mostly influenced by the maximum and minimum of the scenes, the slope approach can be seen as a more representative method using all scenes. References: Mallast, U., Gloaguen, R., Friesen, J., Rödiger, T., Geyer, S., Merz, R., Siebert, C., 2014. How to identify groundwater-caused thermal anomalies in lakes based on multi-temporal satellite data in semi-arid regions. Hydrol. Earth Syst. Sci. 18 (7), 2773-2787.
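
    The per-pixel slope indicator can be sketched as follows; the bootstrapping of the fit is omitted for brevity, and the (scenes, rows, cols) array layout is an assumption, not the authors' implementation:

        import numpy as np

        def sorted_sst_slope(sst_stack):
            """Per-pixel slope of a straight line fitted to the sorted
            SST time series in a (scenes, rows, cols) stack. Pixels
            stabilised by groundwater inflow should show a flatter
            (smaller) slope than their surroundings."""
            t, r, c = sst_stack.shape
            sorted_stack = np.sort(sst_stack, axis=0)   # sort each pixel's series
            x = np.arange(t) - (t - 1) / 2.0            # centred scene index
            flat = sorted_stack.reshape(t, -1)
            slope = x @ (flat - flat.mean(axis=0)) / (x @ x)   # OLS slope per pixel
            return slope.reshape(r, c)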

  12. How to integrate social sciences in hydrological research?

    NASA Astrophysics Data System (ADS)

    Seidl, Roman; Barthel, Roland

    2016-04-01

    The integration of interdisciplinary scientific and societal knowledge plays an increasing role in environmental science. Many scholars have long advocated a joint effort of scientists from different disciplines (interdisciplinarity) to address the problems of the growing pressure on environmental and human systems (Nature, 2015). Such a need was also recognised for the hydrological sciences (HS), e.g., most recently, by Vogel et al. (2015). Vibrant new approaches such as "Panta Rhei" (Montanari et al., 2013) and "Socio-Hydrology" (Sivapalan et al., 2012) discuss and propose options for the deeper involvement of hydrologists in socio-economic questions. While there is widespread consensus that coping with the challenges of global change in water resources requires more consideration of human activity, it still remains unclear which roles the social sciences and the humanities (SSH) should assume in this context. Despite the frequent usage of the term "interdisciplinarity" in related discussions, there seems to be a tendency towards assimilation of socio-economic aspects into hydrological research rather than an opening up to interdisciplinary collaboration with social scientists on an equal footing. The literature, however, remains vague with respect to the concepts of integration and does not allow confirming this assumed tendency. Moreover, the discourse within the hydrological research community on increasing the consideration of societal aspects in hydrological modelling and research is still led by a comparatively small group. In this contribution we highlight the most interesting results of a survey among hydrologists (with 184 respondents). The survey participants do not think that SSH is presently well integrated into hydrological research. They recognize the need for better cooperation between the two disciplines. When asked about ways to improve the status of cooperation, a higher status and acknowledgement of interdisciplinary research by colleagues do not seem to be major incentives for integrative work. The statement "Hydrologists themselves should consider and integrate socioeconomic aspects in their own work" was rated most often as the most preferable option. Our sample seems to be relatively biased toward those individuals who already have an interest or considerable experience in cooperating with researchers from the social sciences or the humanities. Such a bias might indicate that the general interest among hydrology academics in including socio-economic aspects in their research is not as high and widespread as it could and should be. References: Montanari, A. et al., 2013. "Panta Rhei-Everything Flows": Change in hydrology and society-The IAHS Scientific Decade 2013-2022. Hydrolog Sci J, 58(6): 1256-1275. Nature, 2015. Why interdisciplinary research matters. Nature, 525(7569): 305. Sivapalan, M., Savenije, H.H.G., Bloschl, G., 2012. Socio-hydrology: A new science of people and water. Hydrol Process, 26(8): 1270-1276. Vogel, R.M. et al., 2015. Hydrology: The interdisciplinary science of water. Water Resour Res, 51(6): 4409-4430.

  13. Diadenosine Tetraphosphate Hydrolase Is Part of the Transcriptional Regulation Network in Immunologically Activated Mast Cells

    PubMed Central

    Carmi-Levy, Irit; Yannay-Cohen, Nurit; Kay, Gillian; Razin, Ehud; Nechushtan, Hovav

    2008-01-01

    We previously discovered that microphthalmia transcription factor (MITF) and upstream stimulatory factor 2 (USF2) each forms a complex with its inhibitor histidine triad nucleotide-binding 1 (Hint-1) and with lysyl-tRNA synthetase (LysRS). Moreover, we showed that the dinucleotide diadenosine tetraphosphate (Ap4A), previously shown to be synthesized by LysRS, binds to Hint-1, and as a result the transcription factors are released from their suppression. Thus, transcriptional activity is regulated by Ap4A, suggesting that Ap4A is a second messenger in this context. For Ap4A to be unambiguously established as a second messenger, several criteria have to be fulfilled, including the presence of a metabolizing enzyme. Since several enzymes are able to hydrolize Ap4A, we provide evidence here that the "Nudix" type 2 gene product, Ap4A hydrolase, is responsible for Ap4A degradation following the immunological activation of mast cells. The knockdown of Ap4A hydrolase modulated Ap4A accumulation, resulting in changes in the expression of MITF and USF2 target genes. Moreover, our observations demonstrated that the involvement of Ap4A hydrolase in gene regulation is not a phenomenon exclusive to mast cells but can also be found in cardiac cells activated with the β-agonist isoproterenol. Thus, we have provided concrete evidence establishing Ap4A as a second messenger in the regulation of gene expression. PMID:18644867

  14. Stormflow-hydrograph separation based on isotopes: the thrill is gone--what's next?

    USGS Publications Warehouse

    Burns, Douglas A.

    2002-01-01

    Beginning in the 1970s, the promise of a new method for separating stormflow hydrographs using 18O, 2H, and 3H proved an irresistible temptation, and was a vast improvement over graphical separation and solute tracer methods that were prevalent at the time. Eventually, hydrologists realized that this new method entailed a plethora of assumptions about temporal and spatial homogeneity of isotopic composition (many of which were commonly violated). Nevertheless, hydrologists forged ahead with dozens of isotope-based hydrograph-separation studies that were published in the 1970s and 1980s. Hortonian overland flow was presumed dead. By the late 1980s, the new isotope-based hydrograph separation technique had moved into adolescence, accompanied by typical adolescent problems such as confusion and a search for identity. As experienced hydrologists continued to use the isotope technique to study stormflow hydrology in forested catchments in humid climates, their younger peers followed obligingly—again and again. Was Hortonian overland flow really dead and forgotten, though? What about catchments in which people live and work? And what about catchments in dry climates and the tropics? How useful were study results when several of the assumptions about the homogeneity of source waters were commonly violated? What if two components could not explain the variation of isotopic composition measured in the stream during stormflow? And what about uncertainty? As with many new tools, once the initial shine wore off, the limitations of the method became a concern—one of which was that isotope-based hydrograph separations alone could not reveal much about the flow paths by which water arrives at a stream channel during storms.
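
    At the heart of these studies is a simple two-component end-member mixing calculation: given tracer signatures for streamflow, event ("new") water and pre-event ("old") water, mass balance yields the old-water fraction. A minimal sketch (the example values are invented for illustration):

        def old_water_fraction(c_stream, c_new, c_old):
            """Two-component isotope hydrograph separation:
            Q_old/Q = (C_stream - C_new) / (C_old - C_new),
            with C a tracer signature (e.g. d18O in permil);
            the new-water fraction is the complement."""
            return (c_stream - c_new) / (c_old - c_new)

        # e.g. d18O: stream -8.2, rainfall (new) -12.0, baseflow (old) -7.0
        f_old = old_water_fraction(-8.2, -12.0, -7.0)   # = 0.76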

  15. User's guide for UTCHEM-5.32m, a three-dimensional chemical flood simulator. Final report, September 30, 1992--December 31, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-07-01

    UTCHEM is a three-dimensional chemical flooding simulator. The solution scheme is analogous to IMPES: pressure is solved for implicitly, but concentrations rather than saturations are then solved for explicitly. Phase saturations and concentrations are then solved in a flash routine. An energy balance equation is solved explicitly for reservoir temperature; it includes heat flow between the reservoir and the over- and under-burden rocks. The major physical phenomena modeled in the simulator are: dispersion; dilution effects; adsorption; interfacial tension; relative permeability; capillary trapping; cation exchange; phase density; compositional phase viscosity; phase behavior (pseudoquaternary); aqueous reactions; partitioning of chemical species between oil and water; dissolution/precipitation; cation exchange reactions involving more than two cations; in-situ generation of surfactant from acidic crude oil; pH-dependent adsorption; polymer properties (shear-thinning viscosity, inaccessible pore volume, permeability reduction, adsorption); gel properties (viscosity, permeability reduction, adsorption); tracer properties (partitioning, adsorption, radioactive decay, reaction (ester hydrolization)); and temperature-dependent properties (viscosity, tracer reaction, gel reactions). The following options are available with UTCHEM: isothermal or non-isothermal conditions, a constant or variable time step, constant-pressure or constant-rate well conditions, horizontal and vertical wells, and a radial or Cartesian geometry. Please refer to the dissertation "Field Scale Simulation of Chemical Flooding" by Naji Saad, August 1989, for a more detailed discussion of the UTCHEM simulator and its formulation.

  16. Climate-methane cycle feedback in global climate model simulations forced by RCP scenarios

    NASA Astrophysics Data System (ADS)

    Eliseev, Alexey V.; Denisov, Sergey N.; Arzhanov, Maxim M.; Mokhov, Igor I.

    2013-04-01

    The methane cycle module of the global climate model of intermediate complexity developed at the A.M. Obukhov Institute of Atmospheric Physics, Russian Academy of Sciences (IAP RAS CM), is extended by coupling with a detailed module for thermal and hydrological processes in soil (Deep Soil Simulator; Arzhanov et al., 2008). This is an important improvement with respect to the earlier IAP RAS CM version (Eliseev et al., 2008), which employed prescribed soil hydrology to simulate CH4 emissions from soil. The geographical distribution of water-inundated soil in the model was also improved by replacing the older Olson ecosystem database with data based on the SCIAMACHY retrievals (Bergamaschi et al., 2007). The new version of the IAP RAS CM module for methane emissions from soil is validated using the simulation protocol adopted in WETCHIMP (Wetland and Wetland CH4 Inter-comparison of Models Project). In addition, the atmospheric part of the IAP RAS CM methane cycle is extended by a temperature dependence of the methane lifetime in the atmosphere, in order to mimic the respective dependence of atmospheric methane chemistry (Denisov et al., 2012). The IAP RAS CM simulations are performed for the 18th-21st centuries according to the CMIP5 protocol, taking into account natural and anthropogenic forcings. The new IAP RAS CM version realistically reproduces pre-industrial and present-day characteristics of the global methane cycle, including the CH4 concentration qCH4 in the atmosphere and CH4 emissions from soil. The latter amount to 150-160 Tg CH4/yr for the late 20th century and increase to 170-230 Tg CH4/yr in the late 21st century. Atmospheric methane concentration equals 3900 ppbv under the most aggressive anthropogenic scenario, RCP 8.5, and 1850-1980 ppbv under the more moderate scenarios RCP 6.0 and RCP 4.5. Under the least aggressive scenario, RCP 2.6, qCH4 reaches a maximum of 1730 ppbv in the 2020s and declines afterwards. The climate change impact on methane emissions from soil enhances the build-up of the methane stock in the atmosphere by 10-25%, depending on the anthropogenic scenario and time instant. In turn, the decrease of methane lifetime in the atmosphere suppresses this build-up by 5-40%. The net effect is uncertain but small in terms of the resulting additional greenhouse radiative forcing. This smallness is reflected in a small additional near-surface warming (relative to the model version with both methane emissions from soil and methane lifetime in the atmosphere fixed at their preindustrial values), which globally is not larger than 1 K, i.e., ~4% of the warming exhibited by the model version neglecting the climate-methane cycle interaction. References: [1] M.M. Arzhanov, P.F. Demchenko, A.V. Eliseev, and I.I. Mokhov. Simulation of characteristics of thermal and hydrologic soil regimes in equilibrium numerical experiments with a climate model of intermediate complexity. Izvestiya, Atmos. Ocean. Phys., 44(5):279-287, 2008. doi: 10.1134/S0001433808050022. [2] P. Bergamaschi, C. Frankenberg, J.F. Meirink, M. Krol, F. Dentener, T. Wagner, U. Platt, J.O. Kaplan, S. Körner, M. Heimann, E.J. Dlugokencky, and A. Goede. Satellite chartography of atmospheric methane from SCIAMACHY on board ENVISAT: 2. Evaluation based on inverse model simulations. J. Geophys. Res., 112(D2):D02304, 2007. doi: 10.1029/2006JD007268. [3] S.N. Denisov, A.V. Eliseev, and I.I. Mokhov. Climate change in the IAP RAS global model with interactive methane cycle under RCP anthropogenic scenarios. Rus. Meteorol. Hydrol., 2012. [submitted].
[4] A.V. Eliseev, I.I. Mokhov, M.M. Arzhanov, P.F. Demchenko, and S.N. Denisov. Interaction of the methane cycle and processes in wetland ecosystems in a climate model of intermediate complexity. Izvestiya, Atmos. Ocean. Phys., 44(2):139-152, 2008. doi: 10.1134/S0001433808020011.

  17. Getting a feel for parameters: using interactive parallel plots as a tool for parameter identification in the new rainfall-runoff model WALRUS

    NASA Astrophysics Data System (ADS)

    Brauer, Claudia; Torfs, Paul; Teuling, Ryan; Uijlenhoet, Remko

    2015-04-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between the complex, spatially distributed models often used in lowland catchments and the simple, parametric models that have mostly been developed for mountainous catchments (Brauer et al., 2014a,b). This parametric rainfall-runoff model can be used all over the world in both freely draining lowland catchments and polders with controlled water levels. The open source model code is implemented in R and can be downloaded from www.github.com/ClaudiaBrauer/WALRUS. The structure and code of WALRUS are simple, which facilitates detailed investigation of the effect of parameters on all model variables. WALRUS contains only four parameters requiring calibration; they are intended to have a strong, qualitative relation with catchment characteristics. Parameter estimation remains a challenge, however. The model structure contains three main feedbacks: (1) between groundwater and surface water; (2) between the saturated and unsaturated zone; and (3) between catchment wetness and the (quick/slow) flow route division. These feedbacks represent essential rainfall-runoff processes in lowland catchments, but increase the risk of parameter dependence and equifinality. Therefore, model performance should not only be judged on a comparison between modelled and observed discharges, but also on the plausibility of the internal model variables. Here, we present a method to analyse the effect of parameter values on internal model states and fluxes in a qualitative and intuitive way using interactive parallel plotting. We applied WALRUS to ten Dutch catchments with different sizes, slopes and soil types, comprising both freely draining and polder areas. The model was run with a large number of parameter sets, which were created using Latin Hypercube Sampling. The model output was characterised in terms of several signatures, both measures of goodness of fit and statistics of internal model variables (such as the percentage of rain water travelling through the quickflow reservoir). End users can then eliminate parameter combinations with unrealistic outcomes, based on expert knowledge, using interactive parallel plots. In these plots, for instance, ranges can be selected for each signature, and only model runs which yield signature values in these ranges are highlighted. The resulting selection of realistic parameter sets can be used for ensemble simulations. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geoscientific Model Development, 7, 2313-2332, www.geosci-model-dev.net/7/2313/2014/gmd-7-2313-2014.pdf. C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrology and Earth System Sciences, 18, 4007-4028, www.hydrol-earth-syst-sci.net/18/4007/2014/hess-18-4007-2014.pdf.
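
    Latin Hypercube Sampling of a four-parameter space can be sketched as follows. The parameter names and ranges are placeholders, not the WALRUS parameters or bounds, and WALRUS itself is written in R; the sampling idea is language-independent:

        import numpy as np
        from scipy.stats import qmc

        # Sample 1000 four-dimensional parameter sets by Latin Hypercube
        sampler = qmc.LatinHypercube(d=4, seed=1)
        unit = sampler.random(n=1000)               # points in [0, 1]^4
        lower = np.array([1.0, 0.1, 10.0, 0.5])     # illustrative bounds
        upper = np.array([500.0, 5.0, 400.0, 50.0])
        param_sets = qmc.scale(unit, lower, upper)  # (1000, 4) array of runs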

  18. Understanding flood-induced water chemistry variability: extracting temporal patterns with the LDA method

    NASA Astrophysics Data System (ADS)

    Aubert, A. H.; Tavenard, R.; Emonet, R.; De Lavenne, A.; Malinowski, S.; Guyet, T.; Quiniou, R.; Odobez, J.; Merot, P.; Gascuel-odoux, C.

    2013-12-01

    Studying floods has been a major issue in hydrological research for years, in both quantitative and qualitative hydrology. Stream chemistry is a mix of solutes, often used as tracers, as they originate from various sources in the catchment and reach the stream by various flow pathways. Previous studies (for instance (1)) hypothesized that the stream chemistry reaction to a rainfall event is not unique but varies seasonally and according to the yearly meteorological conditions. Identifying a typology of temporal chemical flood patterns is a way to better understand catchment processes at the flood and seasonal time scales. We applied a probabilistic model (Latent Dirichlet Allocation, or LDA (2)) to mine recurrent sequential patterns from a dataset of floods. A set of 472 floods was automatically extracted from a daily 12-year-long record of nitrate, dissolved organic carbon, sulfate and chloride concentrations. Rainfall, discharge, water table depth and temperature are also considered. The data come from a long-term hydrological observatory (AgrHys, western France) located at Kervidy-Naizin. From each flood, a document was generated that is made of a set of "hydrological words". Each hydrological word corresponds to a measurement: it is a triplet made of the considered variable, the time at which the measurement is made (relative to the beginning of the flood), and its magnitude (which can be low, medium or high). The documents and the number of patterns to be mined are used as input to the LDA algorithm. LDA relies on spotting co-occurrences (as an alternative to the more traditional study of correlation) between words that appear within the flood documents. It has two attractive properties: its ability to deal easily with missing data, and its additive property, which allows a document to be seen as a mixture of several flood patterns. The output of LDA is a set of patterns easily represented in graphics. These patterns correspond to typical reactions to rainfall events. The patterns themselves are carefully studied, as well as their distribution over the year and over the 12 years of the dataset. We recommend the use of such a model for any study based on pattern or signature extraction. It could be well suited to comparing different geographical locations and analysing the resulting differences in pattern distributions. (1) Aubert, A.H., Gascuel-Odoux, C., Gruau, G., Akkal, N., Faucheux, M., Fauvel, Y., Grimaldi, C., Hamon, Y., Jaffrezic, A., Lecoz Boutnik, M., Molenat, J., Petitjean, P., Ruiz, L., Merot, Ph. (2013), Solute transport dynamics in small, shallow groundwater-dominated agricultural catchments: insights from a high-frequency, multisolute 10 yr-long monitoring study. Hydrol. Earth Syst. Sci., 17(4): 1379-1391. (2) Aubert, A.H., Tavenard, R., Emonet, R., de Lavenne, A., Malinowski, S., Guyet, T., Quiniou, R., Odobez, J.-M., Merot, Ph., Gascuel-Odoux, C., submitted to WRR. Clustering with a probabilistic method newly applied in hydrology: application on flood events from water quality time-series.
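
    The document-word representation described above maps naturally onto off-the-shelf LDA implementations. A minimal sketch with scikit-learn (the toy count matrix and vocabulary size are invented for illustration, not the observatory data or the authors' code):

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        # Toy "flood documents": rows = floods, columns = counts of
        # discretised hydrological words (variable, time, magnitude class)
        rng = np.random.default_rng(0)
        X = rng.poisson(1.0, size=(472, 120))     # 472 floods, 120-word vocabulary

        lda = LatentDirichletAllocation(n_components=5, random_state=0)
        doc_topic = lda.fit_transform(X)          # pattern mixture weights per flood
        word_topic = lda.components_              # per-pattern word profiles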

  19. Effects of drilling fluids on soils and plants: I. Individual fluid components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, R.W.; Honarvar, S.; Hunsaker, B.

    1980-01-01

    The effects of 31 drilling fluid (drilling mud) components on the growth of green beans (Phaseolus vulgaris L., Tendergreen) and sweet corn (Zea mays var. saccharata (Sturtev.) Bailey, Northrup King 199) were evaluated in greenhouse studies. Plants grew well in fertile Dagor silt loam soil (Cumulic Haploxeroll) when the soil was mixed with most soil-component mixtures at disposal proportions normally expected. Vinyl acetate and maleic acid polymer (VAMA) addition caused significantly increased growth at the 95% confidence level. No statistically significant depression of plant growth occurred at normal rates with asbestos, asphalt, barite, bentonite, calcium lignosulfonate, sodium polyacrylate, a modified tannin, ethoxylated nonylphenol, a filming amine, gilsonite, a xanthan gum, paraformaldehyde, a pipe dope, hydrolyzed polyacrylamide, sodium acid pyrophosphate, sodium carboxymethyl cellulose, sodium hydroxide added as pellets, and a sulfonated tall oil. Statistically significant reductions in plant yields (at the 95% confidence level) occurred at normal disposal rates with a long-chained aliphatic alcohol, sodium dichromate, diesel oil, guar gum, an iron chrome-lignosulfonate, lignite, a modified asphalt, a plant fiber-synthetic fiber mixture, a nonfermenting starch, potassium chloride, pregelatinized starch, and sulfated triglyceride. Thirteen drilling fluid components added individually to a fluid base (water, bentonite, and barite) and then to soil were also tested for their effect on plant growth. Only the sulfated triglyceride (Torq-Trim) and the long-chain (high molecular weight) alcohol (Drillaid 405) caused no plant growth reductions at either rate added. The modified tannin (Desco) caused minimal reduction in bean growth only when added to soil in excess levels.

  20. Identification, Biochemical Characterization, and Subcellular Localization of Allantoate Amidohydrolases from Arabidopsis and Soybean

    PubMed Central

    Werner, Andrea K.; Sparkes, Imogen A.; Romeis, Tina; Witte, Claus-Peter

    2008-01-01

    Allantoate amidohydrolases (AAHs) hydrolyze the ureide allantoate to ureidoglycolate, CO2, and two molecules of ammonium. Allantoate degradation is required to recycle purine-ring nitrogen in all plants. Tropical legumes additionally transport fixed nitrogen via allantoin and allantoate into the shoot, where it serves as a general nitrogen source. AAHs from Arabidopsis (Arabidopsis thaliana; AtAAH) and from soybean (Glycine max; GmAAH) were cloned, expressed in planta as StrepII-tagged variants, and highly purified from leaf extracts. Both proteins form homodimers and release 2 mol ammonium/mol allantoate. Therefore, they can truly be classified as AAHs. The kinetic constants determined and the half-maximal activation by 2 to 3 μM manganese are consistent with allantoate being the in vivo substrate of manganese-loaded AAHs. The enzymes were strongly inhibited by micromolar concentrations of fluoride as well as by borate, and by millimolar concentrations of l-asparagine and l-aspartate but not d-asparagine. l-Asparagine likely functions as a competitive inhibitor. An Ataah T-DNA mutant, unable to grow on allantoin as sole nitrogen source, is rescued by the expression of StrepII-tagged variants of AtAAH and GmAAH, demonstrating that both proteins are functional in vivo. Similarly, an allantoinase (aln) mutant is rescued by a tagged AtAln variant. Fluorescent fusion proteins of allantoinase and both AAHs localize to the endoplasmic reticulum after transient expression and in transgenic plants. These findings demonstrate that after the generation of allantoin in the peroxisome, plant purine degradation continues in the endoplasmic reticulum. PMID:18065556

  1. Dentinal tubule occluding capability of nano-hydroxyapatite; The in-vitro evaluation.

    PubMed

    Baglar, Serdar; Erdem, Umit; Dogan, Mustafa; Turkoz, Mustafa

    2018-04-29

    In this in-vitro study, the effectiveness of experimental pure nano-hydroxyapatite (nHAp) and 1%, 2%, and 3% F⁻-doped nano-HAp on dentine tubule occlusion was investigated, and the cytotoxicity of the materials used in the experiment was evaluated. Nano-HAp types were synthesized by the precipitation method. Forty dentin specimens were randomly divided into five groups: (1) no treatment (control); (2) specimens treated with 10% pure nano-HAp; and (3-5) specimens treated with 1%, 2%, and 3% F⁻-doped 10% nano-HAp, respectively. To evaluate the effectiveness of the materials used, pH, FTIR, and scanning electron microscopy evaluations were performed before and after degradation in simulated body fluid. To determine the cytotoxicity of the materials, the MTT assay was performed. Statistical evaluations were performed with F and t tests. All of the nano-HAp materials used in this study built up an effective covering layer on the dentin surfaces, including plugs in the tubules. It was found that this layer also resisted degradation. None of the evaluated nano-HAp types showed toxicity. Fluoride doping showed a positive effect on physical and chemical stability up to a critical value of 1% F⁻. All of the evaluated nano-HAp types may be used effectively in dentin hypersensitivity treatment. The formed nano-HAp layers appeared resistant to hydrolytic degradation. The pure and 1% F⁻-doped nano-HAp showed the highest biocompatibility; thus, it was assessed that pure and 1% F⁻-doped materials may be used as active ingredients in dentin hypersensitivity agents. © 2018 Wiley Periodicals, Inc.

  2. Evaluating the value of ENVISAT ASAR Data for the mapping and monitoring of peatland water table depths

    NASA Astrophysics Data System (ADS)

    Bechtold, Michel; Schlaffer, Stefan

    2015-04-01

    The Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT collected C-band microwave backscatter data from 2005 to 2012. Backscatter in the C-band depends to a large degree on the roughness and the moisture status of vegetation and the soil surface, with a penetration depth of ca. 3 cm. In wetlands with stable high water levels, the annual soil surface moisture dynamics are very distinct compared to the surrounding areas, which allows the monitoring of such environments with ASAR data (Reschke et al. 2012). Also in drained peatlands, the moisture status of vegetation and soil surface strongly depends on water table depth, due to the high hydraulic conductivities of many peat soils in the low suction range (Dettmann et al. 2014). We hypothesize that this allows the characterization of water table depths with ASAR data. Here we analyze whether ASAR data can be used for the spatial and temporal estimation of water table depths in different peatlands (natural, near-natural, agriculturally used and rewetted). Mapping and monitoring of water table depths is of crucial importance, e.g. for upscaling greenhouse gas emissions and evaluating the success of peatland rewetting projects. ASAR data is analyzed here with a new map of water table depths for the organic soils in Germany (Bechtold et al. 2014), as well as with a comprehensive data set of monitored peatland water levels from 1100 dip wells in 54 peatlands. ASAR time series from the years 2005-2012, with irregular temporal sampling intervals of 3-14 days, were processed. Areas covered by snow were masked. Preliminary results on the accuracy of spatial estimates show significant correlations between long-term backscatter statistics and spatially averaged water table depths extracted from the map at the resolution of the ASAR data. Backscatter also correlates with long-term averages of point-scale water table depth data from the monitoring wells. For the latter, the correlation is highest between the dry-reference backscatter values and the summer mean water table depth. Using the boosted regression tree model of Bechtold et al. (2014), we evaluate whether the ASAR data can improve prediction accuracy and/or replace parts of the ancillary data that are often not available in other countries. In the temporal domain, preliminary results show a stronger relationship between backscatter and water table depth than in the spatial domain. For a variety of vegetation covers, the temporal monitoring potential of ASAR data is evaluated at the level of annual water table depth statistics. Bechtold, M., Tiemeyer, B., Laggner, A., Leppelt, T., Frahm, E., and Belting, S., 2014. Large-scale regionalization of water table depth in peatlands optimized for greenhouse gas emission upscaling, Hydrol. Earth Syst. Sci., 18, 3319-3339. Dettmann, U., Bechtold, M., Frahm, E., Tiemeyer, B., 2014. On the applicability of unimodal and bimodal van Genuchten-Mualem based models to peat and other organic soils under evaporation conditions. Journal of Hydrology, 515, 103-115. Reschke, J., Bartsch, A., Schlaffer, S., Schepaschenko, D., 2012. Capability of C-Band SAR for Operational Wetland Monitoring at High Latitudes. Remote Sens. 4, 2923-2943.
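
    A toy sketch of the spatial correlation step described above, with synthetic data standing in for the long-term backscatter statistics and water table depths (all names and numbers are assumptions, not the study's data):

      import numpy as np

      rng = np.random.default_rng(0)
      wtd = rng.uniform(-0.6, 0.0, size=54)  # mean water table depth (m), one value per peatland
      backscatter = 2.0 * wtd + rng.normal(0.0, 0.1, size=54)  # long-term mean backscatter (dB), toy link

      r = np.corrcoef(backscatter, wtd)[0, 1]
      print(f"Pearson r between backscatter statistic and WTD: {r:.2f}")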

  3. Shallow landslide stability computation using a distributed transient response model for susceptibility assessment and validation. A case study from Ribeira Quente valley (S. Miguel island, Azores)

    NASA Astrophysics Data System (ADS)

    Amaral, P.; Marques, R.; Zêzere, J. L.; Marques, F.; Queiroz, G.

    2009-04-01

    In the last 15 years, several heavy rainstorms have occurred in Povoação County (S. Miguel Island, Azores), namely in the Ribeira Quente Valley. These rainfall events have triggered hundreds of shallow landslides that killed tens of people and have been responsible for direct and indirect damages amounting to tens of millions of Euros. On 6 March 2005, an intense rainfall episode, with up to 160 mm of rain in less than 24 h, triggered several shallow landslides that caused 3 fatalities and damaged or blocked roads. The Ribeira Quente Valley has an area of about 9.5 km2 and consists mainly of pyroclastic materials (pumice ash and lapilli) produced by the explosive eruptions of Furnas Volcano. To provide an assessment of slope-failure conditions for the 6 March 2005 rainfall event, a distributed transient response model for slope stability analysis was applied. The adopted methodology is a modified version of Iverson's (2000) transient response model, which couples an infinite-slope stability analysis with an analytic solution of Richards' equation for vertical water infiltration in quasi-saturated soil. The validation was made on two different scales: (1) at the slope scale, using two distinct test sites where landslides were triggered; and (2) at the basin scale, using the entire landslide database and generalizing the modeling input parameters for the regional spatialization of results. At the slope scale, the obtained results were very accurate, and it was possible to predict the precise time of the slope failures. At the basin scale, the obtained results were very conservative, even though the model predicted all the observed landslide locations within the 23.7% of the area classified as unstable at the time of the slope failures. This methodology proved to be a reasonable tool for landslide forecasting, for both temporal and spatial distributions, on both slope and regional scales. In the future, the model components will be integrated into a GIS-based system that will publish the factor of safety (FS) values to a WebGIS platform, based on near-real-time ground-based rainfall monitoring. This application will allow the evaluation of scenarios considering the variation of the pressure head response related to the transient rainfall regime. The resulting computational platform, combined with regional empirical rainfall-triggered landslide thresholds (Marques et al. 2008), can be incorporated in a common server with the Regional Civil Protection for emergency planning purposes. This work is part of the project VOLCSOILRISK (Volcanic Soils Geotechnical Characterization for Landslide Risk Mitigation), supported by Direcção Regional da Ciência e Tecnologia do Governo Regional dos Açores. References: IVERSON, R.M. (2000) - Landslide triggering by rain infiltration. Water Resources Research 36, 1897-1910. MARQUES, R., ZÊZERE, J.L., TRIGO, R., GASPAR, J.L., TRIGO, I. (2008) - Rainfall patterns and critical values associated with landslides in Povoação County (São Miguel Island, Azores): relationships with the North Atlantic Oscillation. Hydrol. Process. 22, 478-494. DOI: 10.1002/hyp.6879.
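
    For readers unfamiliar with the underlying stability criterion, the sketch below evaluates the infinite-slope factor of safety used in Iverson-type transient response models; the parameter values are illustrative and not those of the Ribeira Quente case study.

      # FS = tan(phi)/tan(theta)
      #      + (c - psi*gamma_w*tan(phi)) / (gamma_s * Z * sin(theta) * cos(theta))
      import math

      def factor_of_safety(theta, phi, c, psi, Z, gamma_s=19e3, gamma_w=9.81e3):
          """theta, phi in radians; c in Pa; psi pressure head in m; Z depth in m."""
          frictional = math.tan(phi) / math.tan(theta)
          cohesive = (c - psi * gamma_w * math.tan(phi)) / (
              gamma_s * Z * math.sin(theta) * math.cos(theta))
          return frictional + cohesive

      # As infiltration raises the pressure head psi, FS drops toward 1 (failure).
      # Example: 35 degree slope, 1.5 m failure depth, 2 kPa cohesion.
      for psi in (0.0, 0.5, 1.0):
          fs = factor_of_safety(math.radians(35), math.radians(33), 2e3, psi, 1.5)
          print(f"psi = {psi:.1f} m -> FS = {fs:.2f}")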

  4. Determination of kinetic isotopic fractionation of water during bare soil evaporation

    NASA Astrophysics Data System (ADS)

    Quade, Maria; Brüggemann, Nicolas; Graf, Alexander; Rothfuss, Youri

    2017-04-01

    A process-based understanding of the water cycle in the atmosphere is important for improving meteorological and hydrological forecasting models. Usually only the net flux of evapotranspiration (ET) is measured, while land-surface models compute its raw components, evaporation (E) and transpiration (T). Isotopologues can be used as tracers to partition ET, but this requires knowledge of the kinetic isotopic fractionation factor (αK), which impacts the stable isotopic composition of water pools (e.g., soil and plant waters) during phase change and vapor transport by soil evaporation and plant transpiration. It is defined as a function of the ratio of the transport resistances in air of the less to the more abundant isotopologue. Previous studies determined αK for free evaporating water (Merlivat, 1978) or bare soil evaporation (Braud et al. 2009) at only low temporal resolution. The goal of this study is to provide estimates at higher temporal resolution. We performed a soil evaporation laboratory experiment to determine αK by applying the Craig and Gordon (1965) model. A 0.7 m high column (0.48 m i.d.) was filled with silt loam (20.1% sand, 65% silt, 14.9% clay) and saturated with water of known isotopic composition. Soil volumetric water content, temperature and the isotopic composition (δ) of the soil water vapor were measured at six different depths. At each depth, microporous polypropylene tubing allowed the sampling of soil water vapor and the measurement of its δ in a non-destructive manner with high precision and accuracy, as detailed in Rothfuss et al. (2013). In addition, atmospheric water vapor was sampled at seven different heights up to one meter above the surface for isotopic analysis. Results showed that soil and atmospheric δ profiles could be monitored at high temporal and vertical resolution during the course of the experiment. αK could be calculated using an inverse modeling approach and the Keeling (1958) plot method at high temporal resolution over a long period. We observed an increasing δ in the evaporating water vapor due to more enriched surface water, which leads to a higher transport resistance and an increasing αK. References: Braud, I., Bariac, T., Biron, P., and Vauclin, M.: Isotopic composition of bare soil evaporated water vapor. Part II: Modeling of RUBIC IV experimental results, J. Hydrol., 369, 17-29. Craig, H. and Gordon, L.I., 1965. Deuterium and oxygen 18 variations in the ocean and marine atmosphere. In: E. Tongiogi (Editor), Stable Isotopes in Oceanographic Studies and Paleotemperatures. V. Lishi, Spoleto, Italy, pp. 9-130. Keeling, C. D.: The Concentration and Isotopic Abundances of Atmospheric Carbon Dioxide in Rural Areas, Geochim. Cosmochim. Acta, 13, 322-334. Merlivat, L., 1978. Molecular Diffusivities of H216O, HD16O, and H218O in Gases. J Chem Phys, 69, 2864-2871. Rothfuss, Y. et al., 2013. Monitoring water stable isotopic composition in soils using gas-permeable tubing and infrared laser absorption spectroscopy. Water Resour. Res., 49, 1-9.
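
    A minimal sketch of the Keeling (1958) plot step mentioned above: regressing δ against the inverse vapor concentration and reading the source (evaporation flux) signature off the intercept. The numbers are synthetic, for illustration only.

      import numpy as np

      c = np.array([8.0, 10.0, 12.0, 15.0, 18.0])            # vapor concentration (mmol/mol)
      delta = np.array([-20.1, -18.5, -17.4, -16.3, -15.4])  # delta-18O (permil)

      # delta_mix = delta_source + m * (1/c); the intercept is the source signature.
      slope, intercept = np.polyfit(1.0 / c, delta, 1)
      print(f"Keeling intercept (delta of evaporation flux): {intercept:.1f} permil")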

  5. Evaluation of the Ross fast solution of Richards’ equation in unfavourable conditions for standard finite element methods

    NASA Astrophysics Data System (ADS)

    Crevoisier, David; Chanzy, André; Voltz, Marc

    2009-06-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using the Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including those that would lead to serious numerical difficulties for standard numerical methods. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solute transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of the numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with the modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves more robust numerically in every tested scenario, and the computing time/accuracy trade-off is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D.
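
    For orientation, a short sketch of the Brooks and Corey (1964) hydraulic functions referred to above (Burdine form for the conductivity); all parameter values are illustrative.

      import numpy as np

      def brooks_corey(h, h_b=-0.2, lam=0.3, theta_r=0.05, theta_s=0.45, K_s=1e-5):
          """h: pressure head (m, negative when unsaturated).
          Returns (theta, K): water content (-) and conductivity (m/s)."""
          h = np.asarray(h, dtype=float)
          Se = np.where(h < h_b, (h_b / h) ** lam, 1.0)  # effective saturation
          theta = theta_r + Se * (theta_s - theta_r)
          K = K_s * Se ** (3.0 + 2.0 / lam)              # Burdine conductivity model
          return theta, K

      theta, K = brooks_corey([-0.1, -1.0, -10.0])
      print(theta, K)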

  6. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop countrywide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to get realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges with no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and showed the difficulty of simulating areas with sinks, such as karstic areas, and dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.
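
    A minimal sketch of Muskingum-type channel routing as referenced above: the outflow is a weighted combination of the current and previous inflow and the previous outflow. In the Muskingum-Cunge variant, K and X are derived from channel properties; here they, and the inflow series, are simply assumed.

      def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
          denom = 2.0 * K * (1.0 - X) + dt
          c1 = (dt - 2.0 * K * X) / denom
          c2 = (dt + 2.0 * K * X) / denom
          c3 = (2.0 * K * (1.0 - X) - dt) / denom  # c1 + c2 + c3 == 1
          out = [inflow[0]]                        # assume initial steady state
          for t in range(1, len(inflow)):
              out.append(c1 * inflow[t] + c2 * inflow[t - 1] + c3 * out[t - 1])
          return out

      print(muskingum_route([10, 30, 60, 45, 25, 15, 10]))  # m3/s, hourly steps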

  7. Regional frequency analysis of extreme rainfall for the Baltimore Metropolitan region based on stochastic storm transposition

    NASA Astrophysics Data System (ADS)

    Zhou, Z.; Smith, J. A.; Yang, L.; Baeck, M. L.; Wright, D.; Liu, S.

    2017-12-01

    Regional frequency analyses of extreme rainfall are critical for the development of engineering hydrometeorology procedures. In conventional approaches, the assumptions that 'design storms' have specified time profiles and are uniform in space are commonly applied but often not appropriate, especially over regions with heterogeneous environments (due to topography, water-land boundaries and land surface properties). In this study, we present regional frequency analyses of extreme rainfall for the Baltimore study region, combining storm catalogs of rainfall fields derived from weather radar with stochastic storm transposition (SST; developed by Wright et al., 2013). The study region is Dead Run, a small (14.3 km2) urban watershed in the Baltimore Metropolitan region. Our analyses build on previous empirical and modeling studies showing pronounced spatial heterogeneities in rainfall due to the complex terrain, including the Chesapeake Bay to the east, mountainous terrain to the west and urbanization in this region. We expand the original SST approach by applying a multiplier field that accounts for spatial heterogeneities in extreme rainfall. We also characterize the spatial heterogeneities of the extreme rainfall distribution through analyses of rainfall fields in the storm catalogs. We examine the characteristics of regional extreme rainfall and derive intensity-duration-frequency (IDF) curves using the SST approach for heterogeneous regions. Our results highlight the significant heterogeneity of extreme rainfall in this region. Estimates of IDF show the advantages of SST in capturing the space-time structure of extreme rainfall. We also illustrate the application of SST analyses for flood frequency analyses using a distributed hydrological model. Reference: Wright, D. B., J. A. Smith, G. Villarini, and M. L. Baeck (2013), Estimating the frequency of extreme rainfall using weather radar and stochastic storm transposition, J. Hydrol., 488, 150-165.
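
    As a toy illustration of turning a storm catalog into exceedance frequencies (the final step of an IDF analysis), the sketch below ranks synthetic annual maxima and assigns Weibull plotting positions; it is not the SST procedure itself, which resamples and transposes observed storm fields.

      import numpy as np

      annual_max_1h = np.array([22., 31., 28., 45., 19., 38., 27., 52., 33., 25.])
      depth = np.sort(annual_max_1h)[::-1]         # largest first
      rank = np.arange(1, depth.size + 1)
      return_period = (depth.size + 1) / rank      # Weibull plotting position, years

      for d, T in zip(depth, return_period):
          print(f"{d:5.1f} mm/h  ~ {T:4.1f}-yr return period")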

  8. Studies of metabolic pathways of trimebutine by simultaneous administration of trimebutine and its deuterium-labeled metabolite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miura, Y.; Chishima, S.; Takeyama, S.

    1989-07-01

    Trimebutine maleate (I), (±)-2-dimethylamino-2-phenylbutyl 3,4,5-trimethoxybenzoate hydrogen maleate, and a deuterium-labeled sample of its hydrolyzed metabolite, 2-dimethylamino-2-phenylbutanol-d3 (II-d3), were simultaneously administered to experimental animals at an oral dose of 10 or 50 μmol/kg, and the distribution ratios of the two alternative initial metabolic steps, i.e., ester hydrolysis and N-demethylation, were estimated by determining the composition of the urinary alcohol-moiety metabolites, II and its mono- and di-demethylated metabolites III and IV, by GC/MS. In dogs, the order of quantities of the metabolites from II-d3 was II ≫ III ≫ IV, showing predominance of conjugation over N-demethylation. However, this order was reversed when the amounts of the metabolites from I were compared, indicating that I was preferentially metabolized by N-demethylation followed by ester hydrolysis and conjugation, in this order. In rats, a considerable proportion of I was presumed to be metabolized by ester hydrolysis before N-demethylation. In in vitro experiments employing liver microsomes and homogenates of liver and small intestine from rats and dogs, it was found that both ester-hydrolyzing and N-demethylating activities were higher in rats than in dogs, and the conjugating activity was higher in dogs than in rats. It was also found that I, having high lipophilicity, was more susceptible to N-demethylation than the less lipophilic II. These results from the in vitro experiments could account for the species differences in the distribution ratio of the metabolic pathways of I in vivo.

  9. Peptidase modulation of vasoactive intestinal peptide pulmonary relaxation in tracheal superfused guinea pig lungs.

    PubMed Central

    Lilly, C M; Martins, M A; Drazen, J M

    1993-01-01

    The effects of enzyme inhibitors on vasoactive intestinal peptide (VIP)-induced decreases in airway opening pressure (PaO) and VIP-like immunoreactivity (VIP-LI) recovery were studied in isolated tracheal superfused guinea pig lungs. In the absence of inhibitors, VIP, at 0.38 (95% CI 0.33-0.54) nmol/kg animal weight, resulted in a 50% decrease in PaO, and 33% of a 1 nmol/kg VIP dose was recovered as intact VIP. In the presence of two combinations of enzyme inhibitors, SCH 32615 (S, 10 microM) and aprotinin (A, 500 trypsin inhibitor units [TIU]/kg) or S and soybean trypsin inhibitor (T, 500 TIU/kg), VIP caused a significantly greater decrease in PaO and greater quantities of VIP were recovered from lung effluent (both P < 0.001). The addition of captopril (3 microM), leupeptin (4 microM), or bestatin (1 microM) failed to further increase pulmonary relaxation or recovery of VIP-LI. When given singly, A, T, and S did not augment the effects or recovery of VIP. The efficacy of S (a specific inhibitor of neutral endopeptidase [NEP]) and of A and T (serine protease inhibitors) thus implicated NEP and at least one serine protease as primary modulators of VIP activity in the guinea pig lung. We sought to corroborate this finding by characterizing the predominant amino acid sites at which VIP is hydrolyzed in the lung. When [mono(125I)iodo-Tyr10]VIP was offered to the lung, in the presence and absence of the active inhibitors, cleavage products consistent with activity by NEP and a tryptic enzyme were recovered. These data demonstrate that NEP and a peptidase with an inhibitor profile and cleavage pattern compatible with a tryptic enzyme inactivate VIP in a physiologically competitive manner. PMID:7678603

  10. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes, and enterprises implement B2B collaborations to execute these processes. In B2B collaborations, the integration and interoperability of the enterprises' processes and systems are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must derive the interface process that represents the role it performs in the collaborative process, in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture (MDA) to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to the collaborative process.
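
    A minimal sketch, not the authors' transformation rules, of the core idea: projecting a collaborative process model (here, a plain interaction list) onto one enterprise's interface process by keeping only the interactions in which it participates.

      from dataclasses import dataclass

      @dataclass
      class Interaction:
          sender: str
          receiver: str
          message: str

      # A toy collaborative process: the global view of all interactions.
      collaborative = [
          Interaction("Buyer", "Supplier", "PurchaseOrder"),
          Interaction("Supplier", "Buyer", "OrderConfirmation"),
          Interaction("Supplier", "Carrier", "ShippingRequest"),
      ]

      def interface_process(model, role):
          """Keep the interactions visible to `role`, labelled send/receive."""
          steps = []
          for i in model:
              if i.sender == role:
                  steps.append(("send", i.message, i.receiver))
              elif i.receiver == role:
                  steps.append(("receive", i.message, i.sender))
          return steps

      print(interface_process(collaborative, "Supplier"))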

  11. Extensible packet processing architecture

    DOEpatents

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines, and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.

  12. Processed and ultra-processed foods are associated with lower-quality nutrient profiles in children from Colombia.

    PubMed

    Cornwell, Brittany; Villamor, Eduardo; Mora-Plazas, Mercedes; Marin, Constanza; Monteiro, Carlos A; Baylin, Ana

    2018-01-01

    To determine if processed and ultra-processed foods consumed by children in Colombia are associated with lower-quality nutrient profiles than less processed foods. We obtained information on sociodemographic and anthropometric variables and dietary information through dietary records and 24 h recalls from a convenience sample of the Bogotá School Children Cohort. Foods were classified into three categories: (i) unprocessed and minimally processed foods, (ii) processed culinary ingredients and (iii) processed and ultra-processed foods. We also examined the combination of unprocessed foods and processed culinary ingredients. The setting was a sample of children from low- to middle-income families in Bogotá, Colombia; the subjects were children aged 5-12 years in the 2011 Bogotá School Children Cohort. We found that processed and ultra-processed foods are of lower dietary quality in general. Nutrients that were lower in processed and ultra-processed foods following adjustment for total energy intake included n-3 PUFA, vitamins A, B12, C and E, Ca and Zn. Nutrients that were higher in energy-adjusted processed and ultra-processed foods compared with unprocessed foods included Na, sugar and trans-fatty acids, although we also found that some healthy nutrients, including folate and Fe, were higher in processed and ultra-processed foods compared with unprocessed and minimally processed foods. Processed and ultra-processed foods generally have unhealthy nutrition profiles. Our findings suggest that categorization of foods based on processing characteristics is promising for understanding the influence of food processing on children's dietary quality. More studies accounting for the type and degree of food processing are needed.

  13. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.

  14. On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process

    NASA Astrophysics Data System (ADS)

    Hongzhi, Zhao; Jian, Zhang

    2018-03-01

    The paper presents an approach for the intelligent design and planning of process routes, based on the gun breech machining process, addressing several problems such as the complexity of the gun breech machining process, the tedium of route design, and the long cycle and poor manageability of traditional process routes. Based on the gun breech machining process, an intelligent process route design and planning system was developed using DEST and VC++. The system includes two functional modules: intelligent process route design and process route planning. The intelligent design module, through analysis of the gun breech machining process, summarizes breech process knowledge in order to build the knowledge base and inference engine, from which the gun breech process route is output intelligently. On the basis of the intelligent route design module, the final process route is made, edited and managed in the process route planning module.

  15. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: the KBW gasification process, the ICI low-pressure methanol process and the Mobil M-gasoline process, along with ancillary processes such as the oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  16. Watershed modeling tools and data for prognostic and diagnostic

    NASA Astrophysics Data System (ADS)

    Chambel-Leitao, P.; Brito, D.; Neves, R.

    2009-04-01

    When eutrophication is considered an important process to control, this can be accomplished by reducing nitrogen and phosphorus losses from both point and nonpoint sources and by assessing the effectiveness of the pollution reduction strategy. HARP-NUT guidelines (Guidelines on Harmonized Quantification and Reporting Procedures for Nutrients) (Borgvang & Selvik, 2000) are presented by OSPAR as the best common quantification and reporting procedures for calculating the reduction of nutrient inputs. In 2000, OSPAR adopted the HARP-NUT guidelines on a trial basis. They were intended to serve as a tool for OSPAR Contracting Parties to report, in a harmonized manner, their different commitments, present or future, with regard to nutrients under the OSPAR Convention, in particular the "Strategy to Combat Eutrophication". The HARP-NUT Guidelines (Borgvang and Selvik, 2000; Schoumans, 2003) were developed to quantify and report on the individual sources of nitrogen and phosphorus discharges/losses to surface waters (Source Orientated Approach). These results can be compared to the total riverine nitrogen and phosphorus loads measured at downstream monitoring points (Load Orientated Approach), as load reconciliation. Nitrogen and phosphorus retention in river systems represents the connecting link between the Source Orientated Approach and the Load Orientated Approach. Both approaches are necessary for verification purposes, and both may be needed for providing the information required for the various commitments. Guidelines 2, 3, 4 and 5 are mainly concerned with source estimation. They present a set of simple calculations that allow the estimation of the origin of loads. Guideline 6 is a particular case where the application of a model is advised, in order to estimate the sources of nutrients from diffuse sources associated with land use/land cover. The model chosen for this was SWAT (Arnold & Fohrer, 2005), because it is suggested in Guideline 6 and because it is widely used worldwide. Watershed models are characterized by the high number of processes they simulate. The estimation of these processes is also data intensive, requiring data on topography, land use/land cover, agricultural practices, soil type, precipitation, temperature, relative humidity, wind and radiation. Every year new data become available, namely from satellites, which has allowed improvements in the quality of model input and in the calibration of the models (Galvão et al., 2004b). Tools to cope with the vast amount of data have been developed: data formatting, data retrieval, databases and metadata bases. The high number of processes simulated in watershed models makes their output very extensive. The SWAT model outputs were modified to produce MOHID-compliant result files (time series and HDF). These changes maintained the integrity of the original model, thus guaranteeing that results remain equal to those of the original version of SWAT. This allowed results to be output in MOHID format, making it possible to process them immediately with MOHID visualization and data analysis tools (Chambel-Leitão et al. 2007; Trancoso et al., 2009). In addition, SWAT was modified to produce result files in HDF5 format, which allows the visualization of watershed properties (modeled by SWAT) in animated maps using MOHID GIS; a minimal sketch of this kind of HDF5 time series output follows the references below. The modified version of SWAT described here has been applied to various national and European projects.
Results of the application of this modified version of SWAT to estimate hydrology and nutrient loads to estuaries and water bodies will be shown (Chambel-Leitão, 2008; Yarrow & Chambel-Leitão 2008; Chambel-Leitão et al. 2008; Yarrow & P. Chambel-Leitão, 2007; Coelho et al., 2008). Keywords: Watershed models, SWAT, MOHID LAND, Hydrology, Nutrient Loads. References: Arnold, J. G. and Fohrer, N. (2005). SWAT2000: current capabilities and research opportunities in applied watershed modeling. Hydrol. Process. 19, 563-572 Borgvang, S-A. & Selvik, J.S., 2000, eds. Development of HARP Guidelines - Harmonised Quantification and Reporting Procedure for Nutrients. SFT Report 1759/2000. ISBN 82-7655-401-6. 179 pp. Chambel-Leitão P. (2008) Load and flow estimation: HARP-NUT guidelines and SWAT model description. In Perspectives on Integrated Coastal Zone Management in South America R Neves, J Baretta & M Mateus (eds.). IST Press, Lisbon, Portugal. (ISBN: 978-972-8469-74-0) Chambel-Leitão P., Sampaio A., Almeida, P. (2008) Load and flow estimation in Santos watersheds. In Perspectives on Integrated Coastal Zone Management in South America R Neves, J Baretta & M Mateus (eds.). IST Press, Lisbon, Portugal. (ISBN: 978-972-8469-74-0) Chambel-Leitão P., F. Braunschweig, L. Fernandes, R. Neves, P. Galvão. (2007) Integration of MOHID model and tools with SWAT model, submitted to the Proceedings of the 4th International SWAT Conference, July 2-6 2007. Coelho H., Silva A., P. Chambel-Leitão, Obermann M. (2008) On The Origin Of Cyanobacteria Blooms In The Enxoé Reservoir. 13th World Water Congress, Montpellier, France Galvão P., Chambel-Leitão, P., P. Leitão, R. Neves. (2004a) A different approach to the modified Picard method for water flow in variably saturated media. Computational Methods in Water Resources. Chapel Hill, North Carolina USA Galvão P., Neves R., Silva A., Chambel-Leitão P. & F. Braunschweig (2004b) Integrated Watershed Modeling. Proceedings of MERIS User Workshop ESA-ESRIN, Frascati, Italy May 2004. Neves R., Galvão P., Braunschweig F., Chambel-Leitão P. (2007) New Approaches to Integrated Watershed Modeling. Proceedings of SPS (NFA) 5th Workshop on Sustainable Use And Development Of Watersheds For Human Security And Peace October 22-26, 2007 Istanbul, TURKEY Schoumans, O.F. & Silgram, M. (eds.), 2003. Review and literature evaluation of Quantification Tools for the assessment of nutrient losses at catchment scale. EUROHARP report 1-2003, NIVA report SNO 4739-2003, ISBN 82-557-4411-5 Trancoso, R., F. Braunschweig, Chambel-Leitão P., Neves, R., Obermann, M. (2009) An advanced modelling tool for simulating complex river systems. Accepted for publication in Journal of Total Environment. Yarrow M., Chambel-Leitão P. (2006) Calibration of the SWAT model to the Aysén basin of the Chilean Patagonia: Challenges and Lessons. Proceedings of the Watershed Management to Meet Water Quality Standards and TMDLS (Total Maximum Daily Load) 10-14 March 2007, San Antonio, Texas 701P0207. Yarrow M., Chambel-Leitão P. (2007) Simulating Nothofagus forests in the Chilean Patagonia: a test and analysis of tree growth and nutrient cycling in SWAT. Submitted to the Proceedings of the 4th International SWAT Conference July 2-6 2007. Yarrow, M., Chambel-Leitão P. (2008) Estimation of loads in the Aysén Basin of the Chilean Patagonia: SWAT model and Harp-Nut guidelines. In Perspectives on Integrated Coastal Zone Management in South America R Neves, J Baretta & M Mateus (eds.). 
IST Press, Lisbon, Portugal. (ISBN: 978-972-8469-74-0)
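
    As a hedged illustration of the kind of change described in this record, the sketch below writes a model time series to HDF5 with h5py; the group/dataset layout is invented for illustration, since the actual MOHID-compliant layout is defined by the MOHID tools.

      import numpy as np
      import h5py

      flow = np.random.default_rng(1).gamma(2.0, 1.5, size=365)  # toy daily flow (m3/s)

      with h5py.File("swat_results.h5", "w") as f:
          g = f.create_group("Results/outlet_1")      # assumed layout
          g.create_dataset("flow", data=flow, compression="gzip")
          g["flow"].attrs["units"] = "m3/s"
          g["flow"].attrs["timestep"] = "1 day"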

  17. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
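
    A rough, single-process sketch of the interleaving idea in this record: two "reduction cores" place their input buffers into shared memory in interleaved chunks, and each then reduces every other chunk. Real implementations run the cores in parallel; this only illustrates the data layout.

      import numpy as np

      CHUNK = 4
      buf_a = np.arange(16)        # input buffer of reduction core A
      buf_b = np.arange(16) * 10   # input buffer of reduction core B

      # Interleave A's and B's chunks into one shared buffer: A0 B0 A1 B1 ...
      chunks_a = buf_a.reshape(-1, CHUNK)
      chunks_b = buf_b.reshape(-1, CHUNK)
      interleaved = np.empty((chunks_a.shape[0] * 2, CHUNK), dtype=buf_a.dtype)
      interleaved[0::2] = chunks_a
      interleaved[1::2] = chunks_b

      # Each core locally reduces every other chunk of the interleaved buffer.
      partial_a = interleaved[0::2].sum(axis=0)  # core A's share
      partial_b = interleaved[1::2].sum(axis=0)  # core B's share
      print(partial_a + partial_b)               # combined local reduction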

  18. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  19. Situation awareness acquired from monitoring process plants - the Process Overview concept and measure.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-07-01

    We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.

  20. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  1. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  2. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  3. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  4. Cleanliness of Ti-bearing Al-killed ultra-low-carbon steel during different heating processes

    NASA Astrophysics Data System (ADS)

    Guo, Jian-long; Bao, Yan-ping; Wang, Min

    2017-12-01

    During the production of Ti-bearing Al-killed ultra-low-carbon (ULC) steel, two different heating processes were used when the converter tapping temperature or the molten steel temperature in the Ruhrstahl-Heraeus (RH) process was low: heating by Al addition during the RH decarburization process with final deoxidation at the end of the RH decarburization process (process-I), and increasing the oxygen content at the end of RH decarburization with heating and final deoxidation by one-time Al addition (process-II). Temperature increases of 10°C by the different processes were studied; the results showed that the two heating processes could achieve the same heating effect. The total oxygen (T.[O]) content in the slab and during the refining process was better controlled by process-I than by process-II. Statistical analysis of inclusions showed that the number of inclusions in the slab obtained by process-I was substantially smaller than that in the slab obtained by process-II. For process-I, the Al2O3 inclusions produced by the Al added to induce heating were substantially removed by the end of decarburization. The amounts of inclusions at the different refining stages were substantially greater for process-II than for process-I because of the higher dissolved oxygen concentration in process-II. Industrial test results showed that process-I was more beneficial for improving the cleanliness of the molten steel.

  5. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making, either to maintain process consistency or to identify optimal operating conditions. To predict whole-bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze bioprocesses based on a whole-process understanding, considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components cooperate with each other in performing their tasks, which include describing the whole-process behavior, evaluating process operating conditions, monitoring the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  6. Characterization of tillage effects on soil permeability using different measures of macroporosity derived from tension infiltrometry

    NASA Astrophysics Data System (ADS)

    Bodner, G.; Schwen, A.; Scholl, P.; Kammerer, G.; Buchan, G.; Kaul, H.-P.; Loiskandl, W.

    2010-05-01

    Soil macroporosity is a highly dynamic property influenced by environmental factors, such as raindrop impact, wetting-drying and freezing-thawing cycles, soil biota and plant roots, as well as by agricultural management measures. Macroporosity represents an important indicator of soil physical quality, particularly in relation to site-specific water transmission properties, and can be used as a sensitive measure to assess soil structural degradation. Its quantification is also required for the parameterization of dual-porosity models that are frequently used in environmental impact studies on erosion and solute (pesticide, nitrate) leaching. The importance of soil macroporosity for the water transport properties of the soil, and its complexity due to high spatio-temporal heterogeneity, make its quantitative assessment a challenging task. Tension infiltrometers have been shown to be adequate measurement devices to obtain data in the near-saturated range of water flow, where structural (macro)pores dominate the transport process. Different methods have been used to derive water transmission characteristics from tension infiltrometer measurements. Moret and Arrúe (2007) differentiated between using a minimum equivalent capillary pore radius and a flow-weighted mean pore radius to obtain representative macropore flow properties from tension infiltrometer data. Besides direct approaches based on Wooding's equation, inverse methods have also been applied to obtain soil hydraulic properties (Šimůnek et al. 1998). Using a dual-porosity model in the inverse procedure allows estimating parameters in the dynamic near-saturated range by numerical optimization against the infiltration measurements, while fixing parameters in the more stable textural range of small pores using, e.g., pressure plate data or even pedotransfer functions. The present work presents a comparison of quantitative measures of soil macroporosity derived from tension infiltrometer data by different approaches (direct vs. inverse evaluation, capillary vs. flow-weighted pore radius). We will show the influence of the distinct evaluation procedures on the resulting effective macroporosity, as well as on the relationships between macropore radius and hydraulic conductivity (Moret and Arrúe, 2007) and pore fraction, respectively (Carey et al., 2007). The infiltration measurements used in this study were obtained in a long-term tillage trial located in the semi-arid region of eastern Austria. Measurements were taken five times over the vegetation period, starting immediately after tillage until harvest of the winter wheat crop. Three tillage systems were evaluated: conventional tillage with plough, minimum tillage with chisel, and no-tillage. In addition to the infiltration measurements, soil water content was monitored continuously by a capacitance probe, in all three replicates of each tillage treatment, at 10, 20 and 40 cm soil depth. The water content time series are used to derive flow velocity in the wet range by cross-correlation analysis (Wu et al., 1997), as sketched after the references below. This effective parameter of water transmission will then be compared to the flow behaviour expected from the characterization of soil macroporosity. We will show that, mainly in no-tillage systems, large macropores contribute substantially to flow, and therefore the decision on which pore measure and evaluation procedure to use leads to substantial differences. For a detailed comparison of tillage effects on soil hydraulic properties it is therefore essential to analyse the contribution of the different tension-infiltrometry-based evaluation methods to explaining effective water transmission through the complex porous network of the soil. References: Carey, S.K., Quinton, W.L., Goeller, N.T. 2007. Field and laboratory estimates of pore size properties and hydraulic characteristics for subarctic organic soils. Hydrol. Process. 21, 2560-2571. Moret, D., Arrúe, J.L. 2007. Characterizing soil water-conducting macro- and mesoporosity as influenced by tillage using tension infiltrometry. Soil Sci. Soc. Am. J. 71, 500-506. Šimůnek, J., Wang, D., Shouse, P.J., van Genuchten, M.T. 1998. Analysis of field tension disc infiltrometer data by parameter estimation. Int. Agrophys. 12, 167-180. Wu, L., Jury, W.A., Chang, A.C. 1997. Time series analysis of field-measured water content of a sandy soil. Soil Sci. Soc. Am. J. 61, 742-745.
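
    A sketch of the cross-correlation step (after Wu et al., 1997): the lag of maximum cross-correlation between water content series at two depths, divided into the depth separation, gives an effective velocity. The series and the hourly sampling interval are synthetic assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n, true_lag = 500, 6                       # true travel time: 6 hours
      upper = rng.normal(0, 1, n).cumsum()       # water content anomaly at 10 cm
      lower = np.roll(upper, true_lag) + rng.normal(0, 0.2, n)  # delayed signal at 40 cm

      lags = np.arange(1, 48)
      cc = [np.corrcoef(upper[:-k], lower[k:])[0, 1] for k in lags]
      lag = lags[int(np.argmax(cc))]
      print(f"lag = {lag} h -> velocity ~ {0.30 / lag:.3f} m/h over 0.30 m")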

  7. Electricity from sunlight. [low cost silicon for solar cells

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Miller, J. W.; Lutwack, R.; Hsu, G.

    1978-01-01

    The paper discusses a number of new unconventional processes proposed for the low-cost production of silicon for solar cells. Consideration is given to: (1) the Battelle process (Zn/SiCl4), (2) the Battelle process (SiI4), (3) the Silane process, (4) the Motorola process (SiF4/SiF2), (5) the Westinghouse process (Na/SiCl4), (6) the Dow Corning process (C/SiO2), (7) the AeroChem process (SiCl4/H atom), and (8) the Stanford process (Na/SiF4). Preliminary results indicate that neither the conventional process nor the SiI4 process can meet the project goal of $10/kg by 1986. Preliminary cost evaluation results for the Zn/SiCl4 process are favorable.

  8. Composing Models of Geographic Physical Processes

    NASA Astrophysics Data System (ADS)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central to geographic information science; yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.

  9. Process-based tolerance assessment of connecting rod machining process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, P. Srinivasa; Surendra Babu, B.

    2016-06-01

    Process tolerancing based on process capability studies is an optimistic and pragmatic approach to determining manufacturing process tolerances. On adopting the define-measure-analyze-improve-control approach, the process potential capability index (Cp) and the process performance capability index (Cpk) values of the identified process characteristics of the connecting rod machining process are found to be greater than the industry benchmark of 1.33, i.e., the four sigma level. The tolerance chain diagram methodology is applied to the connecting rod in order to verify the manufacturing process tolerances at various operations of the connecting rod manufacturing process. This paper bridges the gap between the existing dimensional tolerances obtained via tolerance charting and process capability studies of the connecting rod component. Finally, a process tolerancing comparison has been done using tolerance capability expert software.
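    For context, the two capability indices cited above have standard definitions. With USL and LSL the upper and lower specification limits, \mu the process mean, and \sigma the process standard deviation:

        C_p = \frac{USL - LSL}{6\sigma}, \qquad C_{pk} = \min\left\{ \frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma} \right\}.

    A Cpk of 1.33 places the nearer specification limit 4\sigma from the process mean, which is why the abstract equates the 1.33 benchmark with a four sigma level.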

  10. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a computer node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
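    The buffering scheme claimed in this and the sibling patent below can be illustrated schematically. The Python sketch that follows models the protocol in ordinary memory (the ComputeNode class and list-based buffers are hypothetical stand-ins; a real implementation uses actual shared memory segments and synchronization):

        class ComputeNode:
            """Schematic model of per-process message buffers in shared memory."""
            def __init__(self, n_procs):
                # region of "shared memory": one message buffer per expected rank
                self.buffers = {rank: [] for rank in range(n_procs)}
                self.initialized = set()

            def send(self, src, dst, msg):
                # send without checking whether dst has been initialized;
                # the message is simply stored in dst's message buffer
                self.buffers[dst].append((src, msg))

            def init_process(self, rank):
                # on initialization, a process retrieves a "pointer" to its
                # buffer and drains any messages sent before it existed
                self.initialized.add(rank)
                pending, self.buffers[rank] = self.buffers[rank], []
                return pending

        node = ComputeNode(n_procs=2)
        node.init_process(0)
        node.send(0, 1, "halo data")    # rank 1 is not initialized yet
        print(node.init_process(1))     # [(0, 'halo data')]

    The key point of the claim is visible in send(): the sender never blocks on, or even queries, the receiver's initialization state.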

  11. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  12. Canadian Libraries and Mass Deacidification.

    ERIC Educational Resources Information Center

    Pacey, Antony

    1992-01-01

    Considers the advantages and disadvantages of six mass deacidification processes that libraries can use to salvage printed materials: the Wei T'o process, the Diethyl Zinc (DEZ) process, the FMC (Lithco) process, the Book Preservation Associates (BPA) process, the "Bookkeeper" process, and the "Lyophilization" process. The…

  13. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
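    One way to write such an index for a process k (a sketch consistent with the abstract's description, not necessarily the authors' exact derivation): with \Delta the model output, M_k the set of competing models for process k, and \theta_k their random parameters, a variance-based process sensitivity index has the form

        PS_k = \mathrm{Var}_{M_k,\theta_k}\left( E[\Delta \mid M_k, \theta_k] \right) / \mathrm{Var}(\Delta),

    where the outer variance runs over both the competing process models (weighted as in model averaging) and their parameters, so PS_k reflects output variance contributed jointly by model choice and parameter uncertainty.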

  14. Depth-of-processing effects on priming in stem completion: tests of the voluntary-contamination, conceptual-processing, and lexical-processing hypotheses.

    PubMed

    Richardson-Klavehn, A; Gardiner, J M

    1998-05-01

    Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.

  15. Improving operational anodising process performance using simulation approach

    NASA Astrophysics Data System (ADS)

    Liong, Choong-Yeun; Ghazali, Syarah Syahidah

    2015-10-01

    The use of aluminium is very widespread, especially in transportation, electrical and electronics, architectural, automotive and engineering applications sectors. Therefore, the anodizing process is an important process for aluminium in order to make the aluminium durable, attractive and weather resistant. This research is focused on the anodizing process operations in manufacturing and supplying of aluminium extrusion. The data required for the development of the model is collected from the observations and interviews conducted in the study. To study the current system, the processes involved in the anodizing process are modeled by using Arena 14.5 simulation software. Those processes consist of five main processes, namely the degreasing process, the etching process, the desmut process, the anodizing process, the sealing process and 16 other processes. The results obtained were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on the comparisons that have been done between the improvement methods, the productivity could be increased by reallocating the workers and reducing loading time.

  16. Value-driven process management: using value to improve processes.

    PubMed

    Melnyk, S A; Christensen, R T

    2000-08-01

    Every firm can be viewed as consisting of various processes. These processes affect everything that the firm does, from accepting orders and designing products to scheduling production. In many firms, the management of processes often reflects considerations of efficiency (cost) rather than effectiveness (value). In this article, we introduce a well-structured process for managing processes that begins not with the process, but rather with the customer, the product, and the concept of value. This process progresses through a number of steps which include issues such as defining value, generating the appropriate metrics, identifying the critical processes, mapping and assessing the performance of these processes, and identifying long- and short-term areas for action. What makes the approach presented in this article so powerful is that it explicitly links the customer to the process and that the process is evaluated in terms of its ability to effectively serve the customers.

  17. Method for routing events from key strokes in a multi-processing computer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, D.A.; Rustici, E.; Carter, K.H.

    1990-01-23

    The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which is a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to the processes assigned therewith after the supervisory process has completed operation.
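    The claimed routing logic can be sketched in a few lines of Python (the routing table, process names, and the KeyRouter class are hypothetical illustrations of the claim, not the patent's implementation):

        SUPERVISOR = "supervisor"

        class KeyRouter:
            def __init__(self, routes, default="shell"):
                self.routes, self.default = routes, default
                self.buf, self.supervising = [], False

            def key(self, code, deliver):
                # normal mode: send each keycode to its assigned process
                if self.supervising:
                    self.buf.append(code)        # hold while the supervisor runs
                    return
                proc = self.routes.get(code, self.default)
                deliver(proc, code)
                if proc == SUPERVISOR:
                    self.supervising = True      # start buffering new keycodes

            def feed_supervisor(self, deliver):
                # the running supervisor pulls additional keycodes from the buffer
                if self.buf:
                    deliver(SUPERVISOR, self.buf.pop(0))

            def supervisor_done(self, deliver):
                # flush remaining buffered keycodes to their assigned processes
                self.supervising = False
                while self.buf:
                    code = self.buf.pop(0)
                    deliver(self.routes.get(code, self.default), code)

        router = KeyRouter({"a": "editor", "F1": SUPERVISOR})
        deliver = lambda proc, code: print(proc, "<-", code)
        for c in ["a", "F1", "y", "a"]:
            router.key(c, deliver)
        router.supervisor_done(deliver)    # "y" and "a" are flushed to their owners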

  18. Issues Management Process Course # 38401

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binion, Ula Marie

    The purpose of this training is to advise Issues Management Coordinators (IMCs) on the revised Contractor Assurance System (CAS) Issues Management (IM) process. Terminal Objectives: Understand the Laboratory’s IM process; Understand your role in the Laboratory’s IM process. Learning Objectives: Describe the IM process within the context of the CAS; Describe the importance of implementing an institutional IM process at LANL; Describe the process flow for the Laboratory’s IM process; Apply the definition of an issue; Use available resources to determine initial screening risk levels for issues; Describe the required major process steps for each risk level; Describe the personnel responsibilities for IM process implementation; Access available resources to support IM process implementation.

  19. Social network supported process recommender system.

    PubMed

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as aids to process modeling. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, although process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are presented.

  20. [Definition and stabilization of processes I. Management processes and support in a Urology Department].

    PubMed

    Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Chiva, Vicente; Gamarra, Manuela

    2015-01-01

    The implementation of total quality management models in clinical departments is best adapted to the ISO 9004:2009 model. An essential part of implementing these models is the establishment of processes and their stabilization. There are four types of processes: key, management, support, and operative (clinical). Management processes have four parts: a process stabilization form, a process procedures form, a medical activities cost estimation form, and a process flow chart. In this paper we detail the creation of an essential process in a surgical department, namely the management of the surgery waiting list.

  1. T-Check in Technologies for Interoperability: Business Process Management in a Web Services Context

    DTIC Science & Technology

    2008-09-01

    [List-of-figures excerpt: Figure 2: UML Sequence Diagram; Figure 3: BPMN Diagram of the Order Processing Business Process; Figure 4: T-Check Process for Technology Evaluation; Figure 5: Notional System Architecture; Figure 6: Flow Chart of the Order Processing Business Process; Figure 7: Order Processing Activities.] Figure 3 (created with Intalio BPMS Designer [Intalio 2008]) shows a BPMN view of the Order Processing business process that is used in the…

  2. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities such as Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005 provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process that is liable for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
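    A minimal sketch of the capability arithmetic behind such a study, assuming a stable, normally distributed characteristic (the specification limits and measurements below are illustrative, not from the paper):

        import statistics

        data = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]  # illustrative
        USL, LSL = 10.10, 9.90              # hypothetical specification limits
        mu, sigma = statistics.mean(data), statistics.stdev(data)

        cp = (USL - LSL) / (6 * sigma)
        cpk = min((USL - mu) / (3 * sigma), (mu - LSL) / (3 * sigma))
        # by the usual convention, Cpk maps to a short-term sigma level of 3*Cpk
        print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  ~{3 * cpk:.1f} sigma level")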

  3. A practical approach for exploration and modeling of the design space of a bacterial vaccine cultivation process.

    PubMed

    Streefland, M; Van Herpen, P F G; Van de Waterbeemd, B; Van der Pol, L A; Beuvery, E C; Tramper, J; Martens, D E; Toft, M

    2009-10-15

    A licensed pharmaceutical process is required to be executed within the validated ranges throughout the lifetime of product manufacturing. Changes to the process, especially for processes involving biological products, usually require the manufacturer to demonstrate through new or additional clinical testing that the safety and efficacy of the product remain unchanged. Recent changes in the regulations for pharmaceutical processing allow broader ranges of process settings to be submitted for regulatory approval, the so-called process design space, which means that a manufacturer can optimize the process within the submitted ranges after the product has entered the market, allowing more flexible processes. In this article, the applicability of the process design space concept is investigated for the cultivation step of a vaccine against whooping cough disease. An experimental design (DoE) is applied to investigate the ranges of critical process parameters that still result in a product that meets specifications. The on-line process data, including near infrared spectroscopy, are used to build a descriptive model of the processes used in the experimental design. Finally, the data of all processes are integrated into a multivariate batch monitoring model that represents the investigated process design space. This article demonstrates how the general principles of PAT and process design space can be applied to an undefined biological product such as a whole cell vaccine. The approach chosen for model development described here allows on-line monitoring and control of cultivation batches in order to assure in real time that a process is running within the process design space.

  4. Processing approaches to cognition: the impetus from the levels-of-processing framework.

    PubMed

    Roediger, Henry L; Gallo, David A; Geraci, Lisa

    2002-01-01

    Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.

  5. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification is always based on a deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification, in that identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
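    For reference, the first-order Sobol index that this approach generalizes is, for output Y and input factor X_i,

        S_i = \mathrm{Var}_{X_i}\left( E[Y \mid X_i] \right) / \mathrm{Var}(Y),

    and the process-identification variant evaluates the analogous variance contribution when the "factor" is the choice of process conceptualization, averaged over the alternative models and their parameters.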

  6. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  7. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  8. Social Network Supported Process Recommender System

    PubMed Central

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as aids to process modeling. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, although process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are presented. PMID:24672309

  9. A model for process representation and synthesis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thomas, R. H.

    1971-01-01

    The problem of representing groups of loosely connected processes is investigated, and a model for process representation useful for synthesizing complex patterns of process behavior is developed. There are three parts. The first part isolates the concepts which form the basis for the process representation model by focusing on questions such as: What is a process? What is an event? Should one process be able to restrict the capabilities of another? The second part develops a model for process representation which captures the concepts and intuitions developed in the first part. The model presented is able to describe both the internal structure of individual processes and the interface structure between interacting processes. Much of the model's descriptive power derives from its use of the notion of process state as a vehicle for relating the internal and external aspects of process behavior. The third part demonstrates by example that the model for process representation is a useful one for synthesizing process behavior patterns. In it the model is used to define a variety of interesting process behavior patterns. The dissertation closes by suggesting how the model could be used as a semantic base for a very potent language extension facility.

  10. Process and Post-Process: A Discursive History.

    ERIC Educational Resources Information Center

    Matsuda, Paul Kei

    2003-01-01

    Examines the history of process and post-process in composition studies, focusing on ways in which terms such as "current-traditional rhetoric," "process," and "post-process" have contributed to the discursive construction of reality. Argues that use of the term post-process in the context of second language writing needs to be guided by a…

  11. Improving operational anodising process performance using simulation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liong, Choong-Yeun, E-mail: lg@ukm.edu.my; Ghazali, Syarah Syahidah, E-mail: syarah@gapps.kptm.edu.my

    The use of aluminium is very widespread, especially in transportation, electrical and electronics, architectural, automotive and engineering applications sectors. Therefore, the anodizing process is an important process for aluminium in order to make the aluminium durable, attractive and weather resistant. This research is focused on the anodizing process operations in manufacturing and supplying of aluminium extrusion. The data required for the development of the model is collected from the observations and interviews conducted in the study. To study the current system, the processes involved in the anodizing process are modeled by using Arena 14.5 simulation software. Those processes consist of five main processes, namely the degreasing process, the etching process, the desmut process, the anodizing process, the sealing process and 16 other processes. The results obtained were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on the comparisons that have been done between the improvement methods, the productivity could be increased by reallocating the workers and reducing loading time.

  12. Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.

    PubMed

    Böttcher, Björn

    2010-12-03

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving the very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes.
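    To convey the flavor of such a construction (a toy sketch only, not the authors' scheme): the simplest spatially inhomogeneous case mixes Gaussian increments whose scale depends on the current state, which an Euler-type loop can simulate directly.

        import math, random

        def sigma(x):
            # hypothetical state-dependent scale; any bounded positive function works
            return 0.5 + 0.4 * math.tanh(x)

        def simulate(x0=0.0, T=1.0, n=1000):
            dt, x, path = T / n, x0, [x0]
            for _ in range(n):
                # state space dependent mixing: the increment law depends on x
                x += sigma(x) * math.sqrt(dt) * random.gauss(0.0, 1.0)
                path.append(x)
            return path

        print(simulate()[-1])    # endpoint of one Monte Carlo sample path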

  13. Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond

    PubMed Central

    Böttcher, Björn

    2010-01-01

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving the very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes. PMID:21151931

  14. AIRSAR Automated Web-based Data Processing and Distribution System

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  15. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process compared with related studies.

  16. Electrotechnologies to process foods

    USDA-ARS?s Scientific Manuscript database

    Electrical energy is being used to process foods. In conventional food processing plants, electricity drives mechanical devices and controls the degree of process. In recent years, several processing technologies are being developed to process foods directly with electricity. Electrotechnologies use...

  17. Challenges associated with the implementation of the nursing process: A systematic review.

    PubMed

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    Nursing process is a scientific approach to the provision of quality nursing care. However, in practice, the implementation of this process is faced with numerous challenges. With knowledge of the challenges associated with the implementation of the nursing process, the nursing process can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. To retrieve and review related studies in this field, the databases Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were searched using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of the process, support from managing systems, and problems related to recording the nursing process were the main challenges extracted from the review of literature. On systematically reviewing the literature, intangible understanding of the concept of nursing process was identified as the main challenge in the nursing process. To achieve the best strategy to minimize this challenge, in addition to preparing facilitators for implementation of the nursing process, addressing intangible understanding of the concept of nursing process and different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses.

  18. Challenges associated with the implementation of the nursing process: A systematic review

    PubMed Central

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    Background: Nursing process is a scientific approach to the provision of quality nursing care. However, in practice, the implementation of this process is faced with numerous challenges. With knowledge of the challenges associated with the implementation of the nursing process, the nursing process can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. Materials and Methods: To retrieve and review related studies in this field, the databases Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were searched using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Results: Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of the process, support from managing systems, and problems related to recording the nursing process were the main challenges extracted from the review of literature. Conclusions: On systematically reviewing the literature, intangible understanding of the concept of nursing process was identified as the main challenge in the nursing process. To achieve the best strategy to minimize this challenge, in addition to preparing facilitators for implementation of the nursing process, addressing intangible understanding of the concept of nursing process and different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses. PMID:26257793

  19. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  20. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, in order to achieve zero defects during the processes, a genetic algorithm technique is applied to the optimized parameters obtained by the Taguchi method.
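    As a minimal illustration of the genetic-algorithm step (a toy two-parameter objective with real-valued encoding; the fitness surrogate, bounds, and operators below are hypothetical, whereas the study optimizes painting and washing parameters against defect data):

        import random

        def fitness(params):                 # hypothetical defect-rate surrogate
            t, p = params                    # e.g., temperature and pH
            return -((t - 60.0) ** 2 + (p - 7.0) ** 2)

        def ga(pop_size=20, gens=50):
            pop = [(random.uniform(20, 100), random.uniform(2, 12))
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]              # selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    child = tuple((x + y) / 2 + random.gauss(0, 0.5)  # crossover
                                  for x, y in zip(a, b))              # + mutation
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        print(ga())    # converges near the hypothetical optimum (60, 7)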

  1. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  2. A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses the wavelet threshold denoising method to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the features of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can identify the endpoint of the Cu CMP process.
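    A compact sketch of the two stages described above, assuming NumPy and the PyWavelets package are available (the wavelet choice, threshold rule, and filter constants are illustrative, not taken from the paper):

        import numpy as np
        import pywt  # PyWavelets

        def denoise(x, wavelet="db4", level=3):
            # wavelet threshold denoising: shrink detail coefficients, reconstruct
            coeffs = pywt.wavedec(x, wavelet, level=level)
            thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(len(x)))
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        def innovations(z, q=1e-5, r=1e-2):
            # scalar Kalman filter with a random-walk state model; the innovation
            # (measurement minus prediction) serves as the endpoint feature signal
            x, p, out = z[0], 1.0, []
            for zk in z:
                p += q                    # predict
                nu = zk - x               # innovation
                out.append(nu)
                k = p / (p + r)           # Kalman gain
                x, p = x + k * nu, (1 - k) * p
            return np.array(out)

        # a sustained shift in the innovation sequence then flags the endpoint
        raw = np.r_[np.ones(200), 0.6 * np.ones(200)] + 0.05 * np.random.randn(400)
        nu = innovations(denoise(raw))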

  3. Composite faces are not (necessarily) processed coactively: A test using systems factorial technology and logical-rule models.

    PubMed

    Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R

    2018-06-01

    Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel-coactive processing-which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  5. DESIGNING ENVIRONMENTAL, ECONOMIC AND ENERGY EFFICIENT CHEMICAL PROCESSES

    EPA Science Inventory

    The design and improvement of chemical processes can be very challenging. The earlier energy conservation, process economics and environmental aspects are incorporated into the process development, the easier and less expensive it is to alter the process design. Process emissio...

  6. Reversing the conventional leather processing sequence for cleaner leather production.

    PubMed

    Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari

    2006-02-01

    Conventional leather processing generally involves a combination of single and multistep processes that employ as well as expel various biological, inorganic, and organic materials. It involves some 14-15 steps and discharges a huge amount of pollutants. This is primarily because conventional leather processing employs a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals, and the pH profiles of the process, have been judiciously used for reversing the process steps. This reversed process eventually avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process achieves a significant reduction in COD and TS of 53 and 79%, respectively. Water consumption and discharge are reduced by 65 and 64%, respectively. Also, the process benefits from significant reductions in chemicals, time, power, and cost compared to the conventional process.

  7. Group processing in an undergraduate biology course for preservice teachers: Experiences and attitudes

    NASA Astrophysics Data System (ADS)

    Schellenberger, Lauren Brownback

    Group processing is a key principle of cooperative learning in which small groups discuss their strengths and weaknesses and set group goals or norms. However, group processing has not been well-studied at the post-secondary level or from a qualitative or mixed methods perspective. This mixed methods study uses a phenomenological framework to examine the experience of group processing for students in an undergraduate biology course for preservice teachers. The effect of group processing on students' attitudes toward future group work and group processing is also examined. Additionally, this research investigated preservice teachers' plans for incorporating group processing into future lessons. Students primarily experienced group processing as a time to reflect on past performance. Also, students experienced group processing as a time to increase communication among group members and become motivated for future group assignments. Three factors directly influenced students' experiences with group processing: (1) previous experience with group work, (2) instructor interaction, and (3) gender. Survey data indicated that group processing had a slight positive effect on students' attitudes toward future group work and group processing. Participants who were interviewed felt that group processing was an important part of group work and that it had increased their group's effectiveness as well as their ability to work effectively with other people. Participants held positive views on group work prior to engaging in group processing, and group processing did not alter their attitude toward group work. Preservice teachers who were interviewed planned to use group work and a modified group processing protocol in their future classrooms. They also felt that group processing had prepared them for their future professions by modeling effective collaboration and group skills. Based on this research, a new model for group processing has been created which includes extensive instructor interaction and additional group processing sessions. This study offers a new perspective on the phenomenon of group processing and informs science educators and teacher educators on the effective implementation of this important component of small-group learning.

  8. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    and Lewis (1972) in their Berkeley Symposium paper and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres... process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a

  9. The Application of Six Sigma Methodologies to University Processes: The Use of Student Teams

    ERIC Educational Resources Information Center

    Pryor, Mildred Golden; Alexander, Christine; Taneja, Sonia; Tirumalasetty, Sowmya; Chadalavada, Deepthi

    2012-01-01

    The first student Six Sigma team (activated under a QEP Process Sub-team) evaluated the course and curriculum approval process. The goal was to streamline the process and thereby shorten process cycle time and reduce confusion about how the process works. Members of this team developed flowcharts on how the process is supposed to work (by…

  10. Impact of Radio Frequency Identification (RFID) on the Marine Corps’ Supply Process

    DTIC Science & Technology

    2006-09-01

    [Table-of-contents excerpt: "Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System"; "As-Is: The Current Process"; "V. Results"; "A. Simulation".]

  11. Global-local processing relates to spatial and verbal processing: implications for sex differences in cognition.

    PubMed

    Pletzer, Belinda; Scheuringer, Andrea; Scherndl, Thomas

    2017-09-05

    Sex differences have been reported for a variety of cognitive tasks and related to the use of different cognitive processing styles in men and women. It was recently argued that these processing styles share some characteristics across tasks, i.e., male approaches are oriented towards holistic stimulus aspects and female approaches are oriented towards stimulus details. In that respect, sex-dependent cognitive processing styles share similarities with attentional global-local processing. A direct relationship between cognitive processing and global-local processing has, however, not been previously established. In the present study, 49 men and 44 women completed a Navon paradigm and a Kimchi Palmer task as well as a navigation task and a verbal fluency task, with the goal of relating the global advantage (GA) effect, as a measure of global processing, to holistic processing styles in both tasks. Indeed, participants with larger GA effects displayed more holistic processing during spatial navigation and phonemic fluency. However, the relationship to cognitive processing styles was modulated by the specific condition of the Navon paradigm, as well as the sex of participants. Thus, different types of global-local processing play different roles for cognitive processing in men and women.

  12. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  13. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  14. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  15. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
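    In symbols (notation added here for clarity, not from the report): with g(t) a local Gaussian process, a(t) an independent random amplitude process, and m(t) an independent mean-value process, the proposed model is

        x(t) = m(t) + a(t)\, g(t),

    so its first- and second-order properties follow directly from the independence of the three component processes.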

  16. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  17. Assessment of hospital processes using a process mining technique: Outpatient process analysis at a tertiary hospital.

    PubMed

    Yoo, Sooyoung; Cho, Minsu; Kim, Eunhye; Kim, Seok; Sim, Yerim; Yoo, Donghyun; Hwang, Hee; Song, Minseok

    2016-04-01

    Many hospitals are increasing their efforts to improve processes because processes play an important role in enhancing work efficiency and reducing costs. However, to date, a quantitative tool has not been available to examine the before and after effects of processes and environmental changes, other than the use of indirect indicators, such as mortality rate and readmission rate. This study used process mining technology to analyze process changes based on changes in the hospital environment, such as the construction of a new building, and to measure the effects of environmental changes in terms of consultation wait time, time spent per task, and outpatient care processes. Using process mining technology, electronic health record (EHR) log data of outpatient care before and after constructing a new building were analyzed, and the effectiveness of the technology in terms of the process was evaluated. Using the process mining technique, we found that the total time spent in outpatient care did not increase significantly compared to that before the construction of a new building, considering that the number of outpatients increased, and the consultation wait time decreased. These results suggest that the operation of the outpatient clinic was effective after changes were implemented in the hospital environment. We further identified improvements in processes using the process mining technique, thereby demonstrating the usefulness of this technique for analyzing complex hospital processes at a low cost. This study confirmed the effectiveness of process mining technology at an actual hospital site. In future studies, the use of process mining technology will be expanded by applying this approach to a larger variety of process change situations. Copyright © 2016. Published by Elsevier Ireland Ltd.
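
    As a hedged illustration of the kind of measurement described above (not the hospital's actual pipeline), the following sketch computes consultation wait times from an EHR event log before and after an environmental change; the file name, column names, activity labels, and cutover date are all hypothetical.

```python
import pandas as pd

log = pd.read_csv("outpatient_events.csv", parse_dates=["timestamp"])

def wait_times(df, start="registration", end="consultation_start"):
    """Wait time per visit: first consultation start minus registration."""
    per_case = df.pivot_table(index="case_id", columns="activity",
                              values="timestamp", aggfunc="min")
    return (per_case[end] - per_case[start]).dt.total_seconds() / 60.0

cutover = "2013-05-01"  # hypothetical date the new building opened
before = wait_times(log[log["timestamp"] < cutover])
after = wait_times(log[log["timestamp"] >= cutover])
print(f"median wait before: {before.median():.1f} min, "
      f"after: {after.median():.1f} min")
```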

  18. Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1974-01-01

    A two year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with a manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls for the critical process variables to improve results and uniformity. A critical process variable associated with the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified with the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. Positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.

  19. Monitoring autocorrelated process: A geometric Brownian motion process approach

    NASA Astrophysics Data System (ADS)

    Li, Lee Siaw; Djauhari, Maman A.

    2013-09-01

    Autocorrelated process control is common in today's industrial process control practice. The current practice of autocorrelated process control is to eliminate the autocorrelation by using an appropriate model, such as a Box-Jenkins model, and then to conduct the process control operation based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. In this case, by using the properties of a GBM process, we only need an appropriate transformation, and to model the transformed data, to satisfy the conditions required by traditional process control. An industrial example of a cocoa powder production process in a Malaysian company will be presented and discussed to illustrate the advantages of the GBM approach.
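
    A minimal sketch of the transformation idea, under the assumption stated above: if a series follows a geometric Brownian motion, its log-returns are i.i.d. normal, so an ordinary Shewhart individuals chart applies to the transformed data. The simulated path and 3-sigma limits are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.001, 0.02, 500
# GBM path: S_t = S_0 * exp(cumulative sum of normal increments).
s = 100 * np.exp(np.cumsum(rng.normal(mu - 0.5 * sigma**2, sigma, n)))

r = np.diff(np.log(s))                    # log-returns: i.i.d. normal under GBM
center, spread = r.mean(), r.std(ddof=1)
ucl, lcl = center + 3 * spread, center - 3 * spread  # traditional control limits

out_of_control = np.flatnonzero((r > ucl) | (r < lcl))
print(f"{len(out_of_control)} points outside the control limits")
```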

  20. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

    A large-scale combustion process is complex and poses challenges for optimizing its performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operating regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science which finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance. However, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process involves two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a high-fidelity process model is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation sets out to solve these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, where a process model, or process knowledge, is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
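
    A generic sketch of the data-driven scheme just described: learn a process model from historical data, then search its input space with a simple evolutionary loop. The data, variable bounds, and model choice are illustrative assumptions, not the dissertation's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(2000, 4))    # historical operating settings
# Stand-in efficiency measurements peaking at 0.6 on every variable.
y = -np.sum((X - 0.6) ** 2, axis=1) + rng.normal(0, 0.01, 2000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Simple (mu + lambda) evolution strategy searching the learned model.
pop = rng.uniform(0, 1, size=(20, 4))
for _ in range(50):
    parents = pop[rng.integers(0, 20, 60)]
    children = np.clip(parents + rng.normal(0, 0.05, (60, 4)), 0, 1)
    union = np.vstack([pop, children])
    pop = union[np.argsort(model.predict(union))[-20:]]  # keep the 20 best

print("predicted optimal operating point:", np.round(pop[-1], 2))
```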

  1. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  2. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  3. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

    The capabilities and complexity of manufacturing systems are increasing, driving the move toward an integrated manufacturing environment. Availability of alternative process plans is a key factor for the integration of design, process planning, and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts. The algorithm is therefore applicable to very large process plan networks, and it can also search wide areas of the network based on user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.
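
    The lazy-expansion idea can be sketched as a best-first search whose frontier is pruned to a beam, so the whole network never has to exist in memory before searching begins; the node structure, costs, and successor function below are hypothetical stand-ins for the paper's process plan network.

```python
import heapq
from itertools import count

def best_first_plans(start, successors, is_goal, max_plans=5, beam=50):
    """Lazily expand a plan network; return up to max_plans low-cost plans."""
    tie = count()  # tie-breaker so heap entries never compare states
    frontier = [(0.0, next(tie), start, [start])]
    plans = []
    while frontier and len(plans) < max_plans:
        cost, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            plans.append((cost, path))
            continue
        for step_cost, nxt in successors(state):  # generated on demand
            heapq.heappush(frontier,
                           (cost + step_cost, next(tie), nxt, path + [nxt]))
        if len(frontier) > beam:                  # incomplete search: prune
            frontier = heapq.nsmallest(beam, frontier)
    return plans

# Toy usage over a hypothetical machining graph.
graph = {"raw": [(1.0, "milled"), (2.0, "turned")],
         "milled": [(0.5, "done")], "turned": [(0.3, "done")], "done": []}
print(best_first_plans("raw", lambda s: graph[s], lambda s: s == "done"))
```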

  4. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools.

    PubMed

    O'Callaghan, Sean; De Souza, David P; Isaac, Andrew; Wang, Qiao; Hodkinson, Luke; Olshansky, Moshe; Erwin, Tim; Appelbe, Bill; Tull, Dedreia L; Roessner, Ute; Bacic, Antony; McConville, Malcolm J; Likić, Vladimir A

    2012-05-30

    Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios simple to implement in PyMS, yet difficult to achieve with many conventional GC-MS data processing packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface.
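
    PyMS's own API is not reproduced here; instead, the following generic sketch illustrates two steps of the kind of pipeline such tools script, noise smoothing and peak detection on a single ion chromatogram, using SciPy on a synthetic stand-in signal.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 6000)  # retention time (minutes)
signal = sum(h * np.exp(-0.5 * ((t - c) / 0.15) ** 2)
             for c, h in [(10, 5.0), (24, 3.0), (41, 7.0)])
noisy = signal + rng.normal(0, 0.05, t.size)

smoothed = savgol_filter(noisy, window_length=31, polyorder=3)  # noise smoothing
baseline = np.percentile(smoothed, 10)                          # crude baseline
peaks, props = find_peaks(smoothed - baseline, height=0.5, distance=100)

for idx, h in zip(peaks, props["peak_heights"]):
    print(f"peak at {t[idx]:.2f} min, height {h:.2f}")
```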

  5. Processing mode during repetitive thinking in socially anxious individuals: evidence for a maladaptive experiential mode.

    PubMed

    Wong, Quincy J J; Moulds, Michelle L

    2012-12-01

    Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  7. Modeling interdependencies between business and communication processes in hospitals.

    PubMed

    Brigl, Birgit; Wendt, Thomas; Winter, Alfred

    2003-01-01

    The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information processing tools. Therefore, we present an approach which facilitates the representation and analysis of business processes and the resulting communication processes between application components, as well as their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information processing infrastructure which hinder the smooth implementation of the business processes.

  8. Life cycle analysis within pharmaceutical process optimization and intensification: case study of active pharmaceutical ingredient production.

    PubMed

    Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick

    2014-12-01

    As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

    In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and we developed the outline of a process option that included a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz's CA18HJ process, using standard and circular-enclosed transistors.

  10. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  11. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  12. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  13. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  14. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  15. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  16. 20 CFR 405.725 - Effect of expedited appeals process agreement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PROCESS FOR ADJUDICATING INITIAL DISABILITY CLAIMS Expedited Appeals Process for Constitutional Issues § 405.725 Effect of expedited appeals process agreement. After an expedited appeals process agreement is... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Effect of expedited appeals process agreement...

  17. Common and distinct networks for self-referential and social stimulus processing in the human brain.

    PubMed

    Herold, Dorrit; Spengler, Stephanie; Sajonz, Bastian; Usnich, Tatiana; Bermpohl, Felix

    2016-09-01

    Self-referential processing is a complex cognitive function, involving a set of implicit and explicit processes, complicating investigation of its distinct neural signature. The present study explores the functional overlap and dissociability of self-referential and social stimulus processing. We combined an established paradigm for explicit self-referential processing with an implicit social stimulus processing paradigm in one fMRI experiment to determine the neural effects of self-relatedness and social processing within one study. Overlapping activations were found in the orbitofrontal cortex and in the intermediate part of the precuneus. Stimuli judged as self-referential specifically activated the posterior cingulate cortex, the ventral medial prefrontal cortex, extending into anterior cingulate cortex and orbitofrontal cortex, the dorsal medial prefrontal cortex, the ventral and dorsal lateral prefrontal cortex, the left inferior temporal gyrus, and occipital cortex. Social processing specifically involved the posterior precuneus and bilateral temporo-parietal junction. Taken together, our data show, first, common networks for both processes in the medial prefrontal and the medial parietal cortex and, second, functional differentiations between self-referential and social processing: an anterior-posterior gradient for social versus self-referential processing within the medial parietal cortex, with specific activations for self-referential processing in the medial and lateral prefrontal cortex and for social processing in the temporo-parietal junction.

  18. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal of a business process and to identify areas for improvement. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  19. Use of Analogies in the Study of Diffusion

    ERIC Educational Resources Information Center

    Letic, Milorad

    2014-01-01

    Emergent processes, such as diffusion, are considered more difficult to understand than direct processes. In physiology, most processes are presented as direct processes, so emergent processes, when encountered, are even more difficult to understand. It has been suggested that, when studying diffusion, misconceptions about random processes are the…

  20. Is Analytic Information Processing a Feature of Expertise in Medicine?

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.

    2008-01-01

    Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…

  1. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  2. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  3. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  4. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  5. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  6. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  7. Articulating the Resources for Business Process Analysis and Design

    ERIC Educational Resources Information Center

    Jin, Yulong

    2012-01-01

    Effective process analysis and modeling are important phases of the business process management lifecycle. When many activities and multiple resources are involved, it is very difficult to build a correct business process specification. This dissertation provides a resource perspective of business processes. It aims at a better process analysis…

  8. An Integrated Model of Emotion Processes and Cognition in Social Information Processing.

    ERIC Educational Resources Information Center

    Lemerise, Elizabeth A.; Arsenio, William F.

    2000-01-01

    Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…

  9. Data Processing and First Products from the Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station

    DTIC Science & Technology

    2010-04-01

    NRL Stennis Space Center (NRL-SSC) for further processing using the NRL SSC Automated Processing System (APS). APS was developed for processing...have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch includes several

  10. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the

  11. Management of processes of electrochemical dimensional processing

    NASA Astrophysics Data System (ADS)

    Akhmetov, I. D.; Zakirova, A. R.; Sadykov, Z. B.

    2017-09-01

    In many industries, high-precision parts are produced from scarce, hard-to-machine materials. Such parts can be formed only by non-contact processing, or processing with minimal force, which is achievable by the use, for example, of electrochemical machining. At the present stage of development of metal-working processes, the management and automation of electrochemical machining are important issues. This article presents some indicators and factors of the electrochemical machining process.

  12. The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview

    DTIC Science & Technology

    2010-01-20

    backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data... Automated Processing System (APS) NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods...used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of

  13. [Study on culture and philosophy of processing of traditional Chinese medicines].

    PubMed

    Yang, Ming; Zhang, Ding-Kun; Zhong, Ling-Yun; Wang, Fang

    2013-07-01

    From the perspective of cultural views and philosophical thought, this paper studies the cultural origin, thinking modes, core principles, and general regulations and methods of processing; traces the culture and history of processing, including its generation and deduction, its accumulation of experience and its promotion, and its core value; and summarizes the basic principles of processing, which are guided by holistic, objective, dynamic, balanced and appropriate thinking. The aim is to propagate the cultural characteristics and philosophical wisdom of traditional Chinese medicine processing, to promote the inheritance and development of processing, and to ensure the maximum therapeutic value of Chinese medicine in clinical use.

  14. Containerless automated processing of intermetallic compounds and composites

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.

    1993-01-01

    An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.

  15. A continuous process for the development of Kodak Aerochrome Infrared Film 2443 as a negative

    NASA Astrophysics Data System (ADS)

    Klimes, D.; Ross, D. I.

    1993-02-01

    A process for the continuous dry-to-dry development of Kodak Aerochrome Infrared Film 2443 as a negative (CIR-neg) is described. The process is well suited for production processing of long film lengths. Chemicals from three commercial film processes are used with modifications. Sensitometric procedures are recommended for the monitoring of processing quality control. Sensitometric data and operational aerial exposures indicate that films developed in this process have approximately the same effective aerial film speed as films processed in the reversal process recommended by the manufacturer (Kodak EA-5). The CIR-neg process is useful when aerial photography is acquired for resources management applications which require print reproductions. Originals can be readily reproduced using conventional production equipment (electronic dodging) in black and white or color (color compensation).

  16. Antibiotics with anaerobic ammonium oxidation in urban wastewater treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

    The biofilter process is an aerobic wastewater treatment process that combines biological oxidation with design ideas borrowed from rapid water filters, integrating filtration, adsorption, and biological purification. Using an engineering example, we show that the process is well suited to treating low-concentration sewage and industrial wastewater. The anaerobic ammonium oxidation (anammox) process, owing to its high efficiency and low consumption, has broad application prospects in biological nitrogen removal from wastewater, and its practical application has become a research hotspot at home and abroad. This paper reviews the habitats and species diversity of anammox bacteria and the diverse forms of the anammox process, compares the operating conditions of single-stage and two-stage configurations, and focuses on laboratory research and engineering applications of anammox technology to various types of wastewater, including sludge digestion filtrate, landfill leachate, aquaculture wastewater, monosodium glutamate wastewater, sewage, fecal sewage, and high-salinity wastewater, together with their characteristics, research progress, and obstacles to application. Finally, we summarize potential problems of the anammox process in treating actual wastewater, and propose that future research focus on in-depth study of the water-quality factors that hinder anammox and their regulation, and, on this basis, vigorously develop combined process optimization.

  17. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

    History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or to aging processes such as fragmentation processes. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
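
    The core mechanism is easy to reproduce numerically: start at state N, repeatedly jump to a uniformly chosen lower state until reaching 1, restart, and tally visits. The visit frequencies then follow Zipf's law, p(i) ~ 1/i. The values of N and the number of runs below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, runs = 1000, 20_000
visits = np.zeros(N + 1)

for _ in range(runs):
    state = N
    visits[state] += 1
    while state > 1:
        state = rng.integers(1, state)  # sample space shrinks at every step
        visits[state] += 1

freq = visits[1:] / visits[1:].sum()
freq = np.sort(freq)[::-1]              # order outcomes by rank
# Rank-frequency exponent via log-log regression; pure SSR predicts -1.
ranks = np.arange(1, N + 1)
slope = np.polyfit(np.log(ranks), np.log(freq + 1e-12), 1)[0]
print(f"fitted exponent: {slope:.2f} (Zipf's law predicts -1)")
```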

  18. Effects of Processing Parameters on the Forming Quality of C-Shaped Thermosetting Composite Laminates in Hot Diaphragm Forming Process

    NASA Astrophysics Data System (ADS)

    Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.

    2013-10-01

    In this study, the effects of processing temperature and vacuum applying rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum applying rate, which obviously promotes prepreg interlaminar slipping. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. Autoclave processing is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.

  19. Assessment of Advanced Coal Gasification Processes

    NASA Technical Reports Server (NTRS)

    McCarthy, John; Ferrall, Joseph; Charng, Thomas; Houseman, John

    1981-01-01

    This report represents a technical assessment of the following advanced coal gasification processes: AVCO High Throughput Gasification (HTG) Process; Bell Single-Stage High Mass Flux (HMF) Process; Cities Service/Rockwell (CS/R) Hydrogasification Process; Exxon Catalytic Coal Gasification (CCG) Process. Each process is evaluated for its potential to produce SNG from a bituminous coal. In addition to identifying the new technology these processes represent, key similarities/differences, strengths/weaknesses, and potential improvements to each process are identified. The AVCO HTG and the Bell HMF gasifiers share similarities with respect to: short residence time (SRT), high throughput rate, slagging and syngas as the initial raw product gas. The CS/R Hydrogasifier is also SRT but is non-slagging and produces a raw gas high in methane content. The Exxon CCG gasifier is a long residence time, catalytic, fluidbed reactor producing all of the raw product methane in the gasifier. The report makes the following assessments: 1) while each process has significant potential as coal gasifiers, the CS/R and Exxon processes are better suited for SNG production; 2) the Exxon process is the closest to a commercial level for near-term SNG production; and 3) the SRT processes require significant development including scale-up and turndown demonstration, char processing and/or utilization demonstration, and reactor control and safety features development.

  20. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the life-cycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
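
    A hedged sketch of the Monte Carlo idea behind an IPM: each unit operation maps the incoming quality attribute and its own process parameters to an outgoing attribute, and sampling parameter variation through the chained operations yields a distribution of the final CQA and an estimated out-of-specification rate. The unit-operation models, parameter distributions, and specification limits below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

def fermentation(cqa, temp):      # toy model of the first unit operation
    return cqa + 0.8 * (temp - 37.0)

def chromatography(cqa, ph):      # toy model of the second unit operation
    return 0.9 * cqa - 0.5 * (ph - 7.2)

temp = rng.normal(37.0, 0.3, n)   # sampled process-parameter variation
ph = rng.normal(7.2, 0.05, n)

cqa = np.full(n, 10.0)            # nominal incoming quality attribute
cqa = chromatography(fermentation(cqa, temp), ph)

lsl, usl = 8.5, 9.5               # specification limits on the final CQA
oos = np.mean((cqa < lsl) | (cqa > usl))
print(f"estimated OOS probability: {oos:.4%}")
```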

  1. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  2. PROCESSING ALTERNATIVES FOR DESTRUCTION OF TETRAPHENYLBORATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D; Thomas Peters, T; Samuel Fink, S

    Two processes were chosen in the 1980's at the Savannah River Site (SRS) to decontaminate the soluble High Level Waste (HLW). The In Tank Precipitation (ITP) process (1,2) was developed at SRS for the removal of radioactive cesium and actinides from the soluble HLW. Sodium tetraphenylborate was added to the waste to precipitate cesium and monosodium titanate (MST) was added to adsorb actinides, primarily uranium and plutonium. Two products of this process were a low activity waste stream and a concentrated organic stream containing cesium tetraphenylborate and actinides adsorbed on monosodium titanate (MST). A copper catalyzed acid hydrolysis process was built to process (3, 4) the Tank 48H cesium tetraphenylborate waste in the SRS's Defense Waste Processing Facility (DWPF). Operation of the DWPF would have resulted in the production of benzene for incineration in SRS's Consolidated Incineration Facility. This process was abandoned together with the ITP process in 1998 due to high benzene in ITP caused by decomposition of excess sodium tetraphenylborate. Processing in ITP resulted in the production of approximately 1.0 million liters of HLW. SRS has chosen a solvent extraction process combined with adsorption of the actinides to decontaminate the soluble HLW stream (5). However, the waste in Tank 48H is incompatible with existing waste processing facilities. As a result, a processing facility is needed to disposition the HLW in Tank 48H. This paper will describe the process for searching for processing options by SRS task teams for the disposition of the waste in Tank 48H. In addition, attempts to develop a caustic hydrolysis process for in tank destruction of tetraphenylborate will be presented. Lastly, the development of both a caustic and acidic copper catalyzed peroxide oxidation process will be discussed.

  3. Manufacturing Process Selection of Composite Bicycle’s Crank Arm using Analytical Hierarchy Process (AHP)

    NASA Astrophysics Data System (ADS)

    Luqman, M.; Rosli, M. U.; Khor, C. Y.; Zambree, Shayfull; Jahidi, H.

    2018-03-01

    The crank arm is one of the important parts of a bicycle, and it is an expensive product due to the high cost of material and the production process. This research aims to investigate potential manufacturing processes for fabricating a composite bicycle crank arm and to describe an approach based on the analytical hierarchy process (AHP) that assists decision makers or manufacturing engineers in determining the most suitable process to be employed in manufacturing the composite bicycle crank arm at the early stage of the product development process, so as to reduce the production cost. Four types of processes were considered, namely resin transfer molding (RTM), compression molding (CM), vacuum bag molding and filament winding (FW). The analysis ranks these four processes for their suitability in the manufacturing of the bicycle crank arm based on five main selection factors and 10 sub-factors. The determination of the right manufacturing process was performed following the AHP steps. A consistency test was performed to ensure that the judgements made during the comparisons were consistent. The results indicated that compression molding was the most appropriate manufacturing process because it has the highest priority value (33.6%) among the four manufacturing processes.
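
    The AHP computation behind such a ranking can be sketched briefly: priority weights are the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgements. The matrix below is an illustrative example, not the study's actual comparisons.

```python
import numpy as np

# Pairwise comparisons of four processes (RTM, CM, vacuum bag, FW)
# on a single selection factor; entries are Saaty-scale judgements.
A = np.array([[1.0, 1/2, 2.0, 3.0],
              [2.0, 1.0, 3.0, 4.0],
              [1/2, 1/3, 1.0, 2.0],
              [1/3, 1/4, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights of the four processes

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
ri = 0.90                             # Saaty's random index for n = 4
print("weights:", np.round(w, 3), " CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```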

  4. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at the end of the process despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but they play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
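
    The two phases can be sketched for a toy chain: backward dynamic programming first stores the best control for every discretized state a predecessor might hand over, and at run time the stored policy is simply looked up for the measured state. The transformation functions, discretization, and cost are illustrative assumptions.

```python
import numpy as np

states = np.linspace(0.0, 1.0, 101)       # discretized product state
controls = np.linspace(-0.2, 0.2, 21)
processes = [lambda s, u: np.clip(0.9 * s + u, 0, 1),   # toy transformations
             lambda s, u: np.clip(s + 0.5 * u, 0, 1)]
target = 0.8

# Phase 1 (offline): backward dynamic programming over the chain.
value = (states - target) ** 2            # terminal cost after the last process
policy = []
for f in reversed(processes):
    q = np.empty((states.size, controls.size))
    for j, u in enumerate(controls):
        q[:, j] = np.interp(f(states, u), states, value)  # cost-to-go
    policy.append(controls[np.argmin(q, axis=1)])
    value = q.min(axis=1)
policy.reverse()                          # policy[k]: state index -> control

# Phase 2 (run time): look up the optimal control for the observed state.
s = 0.35
for k, f in enumerate(processes):
    s = f(s, policy[k][np.argmin(np.abs(states - s))])
print(f"final state: {s:.3f} (target {target})")
```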

  5. Quantitative analysis of geomorphic processes using satellite image data at different scales

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, spans less than 200 m in the x or y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration, in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.

  6. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experiments can be validated against each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and, with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  7. Microstructure and Texture of Al-2.5wt.%Mg Processed by Combining Accumulative Roll Bonding and Conventional Rolling

    NASA Astrophysics Data System (ADS)

    Gatti, J. R.; Bhattacharjee, P. P.

    2014-12-01

    Evolution of microstructure and texture during severe deformation and annealing was studied in an Al-2.5%Mg alloy processed by two different routes, namely monotonic Accumulative Roll Bonding (ARB) and a hybrid route combining ARB and conventional rolling (CR). For this purpose, Al-2.5%Mg sheets were subjected to 5 cycles of monotonic ARB processing (equivalent strain (ɛeq) = 4.0), while in the hybrid route (ARB + CR) 3-cycle ARB-processed sheets were further deformed by conventional rolling to 75% reduction in thickness (ɛeq = 4.0). Although formation of an ultrafine structure was observed for both processing routes, the monotonic ARB-processed material showed a finer microstructure but weaker texture compared to the ARB + CR-processed material. After complete recrystallization, the ARB + CR-processed material showed a weak cube texture ({001}<100>), whereas the cube component was almost negligible in the monotonic ARB-processed material. However, the ND-rotated cube components were stronger in the monotonic ARB-processed material. The observed differences in microstructure and texture evolution during deformation and annealing could be explained by the characteristic differences between the two processing routes.

  8. Process Materialization Using Templates and Rules to Design Flexible Process Models

    NASA Astrophysics Data System (ADS)

    Kumar, Akhil; Yao, Wen

    The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
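
    The template-plus-rules idea above can be illustrated with a toy materialization step. The paper expresses its rules in a logic-based language such as Prolog; the Python sketch below, with invented rule and template structures, is only an analogy of the approach, not the paper's algorithm.

        # Hedged sketch of template + rule materialization (illustrative only;
        # the paper expresses rules in a logic language such as Prolog).

        generic_template = ["receive_order", "credit_check", "manual_review",
                            "pack_items", "ship_order"]

        # Each rule inspects the case data and decides whether its task applies.
        rules = {
            # Manual review only for large orders (hypothetical policy).
            "manual_review": lambda case: case["amount"] > 1000,
            # Credit check only for new customers (hypothetical policy).
            "credit_check": lambda case: case["new_customer"],
        }

        def materialize(template, rules, case):
            """Instantiate a process for one case by applying rules to case data."""
            return [task for task in template
                    if rules.get(task, lambda c: True)(case)]

        case = {"amount": 250, "new_customer": False}
        print(materialize(generic_template, rules, case))
        # -> ['receive_order', 'pack_items', 'ship_order']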

  9. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, usually a 1:1 conversion of existing graphics is proposed. This paper suggests another approach, efficiently leading to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  10. A novel processed food classification system applied to Australian food composition databases.

    PubMed

    O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A

    2017-08-01

    The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as to assess the variation in the levels of processing within food groups. A processed food classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) items by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007 and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
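
    The proportion calculation described above amounts to simple category counting over the database. A minimal sketch follows; the records and field names are invented for illustration and are not AUSNUT data.

        # Hedged sketch: proportion of each processing category in a food database.
        from collections import Counter

        # Hypothetical records; real AUSNUT items carry many more fields.
        foods = [
            {"name": "rolled oats", "category": "MP"},
            {"name": "olive oil",   "category": "PCI"},
            {"name": "cheddar",     "category": "P"},
            {"name": "cola",        "category": "ULP"},
            {"name": "apple",       "category": "MP"},
        ]

        counts = Counter(item["category"] for item in foods)
        total = sum(counts.values())
        for cat in ("MP", "PCI", "P", "ULP"):
            print(f"{cat}: {100 * counts.get(cat, 0) / total:.0f}%")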

  11. Process and domain specificity in regions engaged for face processing: an fMRI study of perceptual differentiation.

    PubMed

    Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E

    2012-12-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.

  12. Process- and Domain-Specificity in Regions Engaged for Face Processing: An fMRI Study of Perceptual Differentiation

    PubMed Central

    Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.

    2015-01-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402

  13. Achieving Continuous Manufacturing for Final Dosage Formation: Challenges and How to Meet Them May 20-21 2014 Continuous Manufacturing Symposium.

    PubMed

    Byrn, Stephen; Futran, Maricio; Thomas, Hayden; Jayjock, Eric; Maron, Nicola; Meyer, Robert F; Myerson, Allan S; Thien, Michael P; Trout, Bernhardt L

    2015-03-01

    We describe the key issues and possibilities for continuous final dosage formation, otherwise known as downstream processing or drug product manufacturing. A distinction is made between heterogeneous processing and homogeneous processing, the latter of which is expected to add more value to continuous manufacturing. We also give the key motivations for moving to continuous manufacturing, some of the exciting new technologies, and the barriers to implementation of continuous manufacturing. Continuous processing of heterogeneous blends is the natural first step in converting existing batch processes to continuous. In heterogeneous processing there are discrete particles that can segregate, whereas in homogeneous processing components are blended and homogenized such that they do not segregate. Heterogeneous processing can incorporate technologies that are closer to existing technologies, whereas homogeneous processing necessitates the development and incorporation of new technologies. Homogeneous processing has the greatest potential for reaping the full rewards of continuous manufacturing, but it takes long-term vision and a more significant change in process development than heterogeneous processing. Heterogeneous processing has the detriment that, as the technologies are adopted rather than developed, there is a strong tendency to incorporate correction steps, what we call below "The Rube Goldberg Problem." Thus, although heterogeneous processing will likely play a major role in the near-term transformation from batch to continuous processing, it is expected that homogeneous processing is the next step that will follow. Specific action items for industry leaders are also given. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  14. Smart plants, smart models? On adaptive responses in vegetation-soil systems

    NASA Astrophysics Data System (ADS)

    van der Ploeg, Martine; Teuling, Ryan; van Dam, Nicole; de Rooij, Gerrit

    2015-04-01

    Hydrological models that will be able to cope with future precipitation and evapotranspiration regimes need a solid base describing the essence of the processes involved [1]. The essence of emerging patterns at large scales often originates from micro-behaviour in the soil-vegetation-atmosphere system. A complicating factor in capturing this behaviour is the constant interaction between vegetation and geology, in which water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. To assess root water uptake by plants in a changing soil environment, a direct indication of the amount of energy required by plants to take up water can be obtained by measuring the soil water potential in the vicinity of roots with polymer tensiometers [2]. In a lysimeter experiment with various levels of imposed water stress, the polymer tensiometer data suggest that maize roots regulate their water uptake based on the derivative of the soil water retention curve rather than on the amount of moisture alone. As a result of environmental changes, vegetation may wither and die, or these changes may instead trigger gene adaptation. Constant exposure to environmental stresses, biotic or abiotic, influences plant physiology, gene adaptations, and flexibility in gene adaptation [3-7]. To investigate a possible relation between plant genotype, the plant stress hormone abscisic acid (ABA) and the soil water potential, a proof-of-principle experiment was set up with Solanum dulcamara plants. The results showed a significant difference in ABA response between genotypes from a dry and a wet environment, and this response was also reflected in the root water uptake. Adaptive responses may have consequences for the way species are currently treated in models (from single plant to global scale). In particular, model parameters that control root water uptake and plant transpiration are generally assumed to be a property of the plant functional type. Assigning plant functional types does not allow local plant adaptation to be reflected in the model parameters, nor does it allow for correlations that might exist between root parameters and soil type. [1] Seibert, J. 2000. Multi-criteria calibration of a conceptual runoff model using a genetic algorithm. Hydrology and Earth System Sciences 4(2): 215-224. [2] Van der Ploeg, M.J., H.P.A. Gooren, G. Bakker, C.W. Hoogendam, C. Huiskes, L.K. Koopal, H. Kruidhof and G.H. de Rooij. 2010. Polymer tensiometers with ceramic cones: performance in drying soils and comparison with water-filled tensiometers and time domain reflectometry. Hydrol. Earth Syst. Sci. 14: 1787-1799, doi: 10.5194/hess-14-1787-2010. [3] McClintock B. The significance of responses of the genome to challenge. Science 1984; 226: 792-801. [4] Ries G, Heller W, Puchta H, Sandermann H, Seidlitz HK, Hohn B. Elevated UV-B radiation reduces genome stability in plants. Nature 2000; 406: 98-101. [5] Lucht JM, Mauch-Mani B, Steiner H-Y, Metraux J-P, Ryals J, Hohn B. Pathogen stress increases somatic recombination frequency in Arabidopsis. Nature Genet. 2002; 30: 311-314. [6] Kovalchuk I, Kovalchuk O, Kalck V, Boyko V, Filkowski J, Heinlein M, Hohn B. Pathogen-induced systemic plant signal triggers DNA rearrangements. Nature 2003; 423: 760-762. [7] Cullis CA. Mechanisms and control of rapid genomic changes in flax. Ann. Bot. (Lond.) 2005; 95: 201-206.
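
    The claim that roots respond to the slope of the retention curve, rather than to moisture content alone, can be made concrete with a standard retention model. The sketch below uses the van Genuchten form purely as an assumed example (the abstract does not name a model), with illustrative parameter values.

        # Hedged sketch: van Genuchten retention curve and its slope d(theta)/dh.
        # Model choice and parameter values are illustrative assumptions; the
        # abstract only states that uptake tracked the derivative of the curve.
        import numpy as np

        theta_r, theta_s = 0.05, 0.40   # residual / saturated water content (-)
        alpha, n = 0.02, 1.8            # van Genuchten shape parameters (1/cm, -)
        m = 1.0 - 1.0 / n

        def theta(h):
            """Water content at suction head h (cm, positive)."""
            return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

        def dtheta_dh(h):
            """Slope of the retention curve at suction h (analytical derivative)."""
            u = (alpha * h) ** n
            return -(theta_s - theta_r) * m * n * u / h * (1.0 + u) ** (-m - 1.0)

        h = np.array([10.0, 100.0, 1000.0])       # wet -> dry
        print(theta(h), dtheta_dh(h))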

  15. Connectivity research in Iceland - using scientific tools to establish sustainable water management strategies

    NASA Astrophysics Data System (ADS)

    Finger, David

    2015-04-01

    Since the ninth century, when the first settlers arrived in Iceland, the island has undergone deforestation and subsequent vegetation degradation and soil erosion. Almost the entire birch forest and woodland, which originally covered ~25% of the nation, has been deforested through wood cutting and overgrazing. Consequently, soil erosion seriously affects over 40% of the country. During the last 50 years, extensive drainage of wetlands has taken place. Furthermore, about 75% of Iceland's electricity production comes from hydropower plants constructed along the main rivers. Along with seismic and volcanic activity, the above-mentioned anthropogenic impacts have continuously altered the hydro-geomorphic connectivity in many parts of the island. In the framework of ongoing efforts to restore ecosystems and their services in Iceland, a thorough understanding of the hydro-geomorphic processes is essential. Field observations and numerical models are crucial tools for adopting appropriate management strategies and helping decision makers establish sustainable governance strategies. Sediment transport models have been used in the past to investigate the impacts of hydropower dams on sediment transport in downstream rivers (Finger et al., 2006). Hydropower operations alter the turbidity dynamics in downstream freshwater systems, affecting visibility and light penetration into the water and leading to significant changes in primary production (Finger et al., 2007a). Overall, the interruption of connectivity by physical obstructions can affect the entire food chain, hampering fishing yields in downstream waters (Finger et al., 2007b). In other locations, hydraulic connectivity through retreating glaciers assures water transfer from upstream to downstream areas. The drastic retreat of glaciers raises concerns about future water availability in remote mountain areas (Finger et al., 2013). Furthermore, the drastic reduction of glacier mass also jeopardizes the water availability for hydropower production (Finger et al., 2012). All these factors reveal the importance of a thorough understanding of hydro-geomorphic connectivity for adopting adequate water management strategies. The presentation will conclude by outlining how the above-presented methods can be applied to Icelandic study sites to help water managers and policy makers adopt resilience-based policies regarding the challenges of future climate change impacts. References: Finger, D., M. Schmid, and A. Wüest (2006), Effects of upstream hydropower operation on riverine particle transport and turbidity in downstream lakes, Water Resour. Res., 42(8), doi: 10.1029/2005wr004751. Finger, D., P. Bossard, M. Schmid, L. Jaun, B. Müller, D. Steiner, E. Schaffer, M. Zeh, and A. Wüest (2007a), Effects of alpine hydropower operations on primary production in a downstream lake, Aquatic Sciences, 69(2), 240-256, doi: 10.1007/s00027-007-0873-6. Finger, D., M. Schmid, and A. Wüest (2007b), Comparing effects of oligotrophication and upstream hydropower dams on plankton and productivity in perialpine lakes, Water Resour. Res., 43(12), W12404, doi: 10.1029/2007WR005868. Finger, D., G. Heinrich, A. Gobiet, and A. Bauder (2012), Projections of future water resources and their uncertainty in a glacierized catchment in the Swiss Alps and the subsequent effects on hydropower production during the 21st century, Water Resour. Res., 48, doi: 10.1029/2011wr010733, W02521. Finger, D., A. Hugentobler, M. Huss, A. Voinesco, H. R. Wernli, D. Fischer, E. Weber, P.-Y. Jeannin, M. Kauzlaric, A. Wirz, T. Vennemann, F. Hüsler, B. Schädler, and R. Weingartner (2013), Identification of glacial melt water runoff in a karstic environment and its implication for present and future water availability, Hydrol. Earth Syst. Sci., 17, 3261-3277, doi: 10.5194/hess-17-3261-2013.

  16. The effect of plant water stress approach on the modelled energy-, water and carbon balance for Mediterranean vegetation; implications for (agro)meteorological applications.

    NASA Astrophysics Data System (ADS)

    Verhoef, Anne; Egea, Gregorio; Garrigues, Sebastien; Vidale, Pier Luigi; Balan Sarojini, Beena

    2017-04-01

    Current land surface schemes in many crop, weather and climate models make use of coupled photosynthesis-stomatal conductance (A-gs) models of plant function to determine the transpiration flux and gross primary productivity. Vegetation exchange is controlled by many environmental factors, and soil moisture control on root water uptake and stomatal function is a primary pathway for feedbacks in sub-tropical to temperate ecosystems. Representations of this process of soil moisture control on plant function (often referred to as a 'beta' factor) vary among models. This matters because the simulated energy, water and carbon balances are very sensitive to the representation of water stress in these models. Building on Egea et al. (2011) and Verhoef and Egea (2014), we tested a range of 'beta' approaches in a leaf-level A-gs model (compatible with models such as JULES, CHTESSEL, ISBA and CLM), as well as some beta approaches borrowed from the agronomic and plant physiological communities (a combined soil-plant hydraulic approach; see Verhoef and Egea, 2014). Root-zone soil moisture was allowed to limit plant function via individual routes (CO2 assimilation, stomatal conductance, or mesophyll conductance) as well as combinations of these. The simulations were conducted for a typical Mediterranean field site (Avignon, France; Garrigues et al., 2015), which provides 14 years of near-continuous measurements of soil moisture and atmospheric driving data. Daytime (8-16 h local time) data from April to September were used. This allowed a broad range of atmospheric and soil moisture/vegetation states to be explored. A number of crops and tree types were investigated in this way. We evaluated the effect of the choice of beta function for Mediterranean climates in relation to stomatal conductance, transpiration, photosynthesis, and leaf surface temperature. We also studied the implications for a range of widely used agro-/micro-meteorological indicators, such as the Bowen ratio and the omega decoupling coefficient (which quantifies the degree of aerodynamic coupling between a vegetated surface and the atmospheric boundary layer; Jacobs and de Bruin, 1992), and applications (e.g. the use of surface-temperature-based water stress indices). Results showed that the choice of beta function has far-reaching consequences. For certain widely used beta models, the predicted key fluxes and state variables, predominantly compared using kernel density functions, showed considerable 'clumping' around narrow data ranges. This will have implications for the strength of land-surface feedback predicted by these models, and for any agrometeorological applications they are used for. Recommendations as to the most suitable beta functions, and related parameter sets, for Mediterranean climates were made. References: Garrigues, S. et al. (2015) Evaluation of land surface model simulations of evapotranspiration over a 12-year crop succession: impact of soil hydraulic and vegetation properties, Hydrol. Earth Syst. Sci., 19, 3109-3131; Jacobs, C. M. J. and de Bruin, H. A. R. (1992) The sensitivity of regional transpiration to land-surface characteristics: Significance of feedback, J. Climate, 5(7), 683-698; Verhoef, A. and Egea, G. (2014) Agricultural and Forest Meteorology, 191, 22-32; Egea, G., Verhoef, A., and Vidale, P. L. (2011) Agricultural and Forest Meteorology, 151, 1370-1384.
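
    A common textbook form of the 'beta' factor referred to above, and the alternative routes through which it can limit plant function, can be sketched as follows. The linear form, the parameter values, and the function names are generic assumptions, not the paper's exact formulation.

        # Hedged sketch: a simple soil moisture stress factor ('beta') and the
        # alternative routes by which it can limit plant function. The linear
        # form and the example parameters are generic assumptions.

        def beta(theta, theta_wilt=0.10, theta_crit=0.25):
            """Linear stress factor: 0 at wilting point, 1 above critical moisture."""
            b = (theta - theta_wilt) / (theta_crit - theta_wilt)
            return min(1.0, max(0.0, b))

        def limited_fluxes(A_pot, gs_pot, theta, route="stomatal"):
            """Apply beta via one route: assimilation or stomatal conductance."""
            b = beta(theta)
            if route == "assimilation":
                return A_pot * b, gs_pot          # down-regulate carbon uptake only
            if route == "stomatal":
                return A_pot, gs_pot * b          # down-regulate water loss only
            raise ValueError(route)

        print(limited_fluxes(A_pot=20.0, gs_pot=0.3, theta=0.18, route="stomatal"))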

  17. Setting Up a Sentinel 1 Based Soil Moisture - Data Assimilation System for Flash Flood Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Cenci, Luca; Pulvirenti, Luca; Boni, Giorgio; Chini, Marco; Matgen, Patrick; Gabellani, Simone; Squicciarino, Giuseppe; Pierdicca, Nazzareno

    2017-04-01

    Several studies have shown that the assimilation of satellite-derived soil moisture products (SM-DA) within hydrological modelling is able to reduce the uncertainty of discharge predictions. This can be exploited for improving early warning systems (EWS) and is thus particularly useful for flash flood risk mitigation (Cenci et al., 2016a). The objective of this research was to evaluate the potential of an advanced SM-DA system based on the assimilation of synthetic aperture radar (SAR) observations derived from Sentinel 1 (S1) acquisitions. A time-continuous, spatially distributed, physically based hydrological model was used: Continuum (Silvestro et al., 2013). The latter is currently exploited for civil protection activities in Italy, both at national and at regional scale. Therefore, its adoption allows for a better understanding of the real potential of the aforementioned SM-DA system for improving EWS. The novelty of this research consists in the use of S1-derived SM products obtained with a multitemporal retrieval algorithm (Cenci et al., 2016b), in which the correction of the vegetation effect was obtained by means of both SAR (Cosmo-SkyMed) and optical (Landsat) images. The maps were characterised by a comparatively higher spatial/lower temporal resolution (100 m and 12 days, respectively) with respect to maps obtained from the microwave sensors commonly used for such applications (e.g. the Advanced SCATterometer, ASCAT). The experiment was carried out in the period October 2014 - February 2015 in an exemplifying Mediterranean catchment prone to flash floods: the Orba Basin (Italy). The Nudging assimilation scheme was chosen for its computational efficiency, which is particularly useful for operational applications. The impact of the assimilation was evaluated by comparing simulated and observed discharge values; in particular, the impact of the assimilation on higher flows was analysed. Results were compared with those obtained by assimilating an ASCAT-derived SM product (H08) that can be considered high spatial resolution (1 km) for hydrological applications and has high temporal resolution (36 h) (Wagner et al., 2013). Findings revealed the potential of an S1-based SM-DA system for improving discharge predictions, especially of higher flows, and suggested the most appropriate pre-processing techniques to apply to S1 data before the assimilation. The comparison with H08 highlighted the importance of the temporal resolution of the observations. Results are promising, but further research is needed before the actual implementation of the aforementioned S1-based SM-DA system for operational applications. References: - Cenci L., et al.: Assimilation of H-SAF Soil Moisture Products for Flash Flood Early Warning Systems. Case Study: Mediterranean Catchments, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 9(12), 5634-5646, doi:10.1109/JSTARS.2016.2598475, 2016a. - Cenci L., et al.: Satellite Soil Moisture Assimilation: Preliminary Assessment of the Sentinel 1 Potentialities, 2016 IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), Beijing, 3098-3101, doi:10.1109/IGARSS.2016.7729801, 2016b. - Silvestro F., et al.: Exploiting Remote Sensing Land Surface Temperature in Distributed Hydrological Modelling: the Example of the Continuum Model, Hydrol. Earth Syst. Sci., 17(1), 39-62, doi:10.5194/hess-17-39-2013, 2013. - Wagner W., et al.: The ASCAT Soil Moisture Product: A Review of its Specifications, Validation Results, and Emerging Applications, Meteorol. Zeitschrift, 22(1), 5-33, doi:10.1127/0941-2948/2013/0399, 2013.
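
    The Nudging scheme mentioned above is, in its simplest form, a relaxation of the modelled soil moisture state towards the observation whenever one is available. The sketch below is a minimal illustration; the gain value, state representation, and variable names are assumptions, and the study applied the scheme inside a distributed hydrological model rather than to a scalar state.

        # Hedged sketch of a nudging update for soil moisture data assimilation.
        # The gain G and the scalar state are illustrative assumptions.

        def nudge(theta_model, theta_obs, gain=0.5):
            """Relax the model state towards the observation where one exists."""
            if theta_obs is None:          # no satellite overpass: keep model value
                return theta_model
            return theta_model + gain * (theta_obs - theta_model)

        state = 0.22                            # modelled soil moisture (m3/m3)
        for obs in [None, 0.30, None, 0.28]:    # S1 revisit leaves gaps between obs
            state = nudge(state, obs)
            print(round(state, 3))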

  18. Coupled prediction of flash flood response and debris flow occurrence in an alpine basin

    NASA Astrophysics Data System (ADS)

    Amponsah, William

    2015-04-01

    Authors: William Amponsah (CNR-IRPI, Padova, Italy), E.I. Nikolopoulos (Department of Land, Environment, Agriculture and Forestry, University of Padova, Italy), Lorenzo Marchi (CNR-IRPI, Padova, Italy), Roberto Dinale (Ufficio Idrografico, Provincia Autonoma di Bolzano, Italy), Francesco Marra (Department of Geography, Hebrew University of Jerusalem, Israel), Davide Zoccatelli (University of Padova, Italy) and Marco Borga (University of Padova, Italy). This contribution examines the main hydrologic and morphologic metrics responsible for widespread triggering of debris flows associated with flash flood occurrences in headwater alpine catchments. To achieve this objective, we investigate the precipitation forcing, hydrologic responses, and landslide and debris-flow occurrences that prevailed during the August 4-5, 2012 extreme flash flood on the 140 km2 Vizze basin in the Eastern Alps of Italy. An intensive post-event survey was carried out a few days after the flood. This included surveys of cross-sectional geometry and flood marks for the estimation of the peak discharges at multiple river sections, and of the initiation and deposition areas of several debris flows. Rainfall estimates are based on careful analysis of weather radar observations and raingauge data. These data and observations permitted the implementation and calibration of a spatially distributed hydrological model, which was used to derive simulated flood hydrographs in 58 tributaries of the Vizze basin. Of these, 33 generated debris flows, with areas ranging from 0.02 km2 to 10 km2 and an average of 1.5 km2. With 130 mm peak event rainfall over a duration of 4 hours (with a maximum intensity of 90 mm h-1 for 10 min), model-simulated unit peak discharges range from 4 m3 s-1 km-2 for elementary catchments up to 10 km2 to 2 m3 s-1 km-2 for catchments in the range of 50-100 km2. These are very high values when considering the local runoff regime. We used a threshold criterion based on past works (Tognacca et al., 2000; Berti and Simoni, 2005; Gregoretti and Dalla Fontana, 2008) to identify tributaries associated with debris flow events. The threshold is defined for each channel grid cell as a function of the simulated unit-width peak flow, the local channel bed slope, and the mean grain size. Based on assumptions concerning the mean grain size, and given the distribution of the threshold values over the river network, we derive a catchment-scale threshold index for the tributaries. The results show that the index has considerable skill in identifying the catchments where the studied rainstorm caused debris flows. References: Berti, M. and A. Simoni, 2005: Experimental evidences and numerical modelling of debris flow initiated by channel runoff. Landslides, 2(3), 171-182. Gregoretti, C. and G. Dalla Fontana, 2008: The triggering of debris flow due to channel-bed failure in some alpine headwater basins of the Dolomites: analyses of critical runoff. Hydrol. Process., 22, 2248-2263. Tognacca, C., G.R. Bezzola and H.E. Minor, 2000: Threshold criterion for debris flow initiation due to channel bed failure. In Proceedings of the Second International Conference on Debris Flow Hazards Mitigation, Taipei, August, Wiezczorek, Naeser (eds): 89-97.
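
    The threshold criterion described above compares a simulated unit-width peak discharge with a critical value that grows with grain size and decreases with bed slope. The power-law form and coefficients below are placeholders in the spirit of the cited runoff-triggering criteria (e.g. Gregoretti and Dalla Fontana, 2008), not the study's calibrated values.

        # Hedged sketch of a channel-bed-failure threshold for debris flow
        # triggering. Functional form and coefficients are illustrative only.

        def critical_unit_discharge(d50_m, slope, a=0.8, b=1.2):
            """Critical unit-width discharge (m2/s); slope given as tan(bed angle)."""
            return a * d50_m ** 1.5 / slope ** b

        def triggers_debris_flow(q_sim, d50_m, slope):
            """Compare simulated unit-width peak flow against the threshold."""
            return q_sim >= critical_unit_discharge(d50_m, slope)

        # Example channel cell: 10 cm mean grain size on a 20% slope
        print(triggers_debris_flow(q_sim=0.9, d50_m=0.10, slope=0.20))  # True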

  19. Effective Two-way Communication of Environmental Hazards: Understanding Public Perception in the UK

    NASA Astrophysics Data System (ADS)

    Lorono-Leturiondo, Maria; O'Hare, Paul; Cook, Simon; Hoon, Stephen R.; Illingworth, Sam

    2017-04-01

    Climate change-intensified hazards, such as floods and landslides, require exploring renewed ways of protecting at-risk communities (World Economic Forum 2016). Scientists are being encouraged to explore new pathways to work closely with affected communities in search of experiential knowledge that is able to complement and extend scientific knowledge (see, for instance, Whatmore and Landström 2011 and Höppner et al. 2010). Effective two-way communication of environmental hazards is, however, a challenge. Besides considering factors such as the purpose of communication or the characteristics of the different formats, effective communication has to carefully acknowledge the personal framework of the individuals involved. Existing experiences, values, beliefs, and needs are critical determinants of the way they perceive and relate to these hazards and, in turn, of the communication process in which they are involved (Longnecker 2016 and Gibson et al. 2016). Our study builds on the need to analyse how the public perceives environmental hazards in order to establish forms of communication that work. Here we present early findings of a survey analysing the UK public's perception and outline how the survey results can guide more effective two-way communication practices between scientists and affected communities. We explore the perception of environmental hazards in terms of how informed and concerned the public is, as well as how much ownership they claim over these phenomena. In order to gain a more accurate image, we study environmental hazards in relation to other risks threatening the UK, such as large-scale involuntary migration or unemployment (World Economic Forum 2016, Bord et al. 1998). We also explore information consumption in relation to environmental hazards and the public's involvement in advancing knowledge. All these questions are accompanied by an extensive demographics section that allows us to ascertain how the context or environment in which an individual is embedded influences perception (Longnecker 2016). This study also explores the survey responses of geoscientists, or scientists working within the field of environmental hazards, as the baseline against which to compare public perception. In doing this, we aim to push for new formats of communication that are able to encompass knowledge and perception differences, as well as to draw attention to the need for a redistribution of expertise. References: Bord, R.J., Fisher, A., Robert, E.O., 1998. Public perceptions of global warming: United States and international perspectives. Climate Research 11, 75-84. Gibson, H., Stewart, I.S., Pahl, S., Stokes, A., 2016. A "mental models" approach to the communication of subsurface hydrology and hazards. Hydrol. Earth Syst. Sci. 20, 1737-1749. doi:10.5194/hess-20-1737-2016. Höppner, C., Buchecker, M., Bründl, M., 2010. Risk communication and natural hazards. CapHaz project. Birmensdorf, Switzerland. Longnecker, N., 2016. An integrated model of science communication - More than providing evidence. JCOM - The Journal of Science Communication. Whatmore, S.J., Landström, C., 2011. Flood apprentices: an exercise in making things public. Economy and Society 40, 582-610. doi:10.1080/03085147.2011.602540. World Economic Forum, 2016. The Global Risks Report 2016. Accessed November 9, 2016. https://www.weforum.org/reports/the-global-risks-report-2016/.

  20. An Analysis of the Air Force Government Operated Civil Engineering Supply Store Logistic System: How Can It Be Improved?

    DTIC Science & Technology

    1990-09-01

    Contents: Logistics Systems; GOCESS Operation; Work Order Processing; Job Order Processing. ... orders and job orders to the Material Control Section will be discussed separately. Work Order Processing: Figure 2 illustrates typical WO processing ... logistics function. The JO processing is similar. Job Order Processing: Figure 3 illustrates typical JO processing in a GOCESS operation. As with WOs, this ...

  1. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and their moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  2. Data processing system for the Sneg-2MP experiment

    NASA Technical Reports Server (NTRS)

    Gavrilova, Y. A.

    1980-01-01

    The data processing system for scientific experiments on stations of the "Prognoz" type provides for the processing sequence to be broken down into a number of consecutive stages: preliminary processing, primary processing, and secondary processing. The tasks of each data processing stage are examined for an experiment designed to study gamma flashes of galactic origin and solar flares lasting from seconds to several minutes in the 20 keV to 1000 keV energy range.

  3. General RMP Guidance - Appendix D: OSHA Guidance on PSM

    EPA Pesticide Factsheets

    OSHA's Process Safety Management (PSM) guidance on providing complete and accurate written information concerning process chemicals, process technology, and process equipment, including process hazard analysis and material safety data sheets.

  4. Elaboration Likelihood and the Counseling Process: The Role of Affect.

    ERIC Educational Resources Information Center

    Stoltenberg, Cal D.; And Others

    The role of affect in counseling has been examined from several orientations. The depth of processing model views the efficiency of information processing as a function of the extent to which the information is processed. The notion of cognitive processing capacity states that processing information at deeper levels engages more of one's limited…

  5. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  6. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  7. Information Processing Concepts: A Cure for "Technofright." Information Processing in the Electronic Office. Part 1: Concepts.

    ERIC Educational Resources Information Center

    Popyk, Marilyn K.

    1986-01-01

    Discusses the new automated office and its six major technologies (data processing, word processing, graphics, image, voice, and networking), the information processing cycle (input, processing, output, distribution/communication, and storage and retrieval), ergonomics, and ways to expand office education classes (versus class instruction). (CT)

  8. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  9. 40 CFR 65.62 - Process vent group determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., or Group 2B) for each process vent. Group 1 process vents require control, and Group 2A and 2B... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Process vent group determination. 65... (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Process Vents § 65.62 Process vent group determination. (a) Group...

  10. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .../or Table 9 compounds are similar and often identical. (3) Biological treatment processes. Biological treatment processes in compliance with this section may be either open or closed biological treatment processes as defined in § 63.111. An open biological treatment process in compliance with this section need...

  11. 5 CFR 581.202 - Service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Service of process. 581.202 Section 581... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Service of Process § 581.202 Service of process. (a) A... facilitate proper service of process on its designated agent(s). If legal process is not directed to any...

  12. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  13. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  14. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  15. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  16. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  17. Processing Depth, Elaboration of Encoding, Memory Stores, and Expended Processing Capacity.

    ERIC Educational Resources Information Center

    Eysenck, Michael W.; Eysenck, M. Christine

    1979-01-01

    The effects of several factors on expended processing capacity were measured. Expended processing capacity was greater when information was retrieved from secondary memory than from primary memory, when processing was of a deep, semantic nature than when it was shallow and physical, and when processing was more elaborate. (Author/GDC)

  18. Speed isn’t everything: Complex processing speed measures mask individual differences and developmental changes in executive control

    PubMed Central

    Cepeda, Nicholas J.; Blackwell, Katharine A.; Munakata, Yuko

    2012-01-01

    The rate at which people process information appears to influence many aspects of cognition across the lifespan. However, many commonly accepted measures of “processing speed” may require goal maintenance, manipulation of information in working memory, and decision-making, blurring the distinction between processing speed and executive control and resulting in overestimation of processing-speed contributions to cognition. This concern may apply particularly to studies of developmental change, as even seemingly simple processing speed measures may require executive processes to keep children and older adults on task. We report two new studies and a re-analysis of a published study, testing predictions about how different processing speed measures influence conclusions about executive control across the life span. We find that the choice of processing speed measure affects the relationship observed between processing speed and executive control, in a manner that changes with age, and that choice of processing speed measure affects conclusions about development and the relationship among executive control measures. Implications for understanding processing speed, executive control, and their development are discussed. PMID:23432836

  19. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    NASA Astrophysics Data System (ADS)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-05-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  20. A new class of random processes with application to helicopter noise

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.; Miamee, A. G.

    1989-01-01

    The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density S_x(omega_1, omega_2) is shown to take a particularly simple form, being non-zero only on the lines omega_1 - omega_2 = +/- r_k, where the r_k are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
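
    In conventional notation, the support property stated above can be written as a spectral density concentrated on the lines omega_1 - omega_2 = +/- r_k. The weight functions S_k below are schematic placeholders (the abstract specifies only where the density is non-zero), so this LaTeX form is a hedged reconstruction rather than the paper's exact expression.

        % Hedged reconstruction of the stated support property; the weights
        % S_k^{+/-} are schematic, not taken from the source.
        S_x(\omega_1, \omega_2)
          = \sum_k \left[ S_k^{+}(\omega_1)\,\delta(\omega_1 - \omega_2 - r_k)
                        + S_k^{-}(\omega_1)\,\delta(\omega_1 - \omega_2 + r_k) \right]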

  1. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    PubMed Central

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling. PMID:27157858

  3. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  4. Rapid Automatized Naming in Children with Dyslexia: Is Inhibitory Control Involved?

    PubMed

    Bexkens, Anika; van den Wildenberg, Wery P M; Tijms, Jurgen

    2015-08-01

    Rapid automatized naming (RAN) is widely seen as an important indicator of dyslexia. The nature of the cognitive processes involved in rapid naming is, however, still a topic of controversy. We hypothesized that in addition to the involvement of phonological processes and processing speed, RAN is a function of inhibition processes, in particular of interference control. A total of 86 children with dyslexia and 31 normal readers were recruited. Our results revealed that in addition to phonological processing and processing speed, interference control predicts rapid naming in dyslexia, but in contrast to these other two cognitive processes, inhibition is not significantly associated with their reading and spelling skills. After variance in reading and spelling associated with processing speed, interference control and phonological processing was partialled out, naming speed was no longer consistently associated with the reading and spelling skills of children with dyslexia. Finally, dyslexic children differed from normal readers on naming speed, literacy skills, phonological processing and processing speed, but not on inhibition processes. Both theoretical and clinical interpretations of these results are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Feasibility of using continuous chromatography in downstream processing: Comparison of costs and product quality for a hybrid process vs. a conventional batch process.

    PubMed

    Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian

    2017-10-10

    The protein A capture step is the main cost driver in downstream processing, with high attrition costs especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the process adaptations required compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in product quality and step yield comparable to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, the cost of goods for the protein A step was compared between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Non-Conscious Perception of Emotions in Psychiatric Disorders: The Unsolved Puzzle of Psychopathology.

    PubMed

    Lee, Seung A; Kim, Chai-Youn; Lee, Seung-Hwan

    2016-03-01

    Psychophysiological and functional neuroimaging studies have frequently and consistently shown that emotional information can be processed outside of conscious awareness. Non-conscious processing comprises automatic, uncontrolled, and fast processing that occurs without subjective awareness. However, how such non-conscious emotional processing occurs in patients with various psychiatric disorders requires further examination. In this article, we review and discuss previous studies on non-conscious emotional processing in patients diagnosed with anxiety disorder, schizophrenia, bipolar disorder, and depression, to further understand how non-conscious emotional processing varies across these psychiatric disorders. Although the symptom profile of each disorder does not often overlap with the others, these patients commonly show abnormal emotional processing based on the pathology of their mood and cognitive function. This indicates that the observed abnormalities of emotional processing in certain social interactions may derive from a biased mood or cognition process that precedes consciously controlled and voluntary processes. Since preconscious forms of emotional processing appear to have a major effect on behaviour and cognition in patients with these disorders, further investigation is required to understand these processes and their impact on patient pathology.

  7. Empirical evaluation of the Process Overview Measure for assessing situation awareness in process plants.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-03-01

    The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) in monitoring process plants. A companion paper describes how the measure was developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback from participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of the experimental manipulations that corroborated other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected from process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on the monitoring of process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) in monitoring process plants in representative settings.

  8. A Framework for Business Process Change Requirements Analysis

    NASA Astrophysics Data System (ADS)

    Grover, Varun; Otim, Samuel

    The ability to quickly and continually adapt business processes to accommodate evolving requirements and opportunities is critical for success in competitive environments. Without appropriate linkage between redesign decisions and strategic inputs, identifying processes that need to be modified will be difficult. In this paper, we draw attention to the analysis of business process change requirements in support of process change initiatives. Business process redesign is a multifaceted phenomenon involving processes, organizational structure, management systems, human resource architecture, and many other aspects of organizational life. To be successful, the business process initiative should focus not only on identifying the processes to be redesigned, but also pay attention to various enablers of change. Above all, a framework is just a blueprint; management must lead change. We hope our modest contribution will draw attention to the broader framing of requirements for business process change.

  9. Process Analytical Technology (PAT): batch-to-batch reproducibility of fermentation processes by robust process operational design and control.

    PubMed

    Gnoth, S; Jenzsch, M; Simutis, R; Lübbert, A

    2007-10-31

    The Process Analytical Technology (PAT) initiative of the FDA is a reaction to the increasing discrepancy between what is currently possible in the supervision and control of pharmaceutical production processes and what is actually applied in industrial manufacturing. With rigid approval practices based on standard operating procedures, adapting production reactors to the state of the art was more or less inhibited for many years. Now PAT paves the way for continuous process and product improvements through improved process supervision based on knowledge-based data analysis, "Quality-by-Design" concepts, and, finally, feedback control. Examples of up-to-date implementations of this concept are presented. They are taken from one key group of processes in recombinant pharmaceutical protein manufacturing, the cultivation of genetically modified Escherichia coli bacteria.

  10. When teams shift among processes: insights from simulation and optimization.

    PubMed

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zaccaro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
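
    The genetic-algorithm step described above can be illustrated with a minimal sketch that evolves a vector of candidate shift points against a toy objective. The chromosome encoding, fitness function, and parameters below are hypothetical stand-ins chosen for illustration, not the representation used in the article.

      # Minimal genetic-algorithm sketch for searching over process-shift timings.
      # The encoding (a binary vector marking shift points) and the toy fitness
      # are invented for illustration; they are not the authors' actual model.
      import random

      N_STEPS, POP, GENS = 20, 30, 50

      def fitness(chrom):
          # Toy objective: reward shift points spaced roughly five steps apart.
          shifts = [i for i, g in enumerate(chrom) if g]
          if len(shifts) < 2:
              return -N_STEPS
          return -sum(abs((b - a) - 5) for a, b in zip(shifts, shifts[1:]))

      def crossover(a, b):
          cut = random.randrange(1, N_STEPS)
          return a[:cut] + b[cut:]

      def mutate(chrom, rate=0.05):
          return [g ^ (random.random() < rate) for g in chrom]

      pop = [[random.randint(0, 1) for _ in range(N_STEPS)] for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          elite = pop[:POP // 2]
          pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                         for _ in range(POP - len(elite))]

      print("best shift pattern:", pop[0], "fitness:", fitness(pop[0]))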

  11. Nitrous oxide and methane emissions from different treatment processes in full-scale municipal wastewater treatment plants.

    PubMed

    Rena, Y G; Wang, J H; Li, H F; Zhang, J; Qi, P Y; Hu, Z

    2013-01-01

    Nitrous oxide (N2O) and methane (CH4) are two important greenhouse gases (GHG) emitted from biological nutrient removal (BNR) processes in municipal wastewater treatment plants (WWTP). In this study, three typical biological wastewater treatment processes were studied in WWTPs in northern China: the pre-anaerobic carrousel oxidation ditch (A+OD) process, the pre-anoxic anaerobic-anoxic-oxic (A-A/A/O) process and the reverse anaerobic-anoxic-oxic (r-A/A/O) process. The N2O and CH4 emissions from these three different processes were measured in every processing unit of each WWTP. Results showed that N2O and CH4 were mainly discharged during the nitrification/denitrification process and the anaerobic/anoxic treatment process, respectively, and that the amounts formed and released were significantly influenced by the BNR process implemented in each WWTP. The N2O conversion ratio of the r-A/A/O process was the lowest among the three WWTP, 10.9% and 18.6% lower than that of the A-A/A/O process and the A+OD process, respectively. Similarly, the CH4 conversion ratio of the r-A/A/O process was the lowest among the three WWTP, 89.1% and 80.8% lower than that of the A-A/A/O process and the A+OD process, respectively. The factors influencing N2O and CH4 formation and emission in the three WWTP were investigated to explain the differences between these processes. The nitrite concentration and the oxidation-reduction potential (ORP) value were found to be the dominant factors affecting N2O and CH4 production, respectively. Flow-based emission factors of N2O and CH4 for the WWTP were determined for better quantification of GHG emissions and further technical assessment of mitigation options.
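
    A flow-based emission factor of the kind reported above is simply the mass of gas emitted normalized by the volume of wastewater treated. A minimal sketch with placeholder figures (not values from the study):

      # Flow-based emission factor: grams of gas emitted per cubic metre treated.
      # Both figures below are hypothetical placeholders, not study data.
      n2o_emitted_g_per_day = 1200.0   # plant-wide N2O emission (assumed)
      flow_m3_per_day = 50000.0        # influent flow (assumed)
      ef_n2o = n2o_emitted_g_per_day / flow_m3_per_day
      print(f"N2O emission factor: {ef_n2o:.3f} g per m3 of wastewater treated")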

  12. Effects of children's working memory capacity and processing speed on their sentence imitation performance.

    PubMed

    Poll, Gerard H; Miller, Carol A; Mainela-Arnold, Elina; Adams, Katharine Donnelly; Misra, Maya; Park, Ji Sook

    2013-01-01

    More limited working memory capacity and slower processing for language and cognitive tasks are characteristics of many children with language difficulties. Individual differences in processing speed have not consistently been found to predict language ability or severity of language impairment. There are conflicting views on whether working memory and processing speed are integrated or separable abilities. To evaluate four models for the relations of individual differences in children's processing speed and working memory capacity in sentence imitation. The models considered whether working memory and processing speed are integrated or separable, as well as the effect of the number of operations required per sentence. The role of working memory as a mediator of the effect of processing speed on sentence imitation was also evaluated. Forty-six children with varied language and reading abilities imitated sentences. Working memory was measured with the Competing Language Processing Task (CLPT), and processing speed was measured with a composite of truth-value judgment and rapid automatized naming tasks. Mixed-effects ordinal regression models evaluated the CLPT and processing speed as predictors of sentence imitation item scores. A single mediator model evaluated working memory as a mediator of the effect of processing speed on sentence imitation total scores. Working memory was a reliable predictor of sentence imitation accuracy, but processing speed predicted sentence imitation only as a component of a processing speed by number of operations interaction. Processing speed predicted working memory capacity, and there was evidence that working memory acted as a mediator of the effect of processing speed on sentence imitation accuracy. The findings support a refined view of working memory and processing speed as separable factors in children's sentence imitation performance. Processing speed does not independently explain sentence imitation accuracy for all sentence types, but contributes when the task requires more mental operations. Processing speed also has an indirect effect on sentence imitation by contributing to working memory capacity. © 2013 Royal College of Speech and Language Therapists.
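
    The mediation logic tested above (processing speed acting on sentence imitation through working memory) can be sketched with a simple two-regression estimate of the indirect effect on synthetic data. This is illustrative only and far simpler than the mixed-effects ordinal models the authors used.

      # Simple single-mediator sketch: speed -> memory -> imitation.
      # Synthetic data; the effect sizes are arbitrary, not study estimates.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 46
      speed = rng.normal(size=n)                     # processing-speed composite
      memory = 0.6 * speed + rng.normal(size=n)      # working-memory capacity
      imitation = 0.5 * memory + rng.normal(size=n)  # sentence-imitation score

      # Indirect effect a*b: speed -> memory (a), memory -> imitation given speed (b).
      a = sm.OLS(memory, sm.add_constant(speed)).fit().params[1]
      b = sm.OLS(imitation,
                 sm.add_constant(np.column_stack([speed, memory]))).fit().params[2]
      print(f"estimated indirect effect (a*b) = {a * b:.3f}")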

  13. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on CMC study of Chinese medicine. Here the manufacturing process of Panax notoginseng saponins (PNS) is taken as a case study, and the present work establishes a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected and established by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between Q-markers and the critical process parameters (CPPs) of the key processes. At last, the CPPs of the key processes are optimized in order to improve process efficiency. Among the 97 identified compounds, notoginsenoside R1 and ginsenosides Rg1, Re, Rb1 and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction process and the column chromatography process are the key processes. With the CPPs of each process as inputs and the Q-marker contents as outputs, two process prediction models are built separately for the extraction process and the column chromatography process of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models we constructed for the extraction and column chromatography processes, the optimal CPPs of both processes are calculated. Our results show that the Q-marker derived CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product quality and promote the efficiency of key processes simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.
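
    The modeling-and-optimization step described above can be sketched as fitting a predictive model from CPPs to a Q-marker content and then searching the CPP space for a maximum. The data, model form, and parameter bounds below are invented for illustration; they are not the models built in the study.

      # Sketch: relate CPPs to Q-marker content, then optimize the CPPs.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      # Hypothetical CPPs: extraction temperature (C) and solvent ratio.
      X = rng.uniform([60, 1.0], [90, 3.0], size=(40, 2))
      # Hypothetical Q-marker content with an optimum near 80 C.
      y = 50 - 0.02 * (X[:, 0] - 80) ** 2 + 4 * X[:, 1] + rng.normal(0, 0.5, 40)

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Maximize the predicted Q-marker content over the CPP ranges.
      result = differential_evolution(
          lambda x: -model.predict(x.reshape(1, -1))[0],
          bounds=[(60, 90), (1.0, 3.0)], seed=0)
      print("optimal CPPs:", result.x, "predicted content:", -result.fun)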

  14. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    PubMed Central

    2012-01-01

    Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios simple to implement in PyMS, yet difficult to achieve with many conventional GC-MS data processing software packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface. PMID:22647087
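
    The kind of scripted, non-interactive pipeline PyMS enables can be sketched with generic NumPy/SciPy calls: smoothing, a crude baseline correction, and peak detection on a total-ion chromatogram. This is a schematic stand-in, not PyMS's actual API.

      # Schematic GC-MS TIC pipeline on synthetic data (not PyMS's API).
      import numpy as np
      from scipy.signal import savgol_filter, find_peaks

      rng = np.random.default_rng(2)
      t = np.linspace(0, 30, 3000)  # retention time (min)
      tic = (np.exp(-((t - 8) ** 2) / 0.02)             # two synthetic peaks
             + 0.6 * np.exp(-((t - 17) ** 2) / 0.05)
             + 0.05 * t                                  # baseline drift
             + rng.normal(0, 0.01, t.size))              # noise

      smoothed = savgol_filter(tic, window_length=21, polyorder=3)  # smoothing
      corrected = smoothed - np.percentile(smoothed, 10)  # crude baseline removal
      peaks, _ = find_peaks(corrected, prominence=0.2, distance=50)  # detection
      print("peak retention times (min):", np.round(t[peaks], 2))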

  15. The Research Process on Converter Steelmaking Process by Using Limestone

    NASA Astrophysics Data System (ADS)

    Tang, Biao; Li, Xing-yi; Cheng, Han-chi; Wang, Jing; Zhang, Yun-long

    2017-08-01

    Compared with the traditional converter steelmaking process, the steelmaking process with limestone uses limestone to partly replace lime. Many researchers have studied the new steelmaking process, and there is much related research on material balance calculation, the behaviour of limestone in the slag, limestone powder injection in the converter, and the application of limestone in iron and steel enterprises. The results show that the surplus heat of the converter can meet the needs of limestone calcination, and that the new process can reduce the energy loss of the whole steelmaking process, reduce carbon dioxide emissions, and improve the quality of the gas.
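
    The heat-balance conclusion can be checked against textbook figures: calcining limestone (CaCO3 -> CaO + CO2) absorbs roughly 178 kJ/mol, i.e. about 1.8 GJ per tonne of CaCO3. A back-of-the-envelope sketch (textbook approximations, not values from the cited research):

      # Heat demand of limestone calcination, CaCO3 -> CaO + CO2.
      DH_KJ_PER_MOL = 178.0         # approximate reaction enthalpy at 25 C
      M_CACO3_G_PER_MOL = 100.09    # molar mass of CaCO3
      # kJ/g is numerically equal to GJ/t.
      heat_gj_per_tonne = DH_KJ_PER_MOL / M_CACO3_G_PER_MOL
      print(f"calcination heat: ~{heat_gj_per_tonne:.2f} GJ per tonne of CaCO3")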

  16. Gas processing handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-04-01

    Brief details are given of processes including: BGC-Lurgi slagging gasification, COGAS, Exxon catalytic coal gasification, FW-Stoic 2-stage, GI two stage, HYGAS, Koppers-Totzek, Lurgi pressure gasification, Saarberg-Otto, Shell, Texaco, U-Gas, W-D.IGI, Wellman-Galusha, Westinghouse, and Winkler coal gasification processes; the Rectisol process; the Catacarb and the Benfield processes for removing CO2, H2S and COS from gases produced by the partial oxidation of coal; the Selectamine DD, Selexol solvent, and Sulfinol gas cleaning processes; the sulphur-tolerant shift (SSK) process; and the Super-meth process for the production of high-Btu gas from synthesis gas.

  17. Working on the Boundaries: Philosophies and Practices of the Design Process

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Blair, J.; Townsend, J.; Verderaime, V.

    1996-01-01

    While the systems engineering process is a formal program management technique and is contractually binding, the design process is the informal practice of achieving the design project requirements throughout all design phases of the systems engineering process. The design process and organization are system and component dependent. Informal reviews include technical information meetings and concurrent engineering sessions, while formal technical discipline reviews are conducted through the systems engineering process. This paper discusses and references major philosophical principles in the design process, identifies its role in the analysis and integration of interacting systems and disciplines, and illustrates the application of the process in experienced aerostructural designs.

  18. Chemical processing of lunar materials

    NASA Technical Reports Server (NTRS)

    Criswell, D. R.; Waldron, R. D.

    1979-01-01

    The paper highlights recent work on the general problem of processing lunar materials. The discussion covers lunar source materials, refined products, motivations for using lunar materials, and general considerations for a lunar or space processing plant. Attention is given to chemical processing through various techniques, including electrolysis of molten silicates, carbothermic/silicothermic reduction, carbo-chlorination process, NaOH basic-leach process, and HF acid-leach process. Several options for chemical processing of lunar materials are well within the state of the art of applied chemistry and chemical engineering to begin development based on the extensive knowledge of lunar materials.

  19. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the security software process and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission and maintenance. For each step, the paper proposes subprocesses to support software security. As an example, the paper applies the above process to a power information platform.

  20. Sensor-based atomic layer deposition for rapid process learning and enhanced manufacturability

    NASA Astrophysics Data System (ADS)

    Lei, Wei

    In the search for a sensor-based atomic layer deposition (ALD) process to accelerate process learning and enhance manufacturability, we have explored new reactor designs and applied in-situ process sensing to W and HfO2 ALD processes. A novel wafer scale ALD reactor, which features fast gas switching, good process sensing compatibility and significant similarity to the real manufacturing environment, is constructed. The reactor has a unique movable reactor cap design that allows two possible operation modes: (1) steady-state flow with alternating gas species; or (2) fill-and-pump-out cycling of each gas, accelerating the pump-out by lifting the cap to employ the large chamber volume as ballast. Downstream quadrupole mass spectrometry (QMS) sampling is applied for in-situ process sensing of the tungsten ALD process. The QMS reveals essential surface reaction dynamics through real-time signals associated with byproduct generation as well as precursor introduction and depletion for each ALD half cycle, which are then used for process learning and optimization. More subtle interactions such as imperfect surface saturation and reactant dose interaction are also directly observed by QMS, indicating that the ALD process is more complicated than the suggested layer-by-layer growth. By integrating in real time the byproduct QMS signal over each exposure and plotting it against process cycle number, the deposition kinetics on the wafer is directly measured. For continuous ALD runs, the total integrated byproduct QMS signal in each ALD run is also linear in ALD film thickness, and therefore can be used for ALD film thickness metrology. The in-situ process sensing is also applied to an HfO2 ALD process that is carried out in a furnace type ALD reactor. Precursor dose end-point control is applied to precisely control the precursor dose in each half cycle. Multiple process sensors, including a quartz crystal microbalance (QCM) and QMS, are used to provide real time process information. The sensing results confirm the proposed surface reaction path and once again reveal the complexity of ALD processes. The impact of this work includes: (1) it explores new ALD reactor designs which enable the implementation of in-situ process sensors for rapid process learning and enhanced manufacturability; (2) it demonstrates for the first time that in-situ QMS can reveal detailed process dynamics and film growth kinetics in a wafer-scale ALD process, and thus can be used for ALD film thickness metrology; and (3) based on results from two different processes carried out in two different reactors, it is clear that ALD is a more complicated process than normally believed or advertised, but real-time observation of the operational chemistries in ALD by in-situ sensors provides critical insight into the process and the basis for more effective process control for ALD applications.
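
    The thickness-metrology idea (integrate the byproduct QMS signal over each exposure, then accumulate across cycles) can be sketched on a synthetic signal. The trace and the calibration constant below are hypothetical.

      # Per-cycle integration of a synthetic byproduct QMS trace for thickness
      # metrology; pulse shape and calibration are invented for illustration.
      import numpy as np

      rng = np.random.default_rng(3)
      n_cycles, samples_per_cycle = 50, 100
      pulse = np.exp(-np.linspace(0, 5, samples_per_cycle))  # one pulse per cycle
      signal = np.concatenate(
          [pulse * rng.uniform(0.9, 1.1) for _ in range(n_cycles)])

      per_cycle = signal.reshape(n_cycles, samples_per_cycle).sum(axis=1)
      cumulative = np.cumsum(per_cycle)           # tracks film growth
      k_nm_per_unit = 0.05 / per_cycle.mean()     # hypothetical: ~0.05 nm/cycle
      print(f"estimated film thickness: {cumulative[-1] * k_nm_per_unit:.2f} nm")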

  1. Implicit Processes, Self-Regulation, and Interventions for Behavior Change.

    PubMed

    St Quinton, Tom; Brunton, Julie A

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest with a gap between intention and behavior apparent. Dual-process models have been recently applied to health psychology; with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined to attempt an increase in positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinning. The article will then highlight how these processes have been measured and will then discuss the different ways that the non-conscious and conscious may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior.

  2. All varieties of encoding variability are not created equal: Separating variable processing from variable tasks

    PubMed Central

    Huff, Mark J.; Bodner, Glen E.

    2014-01-01

    Whether encoding variability facilitates memory is shown to depend on whether item-specific and relational processing are both performed across study blocks, and whether study items are weakly versus strongly related. Variable-processing groups studied a word list once using an item-specific task and once using a relational task. Variable-task groups’ two different study tasks recruited the same type of processing each block. Repeated-task groups performed the same study task each block. Recall and recognition were greatest in the variable-processing group, but only with weakly related lists. A variable-processing benefit was also found when task-based processing and list-type processing were complementary (e.g., item-specific processing of a related list) rather than redundant (e.g., relational processing of a related list). That performing both item-specific and relational processing across trials, or within a trial, yields encoding-variability benefits may help reconcile decades of contradictory findings in this area. PMID:25018583

  3. Continuous welding of unidirectional fiber reinforced thermoplastic tape material

    NASA Astrophysics Data System (ADS)

    Schledjewski, Ralf

    2017-10-01

    Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality, and the effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. It requires solving inverse problems and using automated code generation methods that allow fast implementation of algorithms on targets. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model based process control system are presented.

  4. Economics of polysilicon process: A view from Japan

    NASA Technical Reports Server (NTRS)

    Shimizu, Y.

    1986-01-01

    The production process for solar grade silicon (SOG-Si) through trichlorosilane (TCS) was researched in a program sponsored by the New Energy Development Organization (NEDO). The NEDO process consists of the following two steps: TCS production from by-product silicon tetrachloride (STC), and SOG-Si formation from TCS using a fluidized bed reactor. Based on the data obtained during the research program, the manufacturing cost of the NEDO process and other polysilicon manufacturing processes were compared. The manufacturing cost was calculated on the basis of 1000 tons/year production. The cost estimate showed that the cost of producing silicon by all of the new processes is less than the cost of the conventional Siemens process. Using a new process, the cost of producing semiconductor grade silicon was found to be virtually the same for the TCS, dichlorosilane, and monosilane processes when by-products were recycled. The SOG-Si manufacturing process using the fluidized bed reactor, which needs further development, shows a greater probability of cost reduction than the filament processes.

  5. Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka

    Rapid changes in market demands and the pressures of competition require manufacturers to maintain highly flexible manufacturing systems to cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper to generate suitable process plans for the target products in real time and dynamically, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in the previous paper, and suitable process plans are searched for and generated to cope with both dynamic changes of the product specifications and disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols in order to generate suitable process plans in the dynamic manufacturing environment.
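
    The search over a process plan network can be sketched as a lowest-cost path problem in which nodes are workpiece states and edges are alternative manufacturing processes. The network, process names, and costs below are invented, and the plain uniform-cost search stands in for the heuristic search and negotiation protocol of the paper.

      # Lowest-cost process plan over a toy process plan network.
      import heapq

      # state -> list of (next_state, process_name, cost); all values invented
      network = {
          "blank":    [("milled", "milling", 5.0), ("turned", "turning", 4.0)],
          "milled":   [("drilled", "drilling", 2.0)],
          "turned":   [("drilled", "drilling", 2.5)],
          "drilled":  [("finished", "grinding", 3.0)],
          "finished": [],
      }

      def cheapest_plan(start, goal):
          # Uniform-cost search returning the lowest-cost process sequence.
          frontier = [(0.0, start, [])]
          seen = set()
          while frontier:
              cost, state, plan = heapq.heappop(frontier)
              if state == goal:
                  return cost, plan
              if state in seen:
                  continue
              seen.add(state)
              for nxt, proc, c in network[state]:
                  heapq.heappush(frontier, (cost + c, nxt, plan + [proc]))
          return None

      print(cheapest_plan("blank", "finished"))
      # -> (9.5, ['turning', 'drilling', 'grinding'])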

  6. Implicit Processes, Self-Regulation, and Interventions for Behavior Change

    PubMed Central

    St Quinton, Tom; Brunton, Julie A.

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest with a gap between intention and behavior apparent. Dual-process models have been recently applied to health psychology; with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined to attempt an increase in positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinning. The article will then highlight how these processes have been measured and will then discuss the different ways that the non-conscious and conscious may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior. PMID:28337164

  7. Models of recognition: a review of arguments in favor of a dual-process account.

    PubMed

    Diana, Rachel A; Reder, Lynne M; Arndt, Jason; Park, Heekyeong

    2006-02-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models.

  8. Integrating Thermal Tools Into the Mechanical Design Process

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.; Siebes, Georg; Novak, Keith S.; Kinsella, Gary M.

    1999-01-01

    The intent of mechanical design is to deliver a hardware product that meets or exceeds customer expectations, while reducing cycle time and cost. To this end, an integrated mechanical design process enables the idea of parallel development (concurrent engineering). This represents a shift from the traditional mechanical design process. With such a concurrent process, there are significant issues that have to be identified and addressed before re-engineering the mechanical design process to facilitate concurrent engineering. These issues also assist in the integration and re-engineering of the thermal design sub-process since it resides within the entire mechanical design process. With these issues in mind, a thermal design sub-process can be re-defined in a manner that has a higher probability of acceptance, thus enabling an integrated mechanical design process. However, the actual implementation is not always problem-free. Experience in applying the thermal design sub-process to actual situations provides the evidence for improvement, but more importantly, for judging the viability and feasibility of the sub-process.

  9. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    To establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, the formula medicinal material of Yiqi Fumai lyophilized injection, near infrared spectroscopy was combined with multivariate data analysis. The multivariate statistical process control (MSPC) model was established based on 5 normal batches in production, and 2 test batches were monitored by PC scores, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus, and reflect changes of material properties in the production process in real time. This established process monitoring method could serve as a reference for the application of process analytical technology to the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
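
    The charting idea can be sketched by fitting PCA to spectra from normal batches and computing Hotelling's T2 for new observations against an approximate control limit. The data are synthetic and the chi-squared limit is a common simplification.

      # Hotelling T2 monitoring on PCA scores; synthetic stand-in data.
      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.stats import chi2

      rng = np.random.default_rng(4)
      normal_batches = rng.normal(size=(100, 50))  # rows: samples, cols: channels
      pca = PCA(n_components=3).fit(normal_batches)

      def hotelling_t2(X):
          scores = pca.transform(X)
          return np.sum(scores ** 2 / pca.explained_variance_, axis=1)

      limit = chi2.ppf(0.95, df=3)                # approximate 95% control limit
      new_obs = rng.normal(size=(5, 50)) + 0.8    # a shifted (abnormal) batch
      print("T2:", np.round(hotelling_t2(new_obs), 2), "limit:", round(limit, 2))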

  10. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  11. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, e.g., wrong dose or species, double implants and missed implants. Process Control Terminals (PCTs) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with the use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact of a process control terminal's capability is increased productivity, ergo higher device yield.

  12. An Aspect-Oriented Framework for Business Process Improvement

    NASA Astrophysics Data System (ADS)

    Pourshahid, Alireza; Mussbacher, Gunter; Amyot, Daniel; Weiss, Michael

    Recently, many organizations have invested in Business Process Management Systems (BPMSs) in order to automate and monitor their processes. Business Activity Monitoring is one of the essential modules of a BPMS as it provides the core monitoring capabilities. Although the natural step after process monitoring is process improvement, most existing systems do not provide the means to help users with the improvement step. In this paper, we address this issue by proposing an aspect-oriented framework that allows the impact of changes to business processes to be explored with what-if scenarios based on the most appropriate process redesign patterns among several possibilities. As the four cornerstones of a BPMS are the process, goal, performance and validation views, these views need to be aligned automatically by any approach that intends to support automated improvement of business processes. Our framework therefore provides the means to reflect process changes in the other views of the business process as well. A health care case study presented as a proof of concept suggests that this novel approach is feasible.

  13. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
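
    For intuition, the entropy rate has a simple closed form for an ordinary finite Markov chain, h = -sum_i pi_i sum_j P_ij log2 P_ij, where pi is the stationary distribution. The sketch below computes this discrete-time analogue; the semi-Markov calculation in the paper is considerably more involved.

      # Entropy rate of a finite Markov chain.
      import numpy as np

      P = np.array([[0.9, 0.1],
                    [0.5, 0.5]])  # example transition matrix (invented)

      # Stationary distribution: left eigenvector of P for eigenvalue 1.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmin(np.abs(w - 1))])
      pi /= pi.sum()

      h = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
               for i in range(2) for j in range(2) if P[i, j] > 0)
      print(f"entropy rate: {h:.4f} bits per symbol")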

  14. Combined mesophilic anaerobic and thermophilic aerobic digestion process for high-strength food wastewater to increase removal efficiency and reduce sludge discharge.

    PubMed

    Jang, H M; Park, S K; Ha, J H; Park, J M

    2014-01-01

    In this study, a process that combines mesophilic anaerobic digestion (MAD) with thermophilic aerobic digestion (TAD) for high-strength food wastewater (FWW) treatment was developed to examine the removal of organic matter and methane production. All effluent discharged from the MAD process was separated into solid and liquid portions. The liquid part was discarded and the sludge part was passed to the TAD process for further degradation. Then, the digested sludge from the TAD process was recycled back to the MAD unit to achieve low sludge discharge from the combined process. The reactor combination was operated in two phases: during Phase I, a total hydraulic retention time (HRT) of 40 d was applied; during Phase II, 20 d. The HRT of the TAD process was fixed at 5 d. For comparison, a control process (single-stage MAD) was operated with the same HRTs as the combined process. Our results indicated that the combined process achieved over 90% removal efficiencies for total solids, volatile solids and chemical oxygen demand. In addition, the combined process showed a significantly higher methane production rate than the control process. Consequently, the experimental data demonstrated that the combined MAD-TAD process was successfully employed for high-strength FWW treatment with highly efficient organic matter reduction and methane production.

  15. Leading processes of patient care and treatment in hierarchical healthcare organizations in Sweden--process managers' experiences.

    PubMed

    Nilsson, Kerstin; Sandoff, Mette

    2015-01-01

    The purpose of this study is to gain better understanding of the roles and functions of process managers by describing Swedish process managers' experiences of leading processes involving patient care and treatment when working in a hierarchical health-care organization. This study is based on an explorative design. The data were gathered from interviews with 12 process managers at three Swedish hospitals. These data underwent qualitative and interpretative analysis with a modified editing style. The process managers' experiences of leading processes in a hierarchical health-care organization are described under three themes: having or not having a mandate, exposure to conflict situations and leading process development. The results indicate a need for clarity regarding process manager's responsibility and work content, which need to be communicated to all managers and staff involved in the patient care and treatment process, irrespective of department. There also needs to be an emphasis on realistic expectations and orientation of the goals that are an intrinsic part of the task of being a process manager. Generalizations from the results of the qualitative interview studies are limited, but a deeper understanding of the phenomenon was reached, which, in turn, can be transferred to similar settings. This study contributes qualitative descriptions of leading care and treatment processes in a functional, hierarchical health-care organization from process managers' experiences, a subject that has not been investigated earlier.

  16. Information Technology Process Improvement Decision-Making: An Exploratory Study from the Perspective of Process Owners and Process Managers

    ERIC Educational Resources Information Center

    Lamp, Sandra A.

    2012-01-01

    There is information available in the literature that discusses information technology (IT) governance and investment decision making from an executive-level perception, yet there is little information available that offers the perspective of process owners and process managers pertaining to their role in IT process improvement and investment…

  17. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  18. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  19. 15 CFR 15.3 - Acceptance of service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Acceptance of service of process. 15.3... Process § 15.3 Acceptance of service of process. (a) Except as otherwise provided in this subpart, any... employee by law is to be served personally with process. Service of process in this case is inadequate when...

  20. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. The volume of sales, costs and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are weaknesses in the application of the process approach in industrial practice: in many organizations in Slovakia it has often been only a formal change from functional management to process management. For efficient process management it is essential that companies pay attention to the way their processes are organized and seek their continuous improvement.

  1. Is Primary-Process Cognition a Feature of Hypnosis?

    PubMed

    Finn, Michael T; Goldman, Jared I; Lyon, Gyrid B; Nash, Michael R

    2017-01-01

    The division of cognition into primary and secondary processes is an important part of contemporary psychoanalytic metapsychology. Whereas primary processes are most characteristic of unconscious thought and loose associations, secondary processes generally govern conscious thought and logical reasoning. It has been theorized that an induction into hypnosis is accompanied by a predomination of primary-process cognition over secondary-process cognition. The authors hypothesized that highly hypnotizable individuals would demonstrate more primary-process cognition as measured by a recently developed cognitive-perceptual task. This hypothesis was not supported. In fact, low hypnotizable participants demonstrated higher levels of primary-process cognition. Exploratory analyses suggested a more specific effect: felt connectedness to the hypnotist seemed to promote secondary-process cognition among low hypnotizable participants.

  2. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  3. Object-processing neural efficiency differentiates object from spatial visualizers.

    PubMed

    Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria

    2008-11-19

    The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.

  4. Process simulation for advanced composites production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  5. CDO budgeting

    NASA Astrophysics Data System (ADS)

    Nesladek, Pavel; Wiswesser, Andreas; Sass, Björn; Mauermann, Sebastian

    2008-04-01

    The critical dimension off-target (CDO) is a key parameter for mask house customers, directly affecting the performance of the mask. The CDO is the difference between the feature size target and the measured feature size. The change of CD during the process is compensated either within the process or by data correction; these compensation methods are commonly called process bias and data bias, respectively. A difference between data bias and process bias in manufacturing results in a systematic CDO error; however, this systematic error does not take into account the instability of the process bias. This instability is a result of minor variations - instabilities of manufacturing processes and changes in materials and/or logistics. Using several masks, the CDO of the manufacturing line can be estimated. For systematic investigation of the unit process contributions to CDO and analysis of the factors influencing them, a solid understanding of each unit process and a huge number of masks are necessary. Rough identification of contributing processes and splitting of the final CDO variation between processes can be done with approx. 50 masks with identical design, material and process. Such an amount of data allows us to identify the main contributors and estimate their effect by means of analysis of variance (ANOVA) combined with multivariate analysis. The analysis does not provide information about the root cause of the variation within a particular unit process; however, it provides a good estimate of the impact of the process on the stability of the manufacturing line. Additionally, this analysis can be used to identify possible interactions between processes, which cannot be investigated if only single processes are considered. The goal of this work is to evaluate the limits for CDO budgeting models given by the precision and the number of measurements, as well as to partition the variation within the manufacturing process. The CDO variation splits, according to the suggested model, into contributions from particular processes or process groups. Last but not least, the power of this method to determine the absolute strength of each parameter will be demonstrated. Identification of the root cause of the variation within the unit process itself is not within the scope of this work.
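
    The variance-partitioning step can be sketched as a one-way decomposition of CDO into between-process and within-process sums of squares. The groups and values below are synthetic stand-ins for masks run through different unit processes.

      # One-way ANOVA-style partition of CDO variation; synthetic data.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(5)
      data = pd.DataFrame({
          "process": np.repeat(["litho", "etch", "metrology"], 17),  # ~50 masks
          "cdo": np.concatenate(
              [rng.normal(m, 1.0, 17) for m in (0.0, 0.8, -0.4)]),
      })

      grand = data["cdo"].mean()
      groups = data.groupby("process")["cdo"]
      ss_between = sum(g.size * (g.mean() - grand) ** 2 for _, g in groups)
      ss_within = sum(((g - g.mean()) ** 2).sum() for _, g in groups)
      share = ss_between / (ss_between + ss_within)
      print(f"between-process share of CDO variation: {share:.1%}")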

  6. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  7. Consumers' conceptualization of ultra-processed foods.

    PubMed

    Ares, Gastón; Vidal, Leticia; Allegue, Gimena; Giménez, Ana; Bandeira, Elisa; Moratorio, Ximena; Molina, Verónika; Curutchet, María Rosa

    2016-10-01

    Consumption of ultra-processed foods has been associated with low diet quality, obesity and other non-communicable diseases. This situation makes it necessary to develop educational campaigns to discourage consumers from substituting meals based on unprocessed or minimally processed foods by ultra-processed foods. In this context, the aim of the present work was to investigate how consumers conceptualize the term ultra-processed foods and to evaluate if the foods they perceive as ultra-processed are in concordance with the products included in the NOVA classification system. An online study was carried out with 2381 participants. They were asked to explain what they understood by ultra-processed foods and to list foods that can be considered ultra-processed. Responses were analysed using inductive coding. The great majority of the participants was able to provide an explanation of what ultra-processed foods are, which was similar to the definition described in the literature. Most of the participants described ultra-processed foods as highly processed products that usually contain additives and other artificial ingredients, stressing that they have low nutritional quality and are unhealthful. The most relevant products for consumers' conceptualization of the term were in agreement with the NOVA classification system and included processed meats, soft drinks, snacks, burgers, powdered and packaged soups and noodles. However, some of the participants perceived processed foods, culinary ingredients and even some minimally processed foods as ultra-processed. This suggests that in order to accurately convey their message, educational campaigns aimed at discouraging consumers from consuming ultra-processed foods should include a clear definition of the term and describe some of their specific characteristics, such as the type of ingredients included in their formulation and their nutritional composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Rapid communication: Global-local processing affects recognition of distractor emotional faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2011-03-01

    Recent studies have shown links between happy faces and global, distributed attention as well as sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.

  9. Tomographical process monitoring of laser transmission welding with OCT

    NASA Astrophysics Data System (ADS)

    Ackermann, Philippe; Schmitt, Robert

    2017-06-01

    Process control of laser processes still encounters many obstacles. Although these processes are stable, narrow process parameter windows and process deviations have increased the requirements on the process itself and on monitoring devices. Laser transmission welding, as a contactless and locally limited joining technique, is well-established in a variety of demanding production areas. For example, sensitive parts demand a particle-free joining technique which does not affect the inner components. Inline integrated non-destructive optical measurement systems capable of providing non-invasive tomographical images of the transparent material, the weld seam and its surrounding areas with micron resolution would improve the overall process. Obtained measurement data enable qualitative feedback into the system to adapt parameters for a more robust process. Within this paper we present the inline monitoring device based on Fourier-domain optical coherence tomography developed within the European-funded research project "Manunet Weldable". This device, after adaptation to the laser transmission welding process, is optically and mechanically integrated into the existing laser system. The main target lies in inline process control aimed at extracting tomographical geometrical measurement data from the weld seam forming process. Usage of this technology makes offline destructive testing of produced parts obsolete.

  10. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes: it focuses on quality criteria that are important at the given stage of the software life cycle, and it emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes along with the process, it was exemplarily applied to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement yielded an average of 13 person days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  11. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
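
    The gamma-Poisson construction at the heart of this record can be sketched in a few lines. The following is a minimal illustration of the marginalization step only, a finite draw rather than the full NB process or the paper's inference machinery; parameter values are arbitrary.

    ```python
    # Sketch: a gamma-distributed rate mixed with a Poisson draw is
    # marginally negative binomial. This mirrors, at a finite truncation,
    # the gamma process -> Poisson process -> NB marginalization above.
    import numpy as np

    rng = np.random.default_rng(0)

    def nb_via_gamma_poisson(r, p, size, rng):
        # lambda ~ Gamma(shape=r, scale=p/(1-p)); N | lambda ~ Poisson(lambda)
        # => N ~ NegativeBinomial(r, p) marginally (mean r*p/(1-p)).
        lam = rng.gamma(r, p / (1.0 - p), size=size)
        return rng.poisson(lam)

    print(nb_via_gamma_poisson(r=2.0, p=0.5, size=10, rng=rng))
    ```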

  12. [Process management in the hospital pharmacy for the improvement of the patient safety].

    PubMed

    Govindarajan, R; Perelló-Juncá, A; Parès-Marimòn, R M; Serrais-Benavente, J; Ferrandez-Martí, D; Sala-Robinat, R; Camacho-Calvente, A; Campabanal-Prats, C; Solà-Anderiu, I; Sanchez-Caparrós, S; Gonzalez-Estrada, J; Martinez-Olalla, P; Colomer-Palomo, J; Perez-Mañosas, R; Rodríguez-Gallego, D

    2013-01-01

    To define a process management model for a hospital pharmacy in order to measure, analyse and make continuous improvements in patient safety and healthcare quality. In order to implement process management, Igualada Hospital was divided into different processes, one of which was the Hospital Pharmacy. A multidisciplinary management team was given responsibility for each process. For each sub-process one person was identified to be responsible, and a working group was formed under his/her leadership. With the help of each working group, a risk analysis using failure modes and effects analysis (FMEA) was performed, and the corresponding improvement actions were implemented. Sub-process indicators were also identified, and different process management mechanisms were introduced. The first risk analysis with FMEA produced more than thirty preventive actions to improve patient safety. Later, the weekly analysis of errors, as well as the monthly analysis of key process indicators, permitted us to monitor process results and, as each sub-process manager participated in these meetings, also to assume accountability and responsibility, thus consolidating the culture of excellence. The introduction of different process management mechanisms, with the participation of people responsible for each sub-process, introduces a participative management tool for the continuous improvement of patient safety and healthcare quality. Copyright © 2012 SECA. Published by Elsevier Espana. All rights reserved.
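
    As an aside, the FMEA scoring applied here is conventionally a risk priority number (RPN), the product of severity, occurrence and detectability scores. The sketch below uses hypothetical failure modes and scales, since the record does not publish the team's scoring tables.

    ```python
    # Illustrative FMEA scoring: RPN = severity * occurrence * detectability,
    # each on a 1-10 scale. Failure modes and threshold are hypothetical.
    failure_modes = [
        # (description, severity, occurrence, detectability)
        ("wrong dose transcribed", 8, 4, 3),
        ("look-alike drug mix-up", 9, 2, 5),
        ("expired stock dispensed", 6, 3, 2),
    ]

    for name, s, o, d in failure_modes:
        rpn = s * o * d
        flag = "prioritize preventive action" if rpn >= 100 else "monitor"
        print(f"{name}: RPN={rpn} -> {flag}")
    ```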

  13. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested from a central node or user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing time, we have distributed the processing tasks among the nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image with local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), to share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that PIS-DP has the highest processing speed after FIS-DP, and CP has the lowest processing speed. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  14. EEG alpha synchronization is related to top-down processing in convergent and divergent thinking

    PubMed Central

    Benedek, Mathias; Bergner, Sabine; Könen, Tanja; Fink, Andreas; Neubauer, Aljoscha C.

    2011-01-01

    Synchronization of EEG alpha activity has been referred to as being indicative of cortical idling, but according to more recent evidence it has also been associated with active internal processing and creative thinking. The main objective of this study was to investigate to what extent EEG alpha synchronization is related to internal processing demands and to the specific cognitive processes involved in creative thinking. To this end, EEG was measured during a convergent and a divergent thinking task (i.e., a creativity-related task), each of which was processed once under low and once under high internal processing demands. High internal processing demands were established by masking the stimulus (after encoding) and thus preventing further bottom-up processing. Frontal alpha synchronization was observed during convergent and divergent thinking only under exclusive top-down control (high internal processing demands), but not when bottom-up processing was allowed (low internal processing demands). We conclude that frontal alpha synchronization is related to top-down control rather than to specific creativity-related cognitive processes. Frontal alpha synchronization, which has been observed in a variety of different creativity tasks, thus may not reflect a brain state that is specific to creative cognition but can probably be attributed to the high internal processing demands typically involved in creative thinking. PMID:21925520

  15. Kennedy Space Center Payload Processing

    NASA Technical Reports Server (NTRS)

    Lawson, Ronnie; Engler, Tom; Colloredo, Scott; Zide, Alan

    2011-01-01

    This slide presentation reviews the payload processing functions at Kennedy Space Center. It details some of the payloads processed at KSC, the typical processing tasks, the facilities available for processing payloads, and the capabilities and customer services that are available.

  16. Improving the Document Development Process: Integrating Relational Data and Statistical Process Control.

    ERIC Educational Resources Information Center

    Miller, John

    1994-01-01

    Presents an approach to document numbering, document titling, and process measurement which, when used with fundamental techniques of statistical process control, reveals meaningful process-element variation as well as nominal productivity models. (SR)
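
    For readers unfamiliar with the SPC step, a minimal X-bar-style control computation is sketched below; the subgroup data, and the use of the standard deviation of subgroup means rather than the classical R-bar constants, are simplifying assumptions, not the author's method.

    ```python
    # Simplified control-limit computation over per-document productivity
    # subgroups. Data are hypothetical; classical X-bar charts would use
    # R-bar and tabulated A2 constants instead of stdev of subgroup means.
    import statistics

    subgroups = [  # e.g., pages produced per editing cycle, by batch
        [12, 14, 11], [13, 15, 12], [10, 13, 14], [16, 12, 13],
    ]
    xbars = [statistics.mean(g) for g in subgroups]
    center = statistics.mean(xbars)
    spread = statistics.stdev(xbars)

    ucl, lcl = center + 3 * spread, center - 3 * spread
    print(f"center={center:.2f} UCL={ucl:.2f} LCL={lcl:.2f}")
    for i, x in enumerate(xbars):
        if not (lcl <= x <= ucl):
            print(f"subgroup {i}: out of control ({x:.2f})")
    ```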

  17. USE OF INDICATOR ORGANISMS FOR DETERMINING PROCESS EFFECTIVENESS

    EPA Science Inventory

    Wastewaters, process effluents and treatment process residuals contain a variety of microorganisms. Many factors influence their densities as they move through collection systems and process equipment. Biological treatment systems rely on the catabolic processes of such microor...

  18. Food processing by high hydrostatic pressure.

    PubMed

    Yamamoto, Kazutaka

    2017-04-01

    High hydrostatic pressure (HHP) processing, as a nonthermal process, can be used to inactivate microbes while minimizing chemical reactions in food. In this regard, an HHP level of 100 MPa (986.9 atm / 1019.7 kgf/cm2) or more is applied to food. Conventional thermal processing damages food components related to color, flavor, and nutrition via enhanced chemical reactions. The HHP process minimizes this damage and inactivates microbes, enabling the production of high-quality, safe foods. The first commercial HHP-processed foods were launched in 1990 as fruit products such as jams, and other products have since been commercialized: retort rice products (enhanced water impregnation), cooked hams and sausages (shelf life extension), soy sauce with minimized salt (short-time fermentation owing to enhanced enzymatic reactions), and beverages (shelf life extension). The characteristics of HHP food processing are reviewed from the viewpoints of nonthermal processing, history, research and development, physical and biochemical changes, and processing equipment.
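
    The quoted pressure conversions are easy to verify, assuming the standard values 1 atm = 101325 Pa and 1 kgf/cm2 = 98066.5 Pa:

    ```python
    # Unit check of the pressure equivalents quoted above.
    MPa = 1.0e6           # Pa
    atm = 101_325.0       # Pa
    kgf_per_cm2 = 98_066.5  # Pa

    p = 100 * MPa
    print(p / atm, p / kgf_per_cm2)  # ~986.9 atm, ~1019.7 kgf/cm^2
    ```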

  19. [Near infrared spectroscopy based process trajectory technology and its application in monitoring and controlling of traditional Chinese medicine manufacturing process].

    PubMed

    Li, Wen-Long; Qu, Hai-Bin

    2016-10-01

    In this paper, the principle of NIRS (near infrared spectroscopy)-based process trajectory technology is introduced. The main steps of the technique include: ① in-line collection of process spectra for the different techniques; ② unfolding of the 3-D process spectra; ③ determination of the process trajectories and their normal limits; ④ monitoring of new batches with the established MSPC (multivariate statistical process control) models. Applications of the technology to chemical and biological medicines are reviewed briefly. Through a comprehensive introduction of our feasibility research on monitoring traditional Chinese medicine manufacturing processes using NIRS-based multivariate process trajectories, several important problems of practical application that need urgent solutions are identified, and the application prospects of NIRS-based process trajectory technology are discussed. Copyright© by the Chinese Pharmaceutical Association.
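
    Steps ②-④ can be sketched compactly. The illustration below substitutes a plain PCA-score band for the full MSPC statistics (real implementations typically use Hotelling T² and SPE limits), and the spectra are synthetic:

    ```python
    # Sketch of steps 2-4 under simplifying assumptions: batch-wise
    # unfolding of (batch x time x wavelength) spectra, PCA on normal
    # batches, and +/-3-sigma score limits standing in for MSPC limits.
    import numpy as np

    rng = np.random.default_rng(1)
    n_batches, n_time, n_wl = 20, 50, 100
    spectra = rng.normal(size=(n_batches, n_time, n_wl))  # in-line NIR data

    X = spectra.reshape(n_batches, n_time * n_wl)    # step 2: unfold 3-D array
    mu = X.mean(axis=0)
    Xc = X - mu                                      # mean-center
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    scores = Xc @ Vt[:2].T                           # project on 2 PCs

    lo = scores.mean(axis=0) - 3 * scores.std(axis=0)  # step 3: normal limits
    hi = scores.mean(axis=0) + 3 * scores.std(axis=0)

    # step 4: monitor a new batch against the limits
    new_scores = (rng.normal(size=n_time * n_wl) - mu) @ Vt[:2].T
    print("in control:", bool(np.all((lo <= new_scores) & (new_scores <= hi))))
    ```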

  20. Recollection is a continuous process: implications for dual-process theories of recognition memory.

    PubMed

    Mickes, Laura; Wais, Peter E; Wixted, John T

    2009-04-01

    Dual-process theory, which holds that recognition decisions can be based on recollection or familiarity, has long seemed incompatible with signal detection theory, which holds that recognition decisions are based on a singular, continuous memory-strength variable. Formal dual-process models typically regard familiarity as a continuous process (i.e., familiarity comes in degrees), but they construe recollection as a categorical process (i.e., recollection either occurs or does not occur). A continuous process is characterized by a graded relationship between confidence and accuracy, whereas a categorical process is characterized by a binary relationship such that high confidence is associated with high accuracy but all lower degrees of confidence are associated with chance accuracy. Using a source-memory procedure, we found that the relationship between confidence and source-recollection accuracy was graded. Because recollection, like familiarity, is a continuous process, dual-process theory is more compatible with signal detection theory than previously thought.

  1. A qualitative assessment of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
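
    A minimal simulation of the described construction, the product of two Gaussian processes plus a slowly varying Gaussian mean, might look as follows; the AR(1) correlation structure is an assumption for illustration, since the record does not fix the processes' spectra:

    ```python
    # Sketch of x(t) = a(t)*b(t) + m(t): an amplitude-modulated Gaussian
    # process plus a slowly varying random mean. AR(1) filters stand in
    # for the (unspecified) Gaussian process correlations.
    import numpy as np

    rng = np.random.default_rng(2)

    def ar1(n, rho, rng):
        """Zero-mean, unit-variance Gaussian AR(1) series with lag-1 corr rho."""
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
        return x

    n = 10_000
    a = ar1(n, 0.95, rng)                # Gaussian carrier (turbulence)
    b = 1.0 + 0.5 * ar1(n, 0.999, rng)   # slowly varying Gaussian amplitude
    m = 0.3 * ar1(n, 0.9995, rng)        # slowly varying random mean value

    x = a * b + m                        # total random process
    print(x.mean(), x.std())
    ```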

  2. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor

    PubMed Central

    Szałatkiewicz, Jakub

    2016-01-01

    This paper presents the investigation of metals production from artificial ore, consisting of printed circuit board (PCB) waste, processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. In tests, the process consumed 2 kWh/kg of processed waste. The process products are analysed, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass. PMID:28773804

  3. Characterisation and Processing of Some Iron Ores of India

    NASA Astrophysics Data System (ADS)

    Krishna, S. J. G.; Patil, M. R.; Rudrappa, C.; Kumar, S. P.; Ravi, B. P.

    2013-10-01

    Lack of process characterization data for ores, covering granulometry, texture, mineralogy, physical and chemical properties, merits and limitations of the process, and market and local conditions, may mislead the mineral processing entrepreneur. Proper use of process characterization and geotechnical map data results in optimized, sustainable utilization of the resource by processing. A few case studies of process characterization of some Indian iron ores are dealt with. The tentative ascending order of process refractoriness of iron ores is massive hematite/magnetite < marine black iron oxide sands < laminated soft friable siliceous ore fines < massive banded magnetite quartzite < laminated soft friable clayey aluminous ore fines < massive banded hematite quartzite/jasper < massive clayey hydrated iron oxide ore < massive manganese-bearing iron ores < Ti-V bearing magnetite magmatic ore < ferruginous cherty quartzite. Based on diagnostic process characterization, the ores have been classified and generic processes have been adopted for some Indian iron ores.

  4. Measuring health care process quality with software quality measures.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2012-01-01

    Existing quality models focus on specific diseases, clinics or clinical areas. Although they contain structure, process, or output type measures, there is no model which measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of their processes internally or externally. To address these problems, a new model was developed from software quality measures. We adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.

  5. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey

    2003-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, induction type process, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data is presented for selected alloys, as well as metallurgical analysis.

  6. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, induction type process, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data is presented for selected alloys, as well as metallurgical analysis.

  7. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor.

    PubMed

    Szałatkiewicz, Jakub

    2016-08-10

    This paper presents the investigation of metals production from artificial ore, consisting of printed circuit board (PCB) waste, processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. In tests, the process consumed 2 kWh/kg of processed waste. The process products are analysed, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass.

  8. The origins of levels-of-processing effects in a conceptual test: evidence for automatic influences of memory from the process-dissociation procedure.

    PubMed

    Bergerbest, Dafna; Goshen-Gottstein, Yonatan

    2002-12-01

    In three experiments, we explored automatic influences of memory in a conceptual memory task, as affected by a levels-of-processing (LoP) manipulation. We also explored the origins of the LoP effect by examining whether the effect emerged only when participants in the shallow condition truncated the perceptual processing (the lexical-processing hypothesis) or even when the entire word was encoded in this condition (the conceptual-processing hypothesis). Using the process-dissociation procedure and an implicit association-generation task, we found that the deep encoding condition yielded higher estimates of automatic influences than the shallow condition. In support of the conceptual processing hypothesis, the LoP effect was found even when the shallow task did not lead to truncated processing of the lexical units. We suggest that encoding for meaning is a prerequisite for automatic processing on conceptual tests of memory.

  9. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  10. System Engineering Concept Demonstration, Process Model. Volume 3

    DTIC Science & Technology

    1992-12-01

    Process or Process Model: The System Engineering process must be the enactment of the aforementioned definitions. Therefore, a process is an enactment of a... The Prototype Tradeoff Scenario demonstrates six levels of abstraction in the Process Model. The Process Model symbology is explained within the "Help" icon.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eun, H.C.; Cho, Y.Z.; Choi, J.H.

    A regeneration process of LiCl-KCl eutectic waste salt generated from the pyrochemical process of spent nuclear fuel has been studied. This regeneration process is composed of a chemical conversion process and a vacuum distillation process. Through the regeneration process, a high efficiency of renewable salt recovery can be obtained from the waste salt and rare earth nuclides in the waste salt can be separated as oxide or phosphate forms. Thus, the regeneration process can contribute greatly to a reduction of the waste volume and a creation of durable final waste forms. (authors)

  12. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology which utilizes an open system orientation coupled with process reengineering is utilized to overcome operational and patient related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service and timing.

  13. Developing the JPL Engineering Processes

    NASA Technical Reports Server (NTRS)

    Linick, Dave; Briggs, Clark

    2004-01-01

    This paper briefly recounts the recent history of process reengineering at the NASA Jet Propulsion Laboratory, with a focus on the engineering processes. The JPL process structure is described and the process development activities of the past several years are outlined. The main focus of the paper is on the current process structure, the emphasis on the flight project life cycle, the governance approach that led to the Flight Project Practices, and the remaining effort to capture process knowledge at the detail level of the work group.

  14. Water-saving liquid-gas conditioning system

    DOEpatents

    Martin, Christopher; Zhuang, Ye

    2014-01-14

    A method for treating a process gas with a liquid comprises contacting a process gas with a hygroscopic working fluid in order to remove a constituent from the process gas. A system for treating a process gas with a liquid comprises a hygroscopic working fluid comprising a component adapted to absorb or react with a constituent of a process gas, and a liquid-gas contactor for contacting the working fluid and the process gas, wherein the constituent is removed from the process gas within the liquid-gas contactor.

  15. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
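
    The flavor of such a model can be conveyed with a toy iteration loop; the sketch below is neither PATT nor IEEE 12207, and every rate in it is illustrative:

    ```python
    # Toy spiral-process cost model: one waterfall-like pass per iteration,
    # accumulating effort and carrying undetected defects forward.
    # All rates are hypothetical, not PATT's calibrated industry inputs.
    def simulate_spiral(loc_per_iter, iters, productivity_loc_per_hr=10.0,
                        defects_per_kloc=20.0, detect_rate=0.7,
                        rework_hrs_per_defect=2.0):
        effort = 0.0
        open_defects = 0.0
        for _ in range(iters):
            effort += loc_per_iter / productivity_loc_per_hr     # build effort
            open_defects += defects_per_kloc * loc_per_iter / 1000.0
            found = detect_rate * open_defects                   # testing
            effort += found * rework_hrs_per_defect              # rework
            open_defects -= found                                # escapes onward
        return effort, open_defects

    effort, escaped = simulate_spiral(loc_per_iter=5000, iters=4)
    print(f"total effort ~{effort:.0f} h, escaped defects ~{escaped:.1f}")
    ```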

  16. Magnitude processing of symbolic and non-symbolic proportions: an fMRI study.

    PubMed

    Mock, Julia; Huber, Stefan; Bloechle, Johannes; Dietrich, Julia F; Bahnmueller, Julia; Rennig, Johannes; Klein, Elise; Moeller, Korbinian

    2018-05-10

    Recent research indicates that processing proportion magnitude is associated with activation in the intraparietal sulcus. Thus, brain areas associated with the processing of numbers (i.e., absolute magnitude) were activated during processing symbolic fractions as well as non-symbolic proportions. Here, we investigated systematically the cognitive processing of symbolic (e.g., fractions and decimals) and non-symbolic proportions (e.g., dot patterns and pie charts) in a two-stage procedure. First, we investigated relative magnitude-related activations of proportion processing. Second, we evaluated whether symbolic and non-symbolic proportions share common neural substrates. We conducted an fMRI study using magnitude comparison tasks with symbolic and non-symbolic proportions, respectively. As an indicator for magnitude-related processing of proportions, the distance effect was evaluated. A conjunction analysis indicated joint activation of specific occipito-parietal areas including right intraparietal sulcus (IPS) during proportion magnitude processing. More specifically, results indicate that the IPS, which is commonly associated with absolute magnitude processing, is involved in processing relative magnitude information as well, irrespective of symbolic or non-symbolic presentation format. However, we also found distinct activation patterns for the magnitude processing of the different presentation formats. Our findings suggest that processing for the separate presentation formats is not only associated with magnitude manipulations in the IPS, but also increasing demands on executive functions and strategy use associated with frontal brain regions as well as visual attention and encoding in occipital regions. Thus, the magnitude processing of proportions may not exclusively reflect processing of number magnitude information but also rather domain-general processes.

  17. [Alcohol-purification technology and its particle sedimentation process in manufactory of Fufang Kushen injection].

    PubMed

    Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin

    2011-11-01

    Fufang Kushen injection was selected as the model drug in order to optimize its alcohol-purification process, to understand the characteristics of the particle sedimentation process, and to investigate the feasibility of using process analytical technology (PAT) in traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated from matrine, oxymatrine, sophoridine and oxysophoridine) and macrozamin were selected as quality evaluation markers to optimize the alcohol purification of Fufang Kushen injection. Process parameters of the particulates formed during alcohol purification, such as number, density and sedimentation velocity, were also determined to define the sedimentation time and to understand the process well. The optimized purification process is as follows: alcohol is added to the concentrated extract solution (drug material) to a given concentration in two stages, and the alcohol solution containing the drug material is left to sediment, i.e. 60% alcohol deposited for 36 hours, filtered, and then 80%-90% alcohol deposited for 6 hours. The content of total alkaloids decreased only slightly during the depositing process. The average settling times of particles with diameters of 10 and 25 μm were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. The optimized alcohol-purification process retains the marker components better and, compared with the initial process, is time-saving and more economical. The manufacturing quality of TCM injections can be controlled through the process. The PAT pattern must be designed with a thorough understanding of the TCM production process.
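
    The reported settling times are consistent with Stokes' law, under which terminal velocity grows with the square of particle diameter; a quick check (our gloss on the numbers above, not a calculation from the paper):

    ```latex
    % Stokes settling: terminal velocity of a small sphere in a viscous
    % fluid, so the time to settle a fixed height scales as t ~ 1/d^2.
    \[
      v = \frac{(\rho_p-\rho_f)\,g\,d^{2}}{18\,\mu},
      \qquad
      \frac{t_{10\,\mu\mathrm{m}}}{t_{25\,\mu\mathrm{m}}}
      \approx \left(\frac{25}{10}\right)^{2} = 6.25
      \quad\text{vs.}\quad
      \frac{157.7\ \mathrm{h}}{25.2\ \mathrm{h}} \approx 6.26 .
    \]
    ```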

  18. Application of volume-retarded osmosis and low-pressure membrane hybrid process for water reclamation.

    PubMed

    Im, Sung-Ju; Choi, Jungwon; Lee, Jung-Gil; Jeong, Sanghyun; Jang, Am

    2018-03-01

    A new concept of a volume-retarded osmosis and low-pressure membrane (VRO-LPM) hybrid process was developed and evaluated for the first time in this study. Commercially available forward osmosis (FO) and ultrafiltration (UF) membranes were employed in the VRO-LPM hybrid process to overcome the energy limitations of draw solution (DS) regeneration and permeate production in the FO process. To evaluate its feasibility as a water reclamation process, and to optimize the operational conditions, cross-flow FO and dead-end mode UF processes were evaluated individually. For the FO process, a DS concentration of 0.15 g/mL of polysulfonate styrene (PSS) was determined to be optimal, having a high flux with a low reverse salt flux. A UF membrane with a molecular weight cut-off of 1 kDa was chosen for its high PSS rejection in the LPM process. As a single process, UF (LPM) exhibited a higher flux than FO, but this could be controlled by adjusting the effective membrane areas of the FO and UF membranes in the VRO-LPM system. The VRO-LPM hybrid process only required a circulation pump for the FO process. This led to a decrease in the specific energy consumption of the VRO-LPM process for potable water production, to a level similar to that of the single FO process. Therefore, the newly developed VRO-LPM hybrid process, with an appropriate DS selection, can be used as an energy-efficient water production method and can outperform conventional water reclamation processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Quality control process improvement of flexible printed circuit board by FMEA

    NASA Astrophysics Data System (ADS)

    Krasaephol, Siwaporn; Chutima, Parames

    2018-02-01

    This research focuses on improving the quality control process for Flexible Printed Circuit Boards (FPCB), centred on model 7-Flex, by using the Failure Mode and Effect Analysis (FMEA) method to decrease the proportion of defective finished goods found at the final inspection process. Because a number of defective units are found only at final inspection, defective products may escape to customers. The problem stems from a quality control process that is not efficient enough to filter defective products in-process, since there is no In-Process Quality Control (IPQC) or sampling inspection in the process. Therefore, the quality control process has to be improved by setting inspection gates and IPQCs at critical processes in order to filter out defective products. The critical processes are analysed by the FMEA method. IPQC is used for detecting defective products and reducing the chance of defective finished goods escaping to customers. Reducing the proportion of defective finished goods also decreases scrap cost, because finished goods incur a higher scrap cost than work in-process. Moreover, defective products found during the process can reveal abnormal process steps, so engineers and operators can solve the problems in a timely manner. The improved quality control was implemented on 7-Flex production lines from July 2017 to September 2017. The results show decreases in the average proportion of defective finished goods and in the average Customer Manufacturers Lot Reject Rate (%LRR of CMs) equal to 4.5% and 4.1%, respectively. Furthermore, the cost saving of this quality control process equals 100K Baht.
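
    One way to see why an IPQC gate reduces escapes: if a lot's true defective rate is p and the gate inspects n units, accepting only defect-free samples, the lot slips through with probability (1-p)^n. A tiny illustration with hypothetical numbers, not the paper's sampling plan:

    ```python
    # Probability that a lot with defective rate p passes a zero-acceptance
    # sampling gate of size n. n=0 models the pre-improvement state (no gate).
    def escape_probability(p, n):
        return (1.0 - p) ** n

    for n in (0, 5, 20, 50):
        print(f"n={n:>2}: escape probability at p=5% -> "
              f"{escape_probability(0.05, n):.3f}")
    ```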

  20. Formulating poultry processing sanitizers from alkaline salts of fatty acids

    USDA-ARS?s Scientific Manuscript database

    Though some poultry processing operations remove microorganisms from carcasses; other processing operations cause cross-contamination that spreads microorganisms between carcasses, processing water, and processing equipment. One method used by commercial poultry processors to reduce microbial contam...

  1. Fabrication Process for Cantilever Beam Micromechanical Switches

    DTIC Science & Technology

    1993-08-01

    Contents fragments: Beam Design; Chemistry and Materials Used in Cantilever Beam Process; Photomask levels and composite. The beam fabrication process incorporates four different photomasking levels with 62 processing steps.

  2. Reports of planetary geology program, 1983

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1984-01-01

    Several areas of the Planetary Geology Program were addressed including outer solar system satellites, asteroids, comets, Venus, cratering processes and landform development, volcanic processes, aeolian processes, fluvial processes, periglacial and permafrost processes, geomorphology, remote sensing, tectonics and stratigraphy, and mapping.

  3. Cognitive Processes in Discourse Comprehension: Passive Processes, Reader-Initiated Processes, and Evolving Mental Representations

    ERIC Educational Resources Information Center

    van den Broek, Paul; Helder, Anne

    2017-01-01

    As readers move through a text, they engage in various types of processes that, if all goes well, result in a mental representation that captures their interpretation of the text. With each new text segment the reader engages in passive and, at times, reader-initiated processes. These processes are strongly influenced by the readers'…

  4. The Use of Knowledge Based Decision Support Systems in Reengineering Selected Processes in the U. S. Marine Corps

    DTIC Science & Technology

    2001-09-01

    With the increasing adoption of technology by businesses in hopes of achieving a measurable benefit in terms of process efficiency and effectiveness, business process reengineering (BPR) is becoming increasingly important. Contents fragments: KOPER-LITE; How Might the Military Benefit from Process Reengineering Efforts.

  5. 30 CFR 206.181 - How do I establish processing costs for dual accounting purposes when I do not process the gas?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    30 CFR § 206.181 (Mineral Resources, Processing Allowances): How do I establish processing costs for dual accounting purposes when I do not process the gas? Where accounting for comparison (dual accounting) is required for gas production...

  6. Conceptual models of information processing

    NASA Technical Reports Server (NTRS)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  7. Industrial application of semantic process mining

    NASA Astrophysics Data System (ADS)

    Espen Ingvaldsen, Jon; Atle Gulla, Jon

    2012-05-01

    Process mining relates to the extraction of non-trivial and useful information from information system event logs. It is a new research discipline that has evolved significantly since the early work on idealistic process logs. Over the last years, process mining prototypes have incorporated elements from semantics and data mining and targeted visualisation techniques that are more user-friendly to business experts and process owners. In this article, we present a framework for evaluating different aspects of enterprise process flows and address practical challenges of state-of-the-art industrial process mining. We also explore the inherent strengths of the technology for more efficient process optimisation.

  8. Reliability and performance of a system-on-a-chip by predictive wear-out based activation of functional components

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-01

    A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.

  9. Fuzzy control of burnout of multilayer ceramic actuators

    NASA Astrophysics Data System (ADS)

    Ling, Alice V.; Voss, David; Christodoulou, Leo

    1996-08-01

    To improve the yield and repeatability of the burnout process of multilayer ceramic actuators (MCAs), an intelligent processing of materials (IPM)-based control system has been developed for the manufacture of MCAs. IPM involves the active (ultimately adaptive) control of a material process using empirical or analytical models and in situ sensing of critical process states (part features and process parameters) to modify the processing conditions in real time to achieve predefined product goals. Thus, the three enabling technologies for the IPM burnout control system are process modeling, in situ sensing and intelligent control. This paper presents the design of an IPM-based control strategy for the burnout process of MCAs.

  10. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
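
    The patented mechanism operates on top-level page-table entries; as a rough analogue of the end result (two processes addressing the same physical pages, by a different mechanism), standard POSIX-style shared memory can be sketched in Python:

    ```python
    # Rough analogue only: shared memory from Python's standard library,
    # letting two processes read/write the same physical pages. This is
    # not the patent's page-table cross-mapping technique.
    from multiprocessing import Process, shared_memory

    def worker(name):
        shm = shared_memory.SharedMemory(name=name)  # attach by name
        shm.buf[0] = 42                              # write into shared pages
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=4096)
        p = Process(target=worker, args=(shm.name,))
        p.start(); p.join()
        print(shm.buf[0])   # 42, written by the other process
        shm.close(); shm.unlink()
    ```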

  11. Biotechnology in Food Production and Processing

    NASA Astrophysics Data System (ADS)

    Knorr, Dietrich; Sinskey, Anthony J.

    1985-09-01

    The food processing industry is the oldest and largest industry using biotechnological processes. Further development of food products and processes based on biotechnology depends upon the improvement of existing processes, such as fermentation, immobilized biocatalyst technology, and production of additives and processing aids, as well as the development of new opportunities for food biotechnology. Improvements are needed in the characterization, safety, and quality control of food materials, in processing methods, in waste conversion and utilization processes, and in currently used food microorganism and tissue culture systems. Also needed are fundamental studies of the structure-function relationship of food materials and of the cell physiology and biochemistry of raw materials.

  12. What is a good public participation process? Five perspectives from the public.

    PubMed

    Webler, T; Tuler, S; Krueger, R

    2001-03-01

    It is now widely accepted that members of the public should be involved in environmental decision-making. This has inspired many to search for principles that characterize good public participation processes. In this paper we report on a study that identifies discourses about what defines a good process. Our case study was a forest planning process in northern New England and New York. We employed Q methodology to learn how participants characterize a good process differently, by selecting, defining, and privileging different principles. Five discourses, or perspectives, about good process emerged from our study. One perspective emphasizes that a good process acquires and maintains popular legitimacy. A second sees a good process as one that facilitates an ideological discussion. A third focuses on the fairness of the process. A fourth perspective conceptualizes participatory processes as a power struggle--in this instance a power play between local land-owning interests and outsiders. A fifth perspective highlights the need for leadership and compromise. Dramatic differences among these views suggest an important challenge for those responsible for designing and carrying out public participation processes. Conflicts may emerge about process designs because people disagree about what is good in specific contexts.

  13. Alternating event processes during lifetimes: population dynamics and statistical inference.

    PubMed

    Shinohara, Russell T; Sun, Yifei; Wang, Mei-Cheng

    2018-01-01

    In the literature studying recurrent event data, a large amount of work has been focused on univariate recurrent event processes where the occurrence of each event is treated as a single point in time. There are many applications, however, in which univariate recurrent events are insufficient to characterize the feature of the process because patients experience nontrivial durations associated with each event. This results in an alternating event process where the disease status of a patient alternates between exacerbations and remissions. In this paper, we consider the dynamics of a chronic disease and its associated exacerbation-remission process over two time scales: calendar time and time-since-onset. In particular, over calendar time, we explore population dynamics and the relationship between incidence, prevalence and duration for such alternating event processes. We provide nonparametric estimation techniques for characteristic quantities of the process. In some settings, exacerbation processes are observed from an onset time until death; to account for the relationship between the survival and alternating event processes, nonparametric approaches are developed for estimating the exacerbation process over the lifetime. By understanding the population dynamics and within-process structure, the paper provides a new and general way to study alternating event processes.
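
    A small simulation makes the alternating structure concrete; exponential exacerbation and remission durations are an assumption for illustration only, not the paper's nonparametric method:

    ```python
    # Simulate alternating exacerbation/remission processes and estimate
    # point prevalence (fraction of subjects in exacerbation at time t).
    import random

    random.seed(0)

    def in_exacerbation_at(t, mean_ex=1.0, mean_rem=3.0):
        """Walk one subject's alternating process from onset to time t."""
        clock, state = 0.0, False          # start in remission
        while True:
            mean = mean_ex if state else mean_rem
            clock += random.expovariate(1.0 / mean)
            if clock > t:
                return state               # still in current state at t
            state = not state              # switch exacerbation <-> remission

    n = 20_000
    prev = sum(in_exacerbation_at(10.0) for _ in range(n)) / n
    print(f"estimated prevalence ~{prev:.3f}  (long-run 1/(1+3) = 0.25)")
    ```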

  14. Process mining in oncology using the MIMIC-III dataset

    NASA Astrophysics Data System (ADS)

    Prima Kurniati, Angelina; Hall, Geoff; Hogg, David; Johnson, Owen

    2018-03-01

    Process mining is a data analytics approach to discover and analyse process models based on the real activities captured in information systems. There is a growing body of literature on process mining in healthcare, including oncology, the study of cancer. In earlier work we found 37 peer-reviewed papers describing process mining research in oncology with a regular complaint being the limited availability and accessibility of datasets with suitable information for process mining. Publicly available datasets are one option and this paper describes the potential to use MIMIC-III, for process mining in oncology. MIMIC-III is a large open access dataset of de-identified patient records. There are 134 publications listed as using the MIMIC dataset, but none of them have used process mining. The MIMIC-III dataset has 16 event tables which are potentially useful for process mining and this paper demonstrates the opportunities to use MIMIC-III for process mining in oncology. Our research applied the L* lifecycle method to provide a worked example showing how process mining can be used to analyse cancer pathways. The results and data quality limitations are discussed along with opportunities for further work and reflection on the value of MIMIC-III for reproducible process mining research.
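
    The core of most process-mining discovery is the directly-follows relation; the sketch below computes it from a toy event log shaped like MIMIC-III's event tables (case id, activity, timestamp). Column contents are illustrative, and this is the generic first step rather than the L* method itself:

    ```python
    # Minimal directly-follows-graph (DFG) discovery from an event log.
    from collections import Counter

    events = [  # (case_id, activity, timestamp) -- hypothetical oncology log
        (1, "admission", 0), (1, "chemotherapy", 2), (1, "discharge", 5),
        (2, "admission", 1), (2, "surgery", 3), (2, "chemotherapy", 4),
        (2, "discharge", 8),
    ]

    traces = {}
    for case, act, ts in sorted(events, key=lambda e: (e[0], e[2])):
        traces.setdefault(case, []).append(act)   # order activities per case

    dfg = Counter()
    for acts in traces.values():
        for a, b in zip(acts, acts[1:]):
            dfg[(a, b)] += 1                      # count directly-follows pairs

    for (a, b), c in dfg.most_common():
        print(f"{a} -> {b}: {c}")
    ```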

  15. Research on the technique of large-aperture off-axis parabolic surface processing using tri-station machine and its applicability.

    PubMed

    Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun

    2015-09-01

    In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machine processing center whose three station devices together provide vectored feed motion of up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes, while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed. At the same time, a simple and efficient zero-calibration method for processing is proposed. To validate the processing model, using our processing center, we processed an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak to valley (PV) and root mean square (RMS) of the mirror converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, with the RMS reduced by 43%. The validity and high accuracy of the model are thereby demonstrated.

  16. Patterning of Indium Tin Oxide Films

    NASA Technical Reports Server (NTRS)

    Immer, Christopher

    2008-01-01

    A relatively rapid, economical process has been devised for patterning a thin film of indium tin oxide (ITO) that has been deposited on a polyester film. ITO is a transparent, electrically conductive substance made from a mixture of indium oxide and tin oxide that is commonly used in touch panels, liquid-crystal and plasma display devices, gas sensors, and solar photovoltaic panels. In a typical application, the ITO film must be patterned to form electrodes, current collectors, and the like. Heretofore it has been common practice to pattern an ITO film by means of either a laser ablation process or a photolithography/etching process. The laser ablation process includes the use of expensive equipment to precisely position and focus a laser. The photolithography/etching process is time-consuming. The present process is a variant of the direct toner process an inexpensive but often highly effective process for patterning conductors for printed circuits. Relative to a conventional photolithography/ etching process, this process is simpler, takes less time, and is less expensive. This process involves equipment that costs less than $500 (at 2005 prices) and enables patterning of an ITO film in a process time of less than about a half hour.

  17. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance radically. The main objective of this paper is to understand whether the process produces within specification at the soft drinks processing unit, one of the premier brands marketed in India. A few selected critical parameters in soft drinks processing, the concentration of gas volume, the concentration of brix, and the torque of the crock, were considered for this study. Relevant statistical parameters, short-term capability and long-term capability, were assessed from a process capability indices perspective. For the assessment we used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. Our analysis suggested reasons for variation in the process, which were validated using ANOVA; we also applied the Taguchi cost function and estimated the associated waste in monetary terms, which the organization can use to improve process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters toward achieving zero rejection.
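
    For reference, the standard short-term capability indices such a study draws on are Cp = (USL - LSL)/(6σ) and Cpk = min(USL - μ, μ - LSL)/(3σ); a worked example with hypothetical specification limits and data, not the plant's figures:

    ```python
    # Worked Cp/Cpk computation on hypothetical in-spec measurements.
    import statistics

    usl, lsl = 4.2, 3.8            # spec limits, e.g. gas volume concentration
    samples = [4.01, 3.98, 4.05, 3.95, 4.02, 4.00, 3.97, 4.03]

    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)

    cp = (usl - lsl) / (6 * sigma)                 # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # accounts for centering
    print(f"Cp={cp:.2f} Cpk={cpk:.2f}  (>= 1.33 is a common benchmark)")
    ```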

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dafler, J.R.; Sinnott, J.; Novil, M.

    The first phase of a study to identify candidate processes and products suitable for future exploitation using high-temperature solar energy is presented. This phase has been principally analytical, consisting of techno-economic studies, thermodynamic assessments of chemical reactions and processes, and the determination of market potentials for major chemical commodities that use significant amounts of fossil resources today. The objective was to identify energy-intensive processes that would be suitable for the production of chemicals and fuels using solar energy process heat. Of particular importance was the comparison of relative costs and energy requirements for the selected solar product versus costs for the product derived from conventional processing. The assessment methodology used a systems analytical approach to identify processes and products having the greatest potential for solar energy-thermal processing. This approach was used to establish the basis for work to be carried out in subsequent phases of development. It has been the intent of the program to divide the analysis and process identification into the following three distinct areas: (1) process selection, (2) process evaluation, and (3) ranking of processes. Four conventional processes were selected for assessment, namely methanol synthesis, styrene monomer production, vinyl chloride monomer production, and terephthalic acid production.

  19. An Application of X-Ray Fluorescence as Process Analytical Technology (PAT) to Monitor Particle Coating Processes.

    PubMed

    Nakano, Yoshio; Katakuse, Yoshimitsu; Azechi, Yasutaka

    2018-06-01

    An attempt was made to apply X-Ray Fluorescence (XRF) analysis to evaluate a small-particle coating process as a Process Analytical Technology (PAT). The XRF analysis was used to monitor the coating level in the small-particle coating process in an at-line manner. A small-particle coating process usually consists of multiple coating steps. This study was conducted on simple coated particles prepared by a first coating of a model compound (DL-methionine) and a second coating of talc on spherical microcrystalline cellulose cores. Particles with a two-layered coating are sufficient to demonstrate the small-particle coating process. From the results, it was found that the XRF signals played different roles: signals from the first coating (layering) and the second coating (mask coating) could demonstrate the extent of coating through different mechanisms. Furthermore, the coating of particles of different sizes was also investigated to evaluate the size effect in these coating processes. From these results, it was concluded that XRF can be used as a PAT tool in monitoring particle coating processes and can become a powerful tool in pharmaceutical manufacturing.

  20. Single-Run Single-Mask Inductively-Coupled-Plasma Reactive-Ion-Etching Process for Fabricating Suspended High-Aspect-Ratio Microstructures

    NASA Astrophysics Data System (ADS)

    Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao

    2006-01-01

    In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.

  1. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  2. Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Aoyama, Kazuhiro

    This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.
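
    To make the idea of a discrete quality state change model concrete, the toy sketch below represents assembly operations as transformations of a quality state and enumerates operation sequences to pick the one yielding the best final state. The operation names and their effects are invented assumptions, not the paper's network model.

        from functools import reduce
        from itertools import permutations

        # Toy quality state: attribute flag -> bool. Each operation maps a
        # state to an updated state (a discrete state change).
        def fasten(state):
            s = dict(state); s["torque_ok"] = True; return s

        def align(state):
            s = dict(state); s["aligned"] = True; return s

        def inspect(state):
            # Inspection certifies quality only if alignment happened first,
            # so the operation sequence matters.
            s = dict(state); s["certified"] = s.get("aligned", False); return s

        OPS = {"fasten": fasten, "align": align, "inspect": inspect}

        def run(sequence):
            """Simulate a candidate operation sequence from an empty state."""
            return reduce(lambda st, op: OPS[op](st), sequence, {})

        def quality(state):
            return sum(1 for flag in state.values() if flag)

        best = max(permutations(OPS), key=lambda seq: quality(run(seq)))
        print("Preferred sequence:", " -> ".join(best))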

  3. Effect of simulated mechanical recycling processes on the structure and properties of poly(lactic acid).

    PubMed

    Beltrán, F R; Lorenzo, V; Acosta, J; de la Orden, M U; Martínez Urreaga, J

    2018-06-15

    The aim of this work is to study the effects of different simulated mechanical recycling processes on the structure and properties of PLA. A commercial grade of PLA was melt compounded and compression molded, then subjected to two different recycling processes. The first recycling process consisted of accelerated ageing and a second melt processing step, while the other recycling process included accelerated ageing, a demanding washing process, and a second melt processing step. The intrinsic viscosity measurements indicate that both recycling processes produce degradation of the PLA, which is more pronounced in the sample subjected to the washing process. DSC results suggest an increase in the mobility of the polymer chains in the recycled materials; however, the degree of crystallinity of the PLA seems unchanged. The optical, mechanical, and gas barrier properties of PLA do not seem to be greatly affected by the degradation suffered during the different recycling processes. These results suggest that, despite the degradation of PLA, the impact of the different simulated mechanical recycling processes on the final properties is limited. Thus, the potential use of recycled PLA in packaging applications is not jeopardized. Copyright © 2017 Elsevier Ltd. All rights reserved.
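
    The record uses intrinsic viscosity as its indicator of chain scission. One common way to interpret such measurements, sketched below under assumed constants, is the Mark-Houwink relation [eta] = K * Mv^a; the constants quoted are literature values often used for PLA in chloroform, and the viscosity values themselves are invented for illustration.

        # Mark-Houwink: [eta] = K * Mv**a  =>  Mv = ([eta] / K) ** (1 / a)
        K = 5.45e-4   # dL/g, literature value often quoted for PLA in chloroform
        a = 0.73

        def viscosity_avg_mw(intrinsic_viscosity_dl_g):
            """Viscosity-average molecular weight from intrinsic viscosity."""
            return (intrinsic_viscosity_dl_g / K) ** (1 / a)

        # Hypothetical intrinsic viscosities tracking progressive degradation
        for label, iv in [("virgin", 1.50), ("recycled", 1.20),
                          ("recycled + washed", 1.05)]:
            print(f"{label:>18}: Mv ~ {viscosity_avg_mw(iv):,.0f} g/mol")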

  4. Consumption of ultra-processed foods predicts diet quality in Canada.

    PubMed

    Moubarac, Jean-Claude; Batal, M; Louzada, M L; Martinez Steele, E; Monteiro, C A

    2017-01-01

    This study describes food consumption patterns in Canada according to type of food processing using the Nova classification, and investigates the association between consumption of ultra-processed foods and the nutrient profile of the diet. Dietary intakes of 33,694 individuals aged 2 years and above from the 2004 Canadian Community Health Survey were analyzed. Foods and drinks were classified using Nova into unprocessed or minimally processed foods, processed culinary ingredients, processed foods, and ultra-processed foods. Average consumption (total daily energy intake) and relative consumption (% of total energy intake) provided by each of the food groups were calculated. Consumption of ultra-processed foods according to sex, age, education, residential location, and relative family revenue was assessed. The mean nutrient content of ultra-processed foods and of non-ultra-processed foods was compared, and the average nutrient content of the overall diet across quintiles of the dietary share of ultra-processed foods was measured. In 2004, 48% of calories consumed by Canadians came from ultra-processed foods. Consumption of such foods was high amongst all socioeconomic groups, and particularly in children and adolescents. As a group, ultra-processed foods were grossly nutritionally inferior to non-ultra-processed foods. After adjusting for covariates, a significant positive relationship was found between the dietary share of ultra-processed foods and the dietary content of carbohydrates, free sugars, and total and saturated fats, as well as energy density, while an inverse relationship was observed with the dietary content of protein, fiber, vitamins A, C, D, B6 and B12, niacin, thiamine, riboflavin, as well as zinc, iron, magnesium, calcium, phosphorus and potassium. Lowering the dietary share of ultra-processed foods and raising the consumption of hand-made meals from unprocessed or minimally processed foods would substantially improve the diet quality of Canadians. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Development of Statistical Process Control Methodology for an Environmentally Compliant Surface Cleaning Process in a Bonding Laboratory

    NASA Technical Reports Server (NTRS)

    Hutchens, Dale E.; Doan, Patrick A.; Boothe, Richard E.

    1997-01-01

    Bonding labs at both MSFC and the northern Utah production plant prepare bond test specimens which simulate or witness the production of NASA's Reusable Solid Rocket Motor (RSRM). The current process for preparing the bonding surfaces employs 1,1,1-trichloroethane vapor degreasing, which simulates the current RSRM process. Government regulations (e.g., the 1990 Amendments to the Clean Air Act) have mandated a production phase-out of a number of ozone depleting compounds (ODC), including 1,1,1-trichloroethane. In order to comply with these regulations, the RSRM Program is qualifying a spray-in-air (SIA) precision cleaning process using Brulin 1990, an aqueous blend of surfactants. Accordingly, surface preparation prior to bonding of process simulation test specimens must reflect the new production cleaning process. The Bonding Lab Statistical Process Control (SPC) program monitors the progress of the lab and its capabilities, as well as certifies the bonding technicians, by periodically preparing D6AC steel tensile adhesion panels with EA-913NA epoxy adhesive using a standardized process. SPC methods are then used to ensure the process is statistically in control, thus producing reliable data for bonding studies, and to identify any problems which might develop. Since the specimen cleaning process is being changed, new SPC limits must be established. This report summarizes side-by-side testing of D6AC steel tensile adhesion witness panels and tapered double cantilevered beams (TDCBs) using both the current baseline vapor degreasing process and a lab-scale spray-in-air process. A Proceco 26-inch Typhoon dishwasher cleaned both the tensile adhesion witness panels and the TDCBs in a process which simulates the new production process. The tests were performed six times during 1995; subsequent statistical analysis of the data established new upper control limits (UCL) and lower control limits (LCL). The data also demonstrated that the new process was equivalent to the vapor degreasing process.
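
    Re-baselining SPC limits after a process change, as described above, reduces to estimating a center line and 3-sigma limits from the new process data. The sketch below shows one simple individuals-style computation; the tensile adhesion values are invented, and a production X-bar chart would normally estimate sigma from subgroup ranges or moving ranges rather than the overall standard deviation used here.

        import numpy as np

        def three_sigma_limits(values):
            """Center line and 3-sigma control limits from periodic data."""
            center = np.mean(values)
            sigma = np.std(values, ddof=1)
            return center - 3 * sigma, center, center + 3 * sigma

        # Hypothetical mean tensile adhesion strengths (psi), six 1995 runs
        runs_psi = [5210.0, 5175.0, 5240.0, 5198.0, 5225.0, 5190.0]
        lcl, center, ucl = three_sigma_limits(runs_psi)
        print(f"LCL = {lcl:.0f} psi, center = {center:.0f} psi, UCL = {ucl:.0f} psi")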

  6. Evaluation of stabilization techniques for ion implant processing

    NASA Astrophysics Data System (ADS)

    Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Narcy, Mark E.; Livesay, William R.

    1999-06-01

    With the integration of high current ion implant processing into volume CMOS manufacturing, the need for photoresist stabilization to achieve a stable ion implant process is critical. This study compares electron beam stabilization, a non-thermal process, with more traditional thermal stabilization techniques such as hot plate baking and vacuum oven processing. The electron beam processing is carried out in a flood exposure system with no active heating of the wafer. These stabilization techniques are applied to typical ion implant processes that might be found in a CMOS production process flow. The stabilization processes are applied to a 1.1 micrometers thick PFI-38A i-line photoresist film prior to ion implant processing. Post stabilization CD variation is detailed with respect to wall slope and feature integrity. SEM photographs detail the effects of the stabilization technique on photoresist features. The thermal stability of the photoresist is shown for different levels of stabilization and post stabilization thermal cycling. Thermal flow stability of the photoresist is detailed via SEM photographs. A significant improvement in thermal stability is achieved with the electron beam process, such that photoresist features are stable to temperatures in excess of 200 degrees C. Ion implant processing parameters are evaluated and compared for the different stabilization methods. Ion implant system end-station chamber pressure is detailed as a function of ion implant process and stabilization condition. The ion implant process conditions are detailed for varying factors such as ion current, energy, and total dose. A reduction in the ion implant systems end-station chamber pressure is achieved with the electron beam stabilization process over the other techniques considered. This reduction in end-station chamber pressure is shown to provide a reduction in total process time for a given ion implant dose. Improvements in the ion implant process are detailed across several combinations of current and energy.

  7. The prevalence of medial coronoid process disease is high in lame large breed dogs and quantitative radiographic assessments contribute to the diagnosis.

    PubMed

    Mostafa, Ayman; Nolte, Ingo; Wefstaedt, Patrick

    2018-06-05

    Medial coronoid process disease is a leading cause of thoracic limb lameness in dogs. Computed tomography and arthroscopy are superior to radiography for diagnosing medial coronoid process disease; however, radiography remains the most available diagnostic imaging modality in veterinary practice. The objectives of this retrospective observational study were to describe the prevalence of medial coronoid process disease in lame large breed dogs and to apply a novel method for quantifying the radiographic changes associated with the medial coronoid process and subtrochlear-ulnar region in Labrador and Golden Retrievers with confirmed medial coronoid process disease. Purebred Labrador and Golden Retrievers (n = 143, 206 elbows) without and with confirmed medial coronoid process disease were included. The prevalence of medial coronoid process disease in lame large breed dogs was calculated. Mediolateral and craniocaudal radiographs of elbows were analyzed to assess the medial coronoid process length and morphology, and subtrochlear-ulnar width. Mean grayscale value was calculated for radial and subtrochlear-ulnar zones. The prevalence of medial coronoid process disease was 20.8%. Labrador and Golden Retrievers were the most affected purebred dogs (29.6%). Elbows with confirmed medial coronoid process disease had a short (P < 0.0001) and deformed (∼95%) medial coronoid process, with associated medial coronoid process osteophytosis (7.5%). Subtrochlear-ulnar sclerosis was evidenced in ∼96% of diseased elbows, with a significant increase (P < 0.0001) in subtrochlear-ulnar width and standardized grayscale value. Radial grayscale value did not differ between groups. Periarticular osteophytosis was identified in 51.4% of elbows with medial coronoid process disease. Medial coronoid process length and morphology, and subtrochlear-ulnar width and standardized grayscale value, varied significantly in dogs with confirmed medial coronoid process disease compared to controls. Findings indicated that medial coronoid process disease has a high prevalence in lame large breed dogs and that quantitative radiographic assessments can contribute to the diagnosis. © 2018 American College of Veterinary Radiology.

  8. The role of rational and experiential processing in influencing the framing effect.

    PubMed

    Stark, Emily; Baldwin, Austin S; Hertel, Andrew W; Rothman, Alexander J

    2017-01-01

    Research on individual differences and the framing effect has focused primarily on how variability in rational processing influences choice. However, we propose that measuring only rational processing presents an incomplete picture of how participants are responding to framed options, as orthogonal individual differences in experiential processing might be relevant. In two studies, we utilize the Rational Experiential Inventory, which captures individual differences in rational and experiential processing, to investigate how both processing types influence decisions. Our results show that differences in experiential processing, but not rational processing, moderated the effect of frame on choice. We suggest that future research should more closely examine the influence of experiential processing on making decisions, to gain a broader understanding of the conditions that contribute to the framing effect.

  9. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. The report deals with the development and investigation of the potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Then the data retrieval and processing were automated using a program written in C which is able to read the telemetry data and produce plots of the time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.

  10. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.

  11. Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing.

    PubMed

    Schmithorst, Vincent J

    2005-04-01

    Music perception is a quite complex cognitive task, involving the perception and integration of various elements including melody, harmony, pitch, rhythm, and timbre. A preliminary functional MRI investigation of music perception was performed, using a simplified passive listening task. Group independent component analysis (ICA) was used to separate out various components involved in music processing, as the hemodynamic responses are not known a priori. Various components consistent with auditory processing, expressive language, syntactic processing, and visual association were found. The results are discussed in light of various hypotheses regarding modularity of music processing and its overlap with language processing. The results suggest that, while some networks overlap with ones used for language processing, music processing may involve its own domain-specific processing subsystems.
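
    The core decomposition step in such a study can be illustrated with a toy blind source separation example. The sketch below uses FastICA from scikit-learn to unmix synthetic signals; actual group ICA on fMRI involves much more (preprocessing, dimensionality reduction, group-level aggregation), so this only conveys the underlying idea.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)
        # Three synthetic "sources" standing in for independent networks
        sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)),
                        rng.standard_normal(2000)]
        mixing = rng.random((3, 3))
        observed = sources @ mixing.T  # mixed signals, e.g. voxel time courses

        ica = FastICA(n_components=3, random_state=0)
        recovered = ica.fit_transform(observed)  # estimated components
        print(recovered.shape)  # (2000, 3)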

  12. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process set up correctly. The critical dimension of components decreases following Moore's law; at the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes cannot decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the waferfab to automatically adjust and tune wafer processing based on a large amount of process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to put the spatial variability under control, in real time, through our SPC (Statistical Process Control) system. This paper outlines the architecture of an integrated process control system for shape monitoring in 3D, implemented in a waferfab.

  13. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  14. Laser displacement sensor to monitor the layup process of composite laminate production

    NASA Astrophysics Data System (ADS)

    Miesen, Nick; Groves, Roger M.; Sinke, Jos; Benedictus, Rinze

    2013-04-01

    Several types of flaw can occur during the layup process of prepreg composite laminates. Quality control after the production process checks the end product by testing the specimens for flaws which are introduced during the layup or curing process; however, by then these flaws are already irreversibly embedded in the laminate. This paper demonstrates the use of a laser displacement sensor technique applied during the layup process of prepreg laminates for in-situ detection of typical flaws that can occur during composite production. An incorrect number of layers and fibre wrinkling are the dominant flaws during layup. These and other dominant flaws have been modeled to determine the requirements for in-situ monitoring during the layup process of prepreg laminates.
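
    A wrinkle in a displacement profile appears as a localized out-of-plane deviation, so a simple in-situ check is peak detection on the scanned profile. The sketch below simulates one scan line and flags wrinkle-like bumps; the profile, thresholds, and dimensions are illustrative assumptions, not the paper's data.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 100.0, 1000)  # scan position, mm
        profile = 0.02 * rng.standard_normal(x.size)  # sensor noise, mm
        profile += 0.3 * np.exp(-((x - 60.0) ** 2) / 4.0)  # simulated wrinkle

        # Flag deviations taller than 0.1 mm and wider than ~0.5 mm
        peaks, _ = find_peaks(profile, height=0.1, width=5)
        for p in peaks:
            print(f"Possible wrinkle near {x[p]:.1f} mm, height {profile[p]:.2f} mm")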

  15. Levels of integration in cognitive control and sequence processing in the prefrontal cortex.

    PubMed

    Bahlmann, Jörg; Korb, Franziska M; Gratton, Caterina; Friederici, Angela D

    2012-01-01

    Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex.

  16. Levels of Integration in Cognitive Control and Sequence Processing in the Prefrontal Cortex

    PubMed Central

    Bahlmann, Jörg; Korb, Franziska M.; Gratton, Caterina; Friederici, Angela D.

    2012-01-01

    Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex. PMID:22952762

  17. Flow chemistry using milli- and microstructured reactors-from conventional to novel process windows.

    PubMed

    Illg, Tobias; Löb, Patrick; Hessel, Volker

    2010-06-01

    The term Novel Process Windows unites different methods of improving existing processes by applying unconventional and harsh process conditions, such as process routes at much elevated pressure or temperature, or processing in a thermal runaway regime, to achieve a significant impact on process performance. This paper reviews part of IMM's work, in particular the applicability of the above-mentioned Novel Process Windows to selected chemical reactions. First, general characteristics of microreactors, such as excellent mass and heat transfer and improved mixing quality, are discussed. Different types of reactions are presented in which the use of microstructured devices led to increased process performance through Novel Process Windows. These examples were chosen to demonstrate how chemical reactions can benefit from milli- and microstructured devices and how existing protocols can be shifted toward process conditions hitherto not attainable in standard laboratory equipment. Milli- and microstructured reactors also offer advantages in other areas, for example high-throughput screening of catalysts and better control of size distribution in particle synthesis through improved mixing. The chemical industry is under continuous improvement, so much research is devoted to synthesizing high-value chemicals, optimizing existing processes with respect to process safety and energy consumption, and searching for new routes to produce such chemicals. Leitmotifs of such undertakings are often sustainable development(1) and Green Chemistry(2).

  18. Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.

    PubMed

    Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall

    2014-10-01

    Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether the fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning, but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors.
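
    The fast/slow decomposition described here is commonly written as a two-state state-space model in which each state has its own retention factor A and learning rate B. The sketch below simulates adaptation to a constant perturbation under that standard formulation; the parameter values are illustrative, not the study's fitted values.

        # Two-state adaptation model: x <- A * x + B * error for each state.
        A_fast, B_fast = 0.60, 0.25   # adapts and decays quickly
        A_slow, B_slow = 0.99, 0.03   # adapts and decays gradually

        n_trials, perturbation = 200, 1.0
        x_fast = x_slow = 0.0
        for _ in range(n_trials):
            error = perturbation - (x_fast + x_slow)  # experienced error
            x_fast = A_fast * x_fast + B_fast * error
            x_slow = A_slow * x_slow + B_slow * error

        print(f"Total adaptation after {n_trials} trials: {x_fast + x_slow:.3f}")
        # Reduced retention in the slow process, as reported for the older
        # group, corresponds to lowering A_slow.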

  19. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  20. OCLC-MARC Tape Processing: A Functional Analysis.

    ERIC Educational Resources Information Center

    Miller, Bruce Cummings

    1984-01-01

    Analyzes structure of, and data in, the OCLC-MARC record in the form delivered via OCLC's Tape Subscription Service, and outlines important processing functions involved: "unreadable tapes," duplicate records and deduping, match processing, choice processing, locations processing, "automatic" and "input" stamps,…

  1. 7 Processes that Enable NASA Software Engineering Technologies: Value-Added Process Engineering

    NASA Technical Reports Server (NTRS)

    Housch, Helen; Godfrey, Sally

    2011-01-01

    The presentation reviews Agency process requirements and the purpose, benefits, and experiences of seven software engineering processes. The processes include: product integration, configuration management, verification, software assurance, measurement and analysis, requirements management, and planning and monitoring.

  2. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    PubMed

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

    The purpose of this article is to recommend a risk-based strategy for determining clearance testing requirements for the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. First, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. Clearance testing of GRAS reagents can be eliminated because of their historically safe use and the capability of the process to remove them. An estimated safety margin (Se) value, the ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity of testing the PSC reagents at an early development stage. The Se value is calculated from two risk factors: the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario is assumed to estimate the Me value: the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for a PSC reagent if its Se value is ≥1; otherwise clearance testing is needed. Finally, the point at which the process reagent is introduced into the process is also considered in determining the necessity of clearance testing. How to use the measured safety margin as a criterion for determining PSC reagent testing at the process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors. The risk factors included in the strategy are (i) the safety profile of the reagents, (ii) the starting amount of the process reagents used in the manufacturing process, (iii) the maximum dose of the product, and (iv) the point of introduction of the process reagents into the process. The implementation of the risk-based strategy can eliminate clearance testing for approximately 90% of the process reagents used in the manufacturing processes. This science-based strategy allows us to ensure patient safety and meet regulatory agency expectations throughout the product development life cycle. © PDA, Inc. 2015.
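
    The decision rule at the heart of the strategy is a single ratio. The sketch below encodes it directly; the exposure limit and worst-case amount are hypothetical numbers chosen only to show the Se >= 1 test.

        def clearance_testing_required(exposure_limit, worst_case_amount):
            """Safety margin Se = Le / Me, where Me assumes the reagent is
            fully co-purified with the product (worst case). Testing is
            needed only when Se < 1."""
            se = exposure_limit / worst_case_amount
            return se, se < 1

        # Hypothetical PSC reagent: limit 500 ug/dose, worst case 120 ug/dose
        se, required = clearance_testing_required(500.0, 120.0)
        print(f"Se = {se:.1f} -> clearance testing required: {required}")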

  3. Titania nanotube powders obtained by rapid breakdown anodization in perchloric acid electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Saima, E-mail: saima.ali@aalto.fi; Hannula, Simo-Pekka

    Titania nanotube (TNT) powders are prepared by rapid breakdown anodization (RBA) in a 0.1 M perchloric acid (HClO₄) solution (Process 1), and in an ethylene glycol (EG) mixture with HClO₄ and water (Process 2). A study of the as-prepared and calcined TNT powders obtained by both processes is carried out to evaluate and compare the morphology, crystal structure, specific surface area, and composition of the nanotubes. Longer TNTs are formed in Process 1, while comparatively larger pore diameter and wall thickness are obtained for the nanotubes prepared by Process 2. The TNTs obtained by Process 1 are converted to nanorods at 350 °C, while the nanotubes obtained by Process 2 preserve their tubular morphology up to 350 °C. In addition, the TNTs prepared in the aqueous electrolyte have a crystalline structure, whereas the TNTs obtained by Process 2 are amorphous. Samples calcined up to 450 °C show XRD peaks from the anatase phase, while the rutile phase appears at 550 °C for the TNTs prepared by both processes. The Raman spectra also show clear anatase peaks for all samples except the as-prepared sample obtained by Process 2, thus supporting the XRD findings. FTIR spectra reveal the presence of O-H groups in the structure for the TNTs obtained by both processes; however, the presence is less prominent for annealed samples. Additionally, TNTs obtained by Process 2 have a carbonaceous impurity in the structure, attributed to the electrolyte used in that process. While a negligible weight loss is typical for TNTs prepared from aqueous electrolytes, a weight loss of 38.6% in the temperature range of 25–600 °C is found for TNTs prepared in the EG electrolyte (Process 2). A large specific surface area of 179.2 m² g⁻¹ is obtained for TNTs prepared by Process 1, whereas Process 2 produces nanotubes with a lower specific surface area. The difference appears to correspond to the dimensions of the nanotubes obtained by the two processes. Graphical abstract: Titania nanotube powders prepared by Process 1 and Process 2 have different crystal structure and specific surface area. Highlights: • Titania nanotube (TNT) powder is prepared in a low-water organic electrolyte. • Characterization of TNT powders prepared from aqueous and organic electrolytes. • TNTs prepared by Process 1 are crystalline with a higher specific surface area. • TNTs obtained by Process 2 have carbonaceous impurities in the structure.

  4. A processing approach to the working memory/long-term memory distinction: evidence from the levels-of-processing span task.

    PubMed

    Rose, Nathan S; Craik, Fergus I M

    2012-07-01

    Recent theories suggest that performance on working memory (WM) tasks involves retrieval from long-term memory (LTM). To examine whether WM and LTM tests have common principles, Craik and Tulving's (1975) levels-of-processing paradigm, which is known to affect LTM, was administered as a WM task: Participants made uppercase, rhyme, or category-membership judgments about words, and immediate recall of the words was required after every 3 or 8 processing judgments. In Experiment 1, immediate recall did not demonstrate a levels-of-processing effect, but a subsequent LTM test (delayed recognition) of the same words did show a benefit of deeper processing. Experiment 2 showed that surprise immediate recall of 8-item lists did demonstrate a levels-of-processing effect, however. A processing account of the conditions in which levels-of-processing effects are and are not found in WM tasks was advanced, suggesting that the extent to which levels-of-processing effects are similar between WM and LTM tests largely depends on the amount of disruption to active maintenance processes. 2012 APA, all rights reserved

  5. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    PubMed Central

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output. PMID:26217288

  6. Process Monitoring Evaluation and Implementation for the Wood Abrasive Machining Process

    PubMed Central

    Saloni, Daniel E.; Lemaster, Richard L.; Jackson, Steven D.

    2010-01-01

    Wood processing industries have continuously developed and improved technologies and processes to transform wood to obtain better final product quality and thus increase profits. Abrasive machining is one of the most important of these processes and therefore merits special attention and study. The objective of this work was to evaluate and demonstrate a process monitoring system for use in the abrasive machining of wood and wood based products. The system developed increases the life of the belt by detecting (using process monitoring sensors) and removing (by cleaning) the abrasive loading during the machining process. This study focused on abrasive belt machining processes and included substantial background work, which provided a solid base for understanding the behavior of the abrasive, and the different ways that the abrasive machining process can be monitored. In addition, the background research showed that abrasive belts can effectively be cleaned by the appropriate cleaning technique. The process monitoring system developed included acoustic emission sensors which tended to be sensitive to belt wear, as well as platen vibration, but not loading, and optical sensors which were sensitive to abrasive loading. PMID:22163477

  7. Adaptive memory: determining the proximate mechanisms responsible for the memorial advantages of survival processing.

    PubMed

    Burns, Daniel J; Burns, Sarah A; Hwang, Ana J

    2011-01-01

    J. S. Nairne, S. R. Thompson, and J. N. S. Pandeirada (2007) suggested that our memory systems may have evolved to help us remember fitness-relevant information and showed that retention of words rated for their relevance to survival is superior to that of words encoded under other deep processing conditions. The authors present 4 experiments that uncover the proximate mechanisms likely responsible. The authors obtained a recall advantage for survival processing compared with conditions that promoted only item-specific processing or only relational processing. This effect was eliminated when control conditions encouraged both item-specific and relational processing. Data from separate measures of item-specific and relational processing generally were consistent with the view that the memorial advantage for survival processing results from the encoding of both types of processing. Although the present study suggests the proximate mechanisms for the effect, the authors argue that survival processing may be fundamentally different from other memory phenomena for which item-specific and relational processing differences have been implicated. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  8. Implementation of quality by design toward processing of food products.

    PubMed

    Rathore, Anurag S; Kapoor, Gautam

    2017-05-28

    Quality by design (QbD) is a systematic approach that begins with predefined objectives and emphasizes product and process understanding and process control. It is an approach based on principles of sound science and quality risk management. As the food processing industry continues to embrace the idea of in-line, online, and/or at-line sensors and real-time characterization for process monitoring and control, the existing gaps in our ability to monitor the multiple parameters and variables associated with the manufacturing process will be alleviated over time. Investments made in developing tools and approaches that facilitate high-throughput analytical and process development, process analytical technology, design of experiments, risk analysis, knowledge management, and enhancement of process/product understanding will pave the way for operational and economic benefits later in the commercialization process and across other product pipelines. This article aims to achieve two major objectives: first, to review the progress made in recent years on QbD implementation in the processing of food products, and second, to present a case study that illustrates the benefits of such QbD implementation.

  9. Energy saving processes for nitrogen removal in organic wastewater from food processing industries in Thailand.

    PubMed

    Johansen, N H; Suksawad, N; Balslev, P

    2004-01-01

    Nitrogen removal from organic wastewater is becoming a requirement in developed communities. The use of nitrite as an intermediate in the treatment of wastewater has been largely ignored, but it is actually a relevant energy-saving process compared to conventional nitrification/denitrification using nitrate as the intermediate. Full-scale and pilot-scale results using this process are presented. The process needs some additional process considerations and process control to be utilized. Especially under tropical conditions the nitritation process will occur easily, and it must be expected that many activated sludge (AS) treatment plants in the food industry already produce NO2-N. This uncontrolled nitrogen conversion can be the main cause of sludge bulking problems. It is expected that sludge bulking problems can in many cases be solved simply by changing the process control in order to run a more consistent nitritation. Theoretically this process decreases the oxygen consumption for oxidation by 25%, and the use of a carbon source for the reduction is decreased by 40% compared to the conventional process.
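
    The quoted savings follow directly from standard stoichiometric estimates: nitrifying only to nitrite avoids the nitrite-to-nitrate oxidation step, and denitrifying from nitrite needs less electron donor than denitrifying from nitrate. The arithmetic below uses common textbook values (not figures from this paper) to reproduce the approximately 25% and 40% reductions.

        # Approximate stoichiometry (common textbook estimates):
        O2_TO_NITRITE = 3.43          # g O2 per g N, NH4+ -> NO2-
        O2_NITRITE_TO_NITRATE = 1.14  # g O2 per g N, NO2- -> NO3-
        COD_FROM_NITRATE = 2.86       # g COD per g N denitrified from NO3-
        COD_FROM_NITRITE = 1.71       # g COD per g N denitrified from NO2-

        o2_saving = 1 - O2_TO_NITRITE / (O2_TO_NITRITE + O2_NITRITE_TO_NITRATE)
        cod_saving = 1 - COD_FROM_NITRITE / COD_FROM_NITRATE
        print(f"Oxygen saving via nitrite route:        {o2_saving:.0%}")
        print(f"Carbon-source saving via nitrite route: {cod_saving:.0%}")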

  10. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Lin, Wang

    2018-01-01

    The Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on a fluidized bed; it can also be regarded as an efficient new reactor type positioned between the activated sludge method and the biofilm method. This work studies the application of an ozone + MBBR process in refinery wastewater treatment, the key point being the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is used to treat the COD of the reverse osmosis concentrate discharged from the refinery wastewater treatment plant. The experimental results show that the average COD removal rate is 46.0-67.3% when treating reverse osmosis concentrate with the ozone + MBBR process, and the effluent can meet the relevant standard requirements. Compared with the traditional process, the ozone + MBBR process is more flexible. The investment for this process lies mainly in the ozone generator, blower, and similar equipment; these items are relatively inexpensive, and their cost can be offset by the savings over the investment required for a traditional activated sludge process. At the same time, the ozone + MBBR process has clear advantages in water quality, stability, and other respects.

  11. Models of recognition: A review of arguments in favor of a dual-process account

    PubMed Central

    DIANA, RACHEL A.; REDER, LYNNE M.; ARNDT, JASON; PARK, HEEKYEONG

    2008-01-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models. PMID:16724763

  12. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing.

    PubMed

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word's perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output.

  13. Techno-economic analysis of biocatalytic processes for production of alkene epoxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borole, Abhijeet P

    2007-01-01

    A techno-economic analysis of two different bioprocesses was conducted, one for the conversion of propylene to propylene oxide (PO) and the other for the conversion of styrene to styrene epoxide (SO). The first process was a lipase-mediated chemo-enzymatic reaction, whereas the second was a one-step enzymatic process using chloroperoxidase. The PO produced through the chemo-enzymatic process is a racemic product, whereas the latter process (based on chloroperoxidase) produces an enantio-pure product. The former process thus falls under the category of a high-volume commodity chemical (PO), whereas the latter is a low-volume, high-value product (SO). A simulation of the process was conducted using the bioprocess engineering software SuperPro Designer v6.0 (Intelligen, Inc., Scotch Plains, NJ) to determine the economic feasibility of the process. The purpose of the exercise was to compare biocatalytic processes with existing chemical processes for the production of alkene epoxides. The results show that further improvements in biocatalyst stability are needed to make these bioprocesses competitive with chemical processes.

  14. The representation of conceptual knowledge: visual, auditory, and olfactory imagery compared with semantic processing.

    PubMed

    Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti

    2014-05-01

    Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that semantic processing was faster than visual and auditory imaginative processing, whereas no differentiation was possible between semantic processing and olfactory imaginative processing. Experiment 2 revealed that only visual imaginative processing could be differentiated from semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. On the contrary, no differentiation is possible within the olfactory domain. Results are discussed in the frame of the imagery debate.

  15. Working memory load eliminates the survival processing effect.

    PubMed

    Kroneisen, Meike; Rummel, Jan; Erdfelder, Edgar

    2014-01-01

    In a series of experiments, Nairne, Thompson, and Pandeirada (2007) demonstrated that words judged for their relevance to a survival scenario are remembered better than words judged for a scenario not relevant on a survival dimension. They explained this survival-processing effect by arguing that nature "tuned" our memory systems to process and remember fitness-relevant information. Kroneisen and Erdfelder (2011) proposed that it may not be survival processing per se that facilitates recall but the richness and distinctiveness with which information is encoded. To further test this account, we investigated how the survival processing effect is affected by cognitive load. If the survival processing effect is due to automatic processes or, alternatively, if survival processing is routinely prioritized in dual-task contexts, we would expect this effect to persist under cognitive load conditions. If the effect relies on cognitively demanding processes like richness and distinctiveness of encoding, however, the survival processing benefit should be hampered by increased cognitive load during encoding. Results were in line with the latter prediction, that is, the survival processing effect vanished under dual-task conditions.

  16. E-learning process maturity level: a conceptual framework

    NASA Astrophysics Data System (ADS)

    Rahmah, A.; Santoso, H. B.; Hasibuan, Z. A.

    2018-03-01

    ICT advancement is a sure thing, with an impact that influences many domains, including learning in both formal and informal situations. It leads to a new mindset: we should not only utilize the available ICT to support the learning process, but also improve it gradually, involving many factors. This phenomenon is called e-learning process evolution. Accordingly, this study explores the maturity level concept to provide a direction for gradual improvement and progression monitoring for the individual e-learning process. An extensive literature review, observation, and construct formation were conducted to develop a conceptual framework for the e-learning process maturity level. The conceptual framework consists of the learner, the e-learning process, continuous improvement, the evolution of the e-learning process, technology, and learning objectives, with the evolution of the e-learning process depicted as the current versus expected conditions of the e-learning process maturity level. The study concludes that the e-learning process maturity level conceptual framework may guide the evolution roadmap for the e-learning process, accelerate the evolution, and decrease the negative impact of ICT. The conceptual framework will be verified and tested in a future study.

  17. Heat input and accumulation for ultrashort pulse processing with high average power

    NASA Astrophysics Data System (ADS)

    Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart

    2018-05-01

    Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even for picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average powers and power densities, this heat is usually not relevant for the processing results and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rate is discussed, as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
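
    A first-order feel for pulse-to-pulse heat accumulation can be had by superposing point-source coolings: each pulse leaves a residual-heat fraction of its energy, and each contribution at the spot centre decays as the instantaneous point source on a semi-infinite body. The sketch below sums these contributions; the material constants are roughly steel-like and the residual-heat fraction is an assumption, so the output is indicative only, not a result from this paper.

        import numpy as np

        rho_c = 3.6e6    # volumetric heat capacity, J/(m^3 K)
        kappa = 1.2e-5   # thermal diffusivity, m^2/s
        eta = 0.1        # residual-heat fraction per pulse (assumption)
        E_pulse = 10e-6  # pulse energy, J
        f_rep = 400e3    # repetition rate, Hz

        n_pulses = 2000
        age = np.arange(1, n_pulses + 1) / f_rep  # age of each earlier pulse, s
        # dT(t) = 2 * eta * E / (rho_c * (4 * pi * kappa * t)**1.5) at r = 0
        delta_T = np.sum(2 * eta * E_pulse /
                         (rho_c * (4 * np.pi * kappa * age) ** 1.5))
        print(f"Accumulated temperature rise after {n_pulses} pulses: {delta_T:.0f} K")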

  18. Defining and reconstructing clinical processes based on IHE and BPMN 2.0.

    PubMed

    Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef

    2011-01-01

    This paper describes the current status and results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze and evaluate clinical processes and, further, to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At its heart is BPMN, a modeling notation and specification language that allows the definition and execution of clinical processes. The system provides functionality to define clinical core processes independent of any particular healthcare information system and to execute those processes in a workflow engine. Furthermore, clinical processes are reconstructed by evaluating an IHE audit log database, which records patient movements within a healthcare facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes, as well as in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and to enhancing cost control and patient care quality.
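
    As a rough illustration of the reconstruction step, the sketch below groups hypothetical audit-log entries (patient id, timestamp, station) into per-patient paths that could then be compared against the defined BPMN process; the record layout is an assumption, not the paper's actual schema.

        from collections import defaultdict

        # hypothetical IHE-style audit entries: (patient_id, timestamp, station)
        audit_log = [
            ("pat-001", "2011-03-01T08:02", "admission"),
            ("pat-001", "2011-03-01T09:15", "radiology"),
            ("pat-002", "2011-03-01T08:30", "admission"),
            ("pat-001", "2011-03-01T11:40", "surgery"),
        ]

        paths = defaultdict(list)
        for patient, ts, station in sorted(audit_log, key=lambda e: (e[0], e[1])):
            paths[patient].append(station)

        for patient, path in paths.items():
            print(patient, "->", " -> ".join(path))  # actual path, to diff against the model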

  19. Process qualification and testing of LENS deposited AY1E0125 D-bottle brackets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwood, Clinton J.; Smugeresky, John E.; Jew, Michael

    2006-11-01

    The LENS Qualification team had the goal of performing a process qualification for the Laser Engineered Net Shaping™ (LENS®) process. Process qualification requires that a part be selected for process demonstration. The AY1E0125 D-Bottle Bracket from the W80-3 was selected for this work. The repeatability of the LENS process was baselined to determine process parameters. Six D-Bottle brackets were deposited using LENS, machined to final dimensions, and tested in comparison to conventionally processed brackets. The tests, taken from ES1E0003, included a mass analysis and structural dynamic testing, including free-free and assembly-level modal tests and Haversine shock tests. The LENS brackets performed with very similar characteristics to the conventionally processed brackets. Based on the results of the testing, it was concluded that the performance of the brackets made them eligible for parallel path testing in subsystem-level tests. The testing results and process rigor qualified the LENS process as detailed in EER200638525A.

  20. Sustainability assessment of shielded metal arc welding (SMAW) process

    NASA Astrophysics Data System (ADS)

    Alkahla, Ibrahim; Pervaiz, Salman

    2017-09-01

    Shielded metal arc welding (SMAW) is one of the most commonly employed material joining processes, utilized in various industrial sectors such as marine, shipbuilding, automotive, aerospace, construction and petrochemicals. Increasing pressure on the manufacturing sector demands that welding processes be sustainable in nature. The SMAW process incorporates several types of input and output streams, and its sustainability concerns are linked to these streams: electrical energy requirements, input material consumption, slag formation, fume emission, and hazardous working conditions affecting human health and occupational safety. To enhance the environmental performance of SMAW welding, there is a need to characterize the sustainability of the SMAW process under the broad framework of sustainability. Most of the available literature focuses on the technical and economic aspects of the welding process; the environmental and social aspects are rarely addressed. This study reviews the SMAW process with respect to the triple bottom line (economic, environmental and social) approach to sustainability. Finally, the study concludes with recommendations towards achieving an economical and sustainable SMAW welding process.

  1. Decontamination and disposal of PCB wastes.

    PubMed Central

    Johnston, L E

    1985-01-01

    Decontamination and disposal processes for PCB wastes are reviewed. Processes are classed as incineration, chemical reaction or decontamination. Incineration technologies are not limited to rigorous high-temperature treatment but include those incorporating innovations in the use of oxidant, heat transfer and residue recycling. Chemical processes include the sodium processes, radiant energy processes and low-temperature oxidations. Typical processing rates and associated costs are provided where possible. PMID:3928363

  2. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials... inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation... process, the speed at which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  3. Definition and documentation of engineering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, G.W.

    1997-11-01

    This tutorial is an extract of a two-day workshop developed under the auspices of the Quality Engineering Department at Sandia National Laboratories. The presentation starts with basic definitions and addresses why processes should be defined and documented. It covers three primary topics: (1) process considerations and rationale, (2) an approach to defining and documenting engineering processes, and (3) an IDEF0 model of the process for defining engineering processes.

  4. Method for enhanced atomization of liquids

    DOEpatents

    Thompson, Richard E.; White, Jerome R.

    1993-01-01

    In a process for atomizing a slurry or liquid process stream in which a slurry or liquid is passed through a nozzle to provide a primary atomized process stream, an improvement which comprises subjecting the liquid or slurry process stream to microwave energy as the liquid or slurry process stream exits the nozzle, wherein sufficient microwave heating is provided to flash vaporize the primary atomized process stream.

  5. Rethinking a Negative Event: The Affective Impact of Ruminative versus Imagery-Based Processing of Aversive Autobiographical Memories.

    PubMed

    Slofstra, Christien; Eisma, Maarten C; Holmes, Emily A; Bockting, Claudi L H; Nauta, Maaike H

    2017-01-01

    Ruminative (abstract verbal) processing during recall of aversive autobiographical memories may serve to dampen their short-term affective impact. Experimental studies indeed demonstrate that verbal processing of non-autobiographical material and positive autobiographical memories evokes weaker affective responses than imagery-based processing. In the current study, we hypothesized that abstract verbal or concrete verbal processing of an aversive autobiographical memory would result in weaker affective responses than imagery-based processing. The affective impact of abstract verbal versus concrete verbal versus imagery-based processing during recall of an aversive autobiographical memory was investigated in a non-clinical sample (n = 99) using both an observational and an experimental design. Observationally, it was examined whether spontaneous use of processing modes (both state and trait measures) was associated with the impact of aversive autobiographical memory recall on negative and positive affect. Experimentally, the causal relation between processing modes and affective impact was investigated by manipulating the processing mode during retrieval of the same aversive autobiographical memory. The main findings were that higher levels of trait (but not state) measures of both ruminative and imagery-based processing and depressive symptomatology were positively correlated with higher levels of negative affective impact in the observational part of the study. In the experimental part, no main effect of processing modes on the affective impact of autobiographical memories was found. However, a significant moderating effect of depressive symptomatology was found: only for individuals with low levels of depressive symptomatology did concrete verbal (but not abstract verbal) processing of the aversive autobiographical memory result in weaker affective responses compared to imagery-based processing. These results cast doubt on the hypothesis that ruminative processing of aversive autobiographical memories serves to avoid the negative emotions evoked by such memories. Furthermore, the findings suggest that depressive symptomatology is associated with the spontaneous use and the affective impact of processing modes during recall of aversive autobiographical memories. Clinical studies are needed that examine the role of processing modes during aversive autobiographical memory recall in depression, including the potential effectiveness of targeting processing modes in therapy.

  6. Active pharmaceutical ingredient (API) production involving continuous processes--a process system engineering (PSE)-assisted design framework.

    PubMed

    Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V

    2012-10-01

    A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergistic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided throughout the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case study includes reaction steps typically used by the pharmaceutical industry featuring different characteristic reaction times, as well as L-L separation and distillation-based solvent exchange steps, and thus constitutes a good example of how the design framework can be useful to efficiently design novel or already existing API manufacturing processes taking advantage of continuous processes. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  8. Comparison of property between two Viking Seismic tapes

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Yamada, R.

    2016-12-01

    The restoration of the seismometer data recorded onboard Viking Lander 2 is still continuing. Originally, the data were processed and archived separately at MIT and UTIG, and each dataset is accessible via the Internet today. The file formats used to store the data differ, but both remain readable thanks to continuous investigation. Although most of the two datasets are highly consistent, some inconsistencies remain. Understanding the differences requires knowledge of spacecraft data archiving and off-line processing, because the differences were caused by the off-line processing. The data processing of spacecraft often requires merging and sorting of raw data: merge processing is normally performed to eliminate duplicated data, and sort processing is performed to fix the data order. UTIG does not appear to have performed these merge and sort steps, so the UTIG-processed data retain duplicates. The MIT-processed data underwent merge and sort processing, but the raw data sometimes include wrong time tags, which cannot be fixed strictly even after sorting. Also, the MIT-processed data have enough documentation to understand the metadata, while the UTIG data have only a brief instruction. The MIT and UTIG data are therefore treated as complementary, and a better dataset can be established using both of them. In this presentation, we show the method used to build a better dataset of the Viking Lander 2 seismic data.
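
    The merge and sort steps described above amount to de-duplication followed by time ordering; the minimal sketch below illustrates them on hypothetical (time_tag, payload) records, not the actual Viking telemetry format. Note that a record carrying a wrong time tag still sorts to the wrong place, which is why the order cannot be fixed strictly after the fact.

        # hypothetical raw records: (time_tag, payload), with duplicates and out of order
        records = [(102, "b"), (100, "a"), (102, "b"), (101, "c"), (101, "c")]

        merged = set(records)        # merge step: remove exact duplicates
        restored = sorted(merged)    # sort step: restore time order by time tag
        print(restored)              # [(100, 'a'), (101, 'c'), (102, 'b')]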

  9. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Process monitoring and visualization solutions for hot-melt extrusion: a review.

    PubMed

    Saerens, Lien; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas

    2014-02-01

    Hot-melt extrusion (HME) is applied as a continuous pharmaceutical manufacturing process for the production of a variety of dosage forms and formulations. To ensure the continuity of this process, the quality of the extrudates must be assessed continuously during manufacturing. The objective of this review is to provide an overview and evaluation of the available process analytical techniques which can be applied in hot-melt extrusion. Pharmaceutical extruders are equipped with traditional (univariate) process monitoring tools, observing barrel and die temperatures, throughput, screw speed, torque, drive amperage, melt pressure and melt temperature. The relevance of several spectroscopic process analytical techniques for monitoring and control of pharmaceutical HME has been explored recently. Nevertheless, many other sensors visualizing HME and measuring diverse critical product and process parameters with potential use in pharmaceutical extrusion are available, and have been thoroughly studied in polymer extrusion. The implementation of process analytical tools in HME serves two purposes: (1) improving process understanding by monitoring and visualizing the material behaviour and (2) monitoring and analysing critical product and process parameters for process control, making it possible to maintain a desired process state and guarantee the quality of the end product. This review is the first to provide an evaluation of the process analytical tools applied for pharmaceutical HME monitoring and control, and discusses techniques used in polymer extrusion that have potential for monitoring and control of pharmaceutical HME. © 2013 Royal Pharmaceutical Society.

  11. Process for improving metal production in steelmaking processes

    DOEpatents

    Pal, Uday B.; Gazula, Gopala K. M.; Hasham, Ali

    1996-01-01

    A process and apparatus for improving metal production in ironmaking and steelmaking processes is disclosed. The use of an inert metallic conductor in the slag containing crucible and the addition of a transition metal oxide to the slag are the disclosed process improvements.

  12. Materials processing in space: Early experiments

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.; Herring, H. W.

    1980-01-01

    The characteristics of the space environment were reviewed. Potential applications of space processing are discussed and include metallurgical processing, and processing of semiconductor materials. The behavior of fluid in low gravity is described. The evolution of apparatus for materials processing in space was reviewed.

  13. Abhijit Dutta | NREL

    Science.gov Websites

    Techno-economic analysis; process model development for existing and conceptual processes; detailed heat integration; economic analysis of integrated processes; integration of process simulation learnings into control. Publications include "Conceptual Process Design and Techno-Economic Assessment of Ex Situ Catalytic Fast Pyrolysis of Biomass: A

  14. 7 CFR 52.806 - Color.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1... cherries that vary markedly from this color due to oxidation, improper processing, or other causes, or that... to oxidation, improper processing, or other causes, or that are undercolored, does not exceed the...

  15. 7 CFR 52.806 - Color.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1... cherries that vary markedly from this color due to oxidation, improper processing, or other causes, or that... to oxidation, improper processing, or other causes, or that are undercolored, does not exceed the...

  16. Meat Processing.

    ERIC Educational Resources Information Center

    Legacy, Jim; And Others

    This publication provides an introduction to meat processing for adult students in vocational and technical education programs. Organized in four chapters, the booklet provides a brief overview of the meat processing industry and the techniques of meat processing and butchering. The first chapter introduces the meat processing industry and…

  17. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  18. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  19. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  20. 40 CFR 60.2558 - What if a chemical recovery unit is not listed in § 60.2555(n)?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  1. Integrated decontamination process for metals

    DOEpatents

    Snyder, Thomas S.; Whitlow, Graham A.

    1991-01-01

    An integrated process for decontamination of metals, particularly metals that are used in the nuclear energy industry contaminated with radioactive material. The process combines the processes of electrorefining and melt refining to purify metals that can be decontaminated using either electrorefining or melt refining processes.

  2. Case Studies in Continuous Process Improvement

    NASA Technical Reports Server (NTRS)

    Mehta, A.

    1997-01-01

    This study focuses on improving the SMT assembly process in a low-volume, high-reliability environment with emphasis on fine pitch and BGA packages. Before a process improvement is carried out, it is important to evaluate where the process stands in terms of process capability.

  3. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed, and implemented in software, a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  4. International Best Practices for Pre-Processing and Co-Processing Municipal Solid Waste and Sewage Sludge in the Cement Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasanbeigi, Ali; Lu, Hongyou; Williams, Christopher

    The purpose of this report is to describe international best practices for pre-processing and co-processing of MSW and sewage sludge in cement plants, for the benefit of countries that wish to develop co-processing capacity. The report is divided into three main sections: Section 2 describes the fundamentals of co-processing, Section 3 describes exemplary international regulatory and institutional frameworks for co-processing, and Section 4 describes international best practices related to the technological aspects of co-processing.

  5. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

    Thermochemical processes which lead to the production of hydrogen and oxygen from water, without the consumption of any other material, have a number of advantages compared with other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.

  6. Voyager image processing at the Image Processing Laboratory

    NASA Astrophysics Data System (ADS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-09-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  7. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  8. A novel process control method for a TT-300 E-Beam/X-Ray system

    NASA Astrophysics Data System (ADS)

    Mittendorfer, Josef; Gallnböck-Wagner, Bernhard

    2018-02-01

    This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and, consequently, to perform fine-grained and holistic process performance monitoring. Process performance is documented in process control charts for the analysis of individual runs as well as for historic trending of runs of specific process categories over a specified time range.
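
    As a rough illustration of such a control chart, the sketch below derives conventional 3-sigma limits from a baseline run of hypothetical per-unit parametric doses and flags later units that fall outside them; the dose values and the limit convention are assumptions, not Mediscan's actual criteria.

        import statistics

        # establish control limits from a baseline run of per-unit doses [kGy] (assumed)
        baseline = [25.1, 25.4, 24.9, 25.2, 25.6, 25.0, 25.3, 24.8]
        mean = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # conventional 3-sigma limits

        # monitor subsequent production units against the limits
        for unit, dose in enumerate([25.2, 24.9, 26.9], start=1):
            flag = "ok" if lcl <= dose <= ucl else "OUT OF CONTROL"
            print(f"unit {unit}: {dose:.1f} kGy  {flag}")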

  9. A minimally processed dietary pattern is associated with lower odds of metabolic syndrome among Lebanese adults.

    PubMed

    Nasreddine, Lara; Tamim, Hani; Itani, Leila; Nasrallah, Mona P; Isma'eel, Hussain; Nakhoul, Nancy F; Abou-Rizk, Joana; Naja, Farah

    2018-01-01

    To (i) estimate the consumption of minimally processed, processed and ultra-processed foods in a sample of Lebanese adults; (ii) explore patterns of intakes of these food groups; and (iii) investigate the association of the derived patterns with cardiometabolic risk. Cross-sectional survey. Data collection included dietary assessment using an FFQ and biochemical, anthropometric and blood pressure measurements. Food items were categorized into twenty-five groups based on the NOVA food classification. The contribution of each food group to total energy intake (TEI) was estimated. Patterns of intakes of these food groups were examined using exploratory factor analysis. Multivariate logistic regression analysis was used to evaluate the associations of derived patterns with cardiometabolic risk factors. Greater Beirut area, Lebanon. Adults ≥18 years (n 302) with no prior history of chronic diseases. Of TEI, 36·53 and 27·10 % were contributed by ultra-processed and minimally processed foods, respectively. Two dietary patterns were identified: the 'ultra-processed' and the 'minimally processed/processed'. The 'ultra-processed' consisted mainly of fast foods, snacks, meat, nuts, sweets and liquor, while the 'minimally processed/processed' consisted mostly of fruits, vegetables, legumes, breads, cheeses, sugar and fats. Participants in the highest quartile of the 'minimally processed/processed' pattern had significantly lower odds for metabolic syndrome (OR=0·18, 95 % CI 0·04, 0·77), hyperglycaemia (OR=0·25, 95 % CI 0·07, 0·98) and low HDL cholesterol (OR=0·17, 95 % CI 0·05, 0·60). The study findings may be used for the development of evidence-based interventions aimed at encouraging the consumption of minimally processed foods.
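
    The first objective, estimating each NOVA group's contribution to total energy intake (TEI), reduces to an energy-weighted share per group. Below is a minimal sketch on made-up intake data; the items, group labels, and kcal values are illustrative only, not the study's data.

        # hypothetical per-item intakes (item, NOVA group, kcal/day)
        intakes = [
            ("white bread", "processed", 180),
            ("apple", "minimally processed", 95),
            ("soda", "ultra-processed", 150),
            ("chips", "ultra-processed", 220),
            ("olive oil", "processed culinary ingredient", 120),
        ]

        tei = sum(kcal for _, _, kcal in intakes)  # total energy intake
        by_group = {}
        for _, group, kcal in intakes:
            by_group[group] = by_group.get(group, 0) + kcal

        for group, kcal in sorted(by_group.items(), key=lambda kv: -kv[1]):
            print(f"{group:32s} {100 * kcal / tei:5.1f} % of TEI")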

  10. Increasing patient safety and efficiency in transfusion therapy using formal process definitions.

    PubMed

    Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L

    2007-01-01

    The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.

  11. Chemical interaction matrix between reagents in a Purex based process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahman, R.K.; Hennessy, W.P.; Paviet-Hartmann, P.

    2008-07-01

    The United States Department of Energy (DOE) is the responsible entity for the disposal of the United States excess weapons grade plutonium. DOE selected a PUREX-based process to convert plutonium to low-enriched mixed oxide fuel for use in commercial nuclear power plants. To initiate this process in the United States, a Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF) is under construction and will be operated by Shaw AREVA MOX Services at the Savannah River Site. This facility will be licensed and regulated by the U.S. Nuclear Regulatory Commission (NRC). A PUREX process, similar to the one used at La Hague, France, will purify plutonium feedstock through solvent extraction. MFFF employs two major process operations to manufacture MOX fuel assemblies: (1) the Aqueous Polishing (AP) process to remove gallium and other impurities from plutonium feedstock and (2) the MOX fuel fabrication process (MP), which processes the oxides into pellets and manufactures the MOX fuel assemblies. The AP process consists of three major steps, dissolution, purification, and conversion, and is the center of the primary chemical processing. A study of process hazards controls has been initiated that will provide knowledge of, and protection against, the chemical risks associated with mixing of reagents over the lifetime of the process. This paper presents a comprehensive chemical interaction matrix evaluation for the reagents used in the PUREX-based process. The chemical interaction matrix supplements the process conditions by providing a checklist of any potential inadvertent chemical reactions that may take place. It also identifies the chemical compatibility/incompatibility of the reagents if mixed by failure of operations or equipment within the process itself or mixed inadvertently by a technician in the laboratories.

  12. Ultra-processed foods have the worst nutrient profile, yet they are the most available packaged products in a sample of New Zealand supermarkets.

    PubMed

    Luiten, Claire M; Steenhuis, Ingrid Hm; Eyles, Helen; Ni Mhurchu, Cliona; Waterlander, Wilma E

    2016-02-01

    To examine the availability of packaged food products in New Zealand supermarkets by level of industrial processing, nutrient profiling score (NPSC), price (energy, unit and serving costs) and brand variety. Secondary analysis of cross-sectional survey data on packaged supermarket food and non-alcoholic beverages. Products were classified according to level of industrial processing (minimally, culinary and ultra-processed) and their NPSC. Packaged foods available in four major supermarkets in Auckland, New Zealand. Packaged supermarket food products for the years 2011 and 2013. The majority (84% in 2011 and 83% in 2013) of packaged foods were classified as ultra-processed. A significant positive association was found between the level of industrial processing and NPSC, i.e., ultra-processed foods had a worse nutrient profile (NPSC=11.63) than culinary processed foods (NPSC=7.95), which in turn had a worse nutrient profile than minimally processed foods (NPSC=3.27), P<0.001. No clear associations were observed between the three price measures and level of processing. The study observed many variations of virtually the same product. The ten largest food manufacturers produced 35% of all packaged foods available. In New Zealand supermarkets, ultra-processed foods comprise the largest proportion of packaged foods and are less healthy than less processed foods. The lack of significant price difference between ultra- and less processed foods suggests ultra-processed foods might provide time-poor consumers with more value for money. These findings highlight the need to improve the supermarket food supply by reducing numbers of ultra-processed foods and by reformulating products to improve their nutritional profile.

  13. Trends in consumption of ultra-processed foods and obesity in Sweden between 1960 and 2010.

    PubMed

    Juul, Filippa; Hemmingsson, Erik

    2015-12-01

    To investigate how consumption of ultra-processed foods has changed in Sweden in relation to obesity. Nationwide ecological analysis of changes in processed foods along with corresponding changes in obesity. Trends in per capita food consumption during 1960-2010 were investigated using data from the Swedish Board of Agriculture. Food items were classified as group 1 (unprocessed/minimally processed), group 2 (processed culinary ingredients) or group 3 (3·1, processed food products; and 3·2, ultra-processed products). Obesity prevalence data were pooled from the peer-reviewed literature, Statistics Sweden and the WHO Global Health Observatory. Nationwide analysis in Sweden, 1960-2010. Swedish nationals aged 18 years and older. During the study period consumption of group 1 foods (minimal processing) decreased by 2 %, while consumption of group 2 foods (processed ingredients) decreased by 34 %. Consumption of group 3·1 foods (processed food products) increased by 116 % and group 3·2 foods (ultra-processed products) increased by 142 %. Among ultra-processed products, there were particularly large increases in soda (315 %; 22 v. 92 litres/capita per annum) and snack foods such as crisps and candies (367 %; 7 v. 34 kg/capita per annum). In parallel to these changes in ultra-processed products, rates of adult obesity increased from 5 % in 1980 to over 11 % in 2010. The consumption of ultra-processed products (i.e. foods with low nutritional value but high energy density) has increased dramatically in Sweden since 1960, which mirrors the increased prevalence of obesity. Future research should clarify the potential causal role of ultra-processed products in weight gain and obesity.

  14. Differential Phonological and Semantic Modulation of Neurophysiological Responses to Visual Word Recognition.

    PubMed

    Drakesmith, Mark; El-Deredy, Wael; Welbourne, Stephen

    2015-01-01

    Reading words for meaning relies on orthographic, phonological and semantic processing. The triangle model implicates a direct orthography-to-semantics pathway and a phonologically mediated orthography-to-semantics pathway, which interact with each other. The temporal evolution of processing in these routes is not well understood, although theoretical evidence predicts early phonological processing followed by interactive phonological and semantic processing. This study used electroencephalography event-related potential (ERP) analysis and magnetoencephalography (MEG) source localisation to identify temporal markers and the corresponding neural generators of these processes in early (∼200 ms) and late (∼400 ms) neurophysiological responses to visual words, pseudowords and consonant strings. ERP showed an effect of phonology but not semantics in both time windows, although at ∼400 ms there was an effect of stimulus familiarity. Phonological processing at ∼200 ms was localised to the left occipitotemporal cortex and the inferior frontal gyrus. At ∼400 ms, there was continued phonological processing in the inferior frontal gyrus and additional semantic processing in the anterior temporal cortex. There was also an area in the left temporoparietal junction which was implicated in both phonological and semantic processing. In ERP, the semantic response at ∼400 ms appeared to be masked by concurrent processes relating to familiarity, while MEG successfully differentiated these processes. The results support the prediction of early phonological processing followed by an interaction of phonological and semantic processing during word recognition. Neuroanatomical loci of these processes are consistent with previous neuropsychological and functional magnetic resonance imaging studies. The results also have implications for the classical interpretation of N400-like responses as markers for semantic processing.

  15. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  16. [Contention on the theory of processing techniques of Chinese materia medica in the Ming-Qing period].

    PubMed

    Chen, Bin; Jia, Tianzhu

    2015-03-01

    Building on the golden age of development of medicinal processing techniques in the Song dynasty, the theory and techniques of processing developed further and matured in the Ming-Qing dynasties. Some physicians questioned the accepted processing of common medicinals, such as Radix rehmanniae and Radix ophiopogonis, putting forward new processing methods and arguing against those who insisted on traditional ones, which marked the progress of the art of processing. By reviewing the contention over the technical theory of medicinal processing in the Ming-Qing period, useful references can be provided for the inheritance and development of the traditional art of processing medicinals.

  17. Process Feasibility Study in Support of Silicon Material, Task 1

    NASA Technical Reports Server (NTRS)

    Li, K. Y.; Hansen, K. C.; Yaws, C. L.

    1979-01-01

    During this reporting period, major activities were devoted to process system properties, chemical engineering and economic analyses. Analysis of process system properties was continued for materials involved in the alternate processes under consideration for solar cell grade silicon. The following property data are reported for silicon tetrafluoride: critical constants, vapor pressure, heat of vaporization, heat capacity, density, surface tension, viscosity, thermal conductivity, heat of formation and Gibbs free energy of formation. Chemical engineering analysis of the BCL process was continued, with primary efforts devoted to the preliminary process design. Status and progress are reported for base case conditions; process flow diagram; reaction chemistry; material and energy balances; and major process equipment design.

  18. Technology and development requirements for advanced coal conversion systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A compendium of coal conversion process descriptions is presented. The SRS and MC data bases were utilized to provide information, particularly in the areas of existing process designs and process evaluations. Additional information requirements were established, and arrangements were made to visit process developers, pilot plants, and process development units to obtain information that was not otherwise available. Plant designs, process descriptions and operating conditions, and performance characteristics were analyzed; requirements for further development were identified and evaluated to determine their impact on the commercialization potential of each process from the standpoint of economics and technical feasibility. A preliminary methodology was established for the comparative technical and economic assessment of advanced processes.

  19. The s-process in massive stars: the Shell C-burning contribution

    NASA Astrophysics Data System (ADS)

    Pignatari, Marco; Gallino, R.; Baldovin, C.; Wiescher, M.; Herwig, F.; Heger, A.; Heil, M.; Käppeler, F.

    In massive stars the s-process (slow neutron capture process) is activated at different temperatures, during He-burning and during convective shell C-burning. At solar metallicity, the neutron capture process in the convective C-shell adds a substantial contribution to the s-process yields made by the previous core He-burning, and the final results carry the signature of both processes. With decreasing metallicity, the contribution of the C-burning shell to the weak s-process rapidly decreases, because of the effect of the primary neutron poisons. On the other hand, the s-process efficiency in the He core also decreases with metallicity.

  20. Clean-up and disposal process of polluted sediments from urban rivers.

    PubMed

    He, P J; Shao, L M; Gu, G W; Bian, C L; Xu, C

    2001-10-01

    In this paper, the discussion concentrates on the properties of the polluted sediments and on a combined clean-up and disposal process for the heavily polluted, highly flowable upper sediment layer. Based on systematic analyses of various clean-up processes, a suitable engineering process has been evaluated and recommended. The process has been applied to river reclamation in the Yangpu District of Shanghai, China. An improved centrifuge is used for dewatering the dredged sludge, which plays an important role in the combined clean-up and disposal process. Assessment of the engineering process shows its environmental and techno-economic feasibility, which is much better than that of traditional dredging-disposal processes.

  1. Adopting software quality measures for healthcare processes.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2009-01-01

    In this study, we investigated the adoptability of software quality measures for healthcare process measurement. Quality measures from ISO/IEC 9126 are redefined from a process perspective to build a generic healthcare process quality measurement model. The case study research method is used, and the model is applied to a public hospital's Entry to Care process. After the application, weak and strong aspects of the process can be easily observed. Access auditability, fault removal, completeness of documentation, and machine utilization are weak aspects, and these aspects are candidates for process improvement. On the other hand, functional completeness, fault ratio, input validity checking, response time, and throughput time are the strong aspects of the process.
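
    As a rough sketch of how such process-level measures can be computed, the snippet below derives two of the named measures, completeness of documentation and throughput time, from hypothetical per-case records; the field names and formulas are illustrative assumptions, not the paper's exact redefinitions.

        # hypothetical per-case records for an Entry to Care process (times in hours)
        cases = [
            {"documented_steps": 9, "required_steps": 10, "start": 0.0, "end": 2.5},
            {"documented_steps": 7, "required_steps": 10, "start": 1.0, "end": 5.0},
            {"documented_steps": 10, "required_steps": 10, "start": 2.0, "end": 3.2},
        ]

        completeness = (sum(c["documented_steps"] for c in cases)
                        / sum(c["required_steps"] for c in cases))
        throughput_time = sum(c["end"] - c["start"] for c in cases) / len(cases)

        print(f"completeness of documentation: {completeness:.0%}")
        print(f"mean throughput time: {throughput_time:.1f} h")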

  2. Business process modeling in healthcare.

    PubMed

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  3. Survey of the US materials processing and manufacturing in space program

    NASA Technical Reports Server (NTRS)

    Mckannan, E. C.

    1981-01-01

    To promote potential commercial applications of low-g technology, the materials processing and manufacturing in space program is structured to: (1) analyze the scientific principles of gravitational effects on processes used in producing materials; (2) apply the research toward the technology used to control production processes (on Earth or in space, as appropriate); and (3) establish the legal and managerial framework for commercial ventures. Presently federally funded NASA research is described, as well as agreements for privately funded commercial activity and a proposed academic participation process. The future scope of the program and related capabilities using ground-based facilities, aircraft, sounding rockets, and space shuttles are discussed. Areas of interest described include crystal growth; solidification of metals and alloys; containerless processing; fluids and chemical processes (including biological separation processes); and processing of extraterrestrial materials.

  4. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes which are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.
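
    For intuition, the points of such a process above a cutoff can be simulated directly: with intensity λ(x) = c·x^(−(1+α)), the number of points exceeding x_min is Poisson with mean Λ(x_min) = (c/α)·x_min^(−α), and each such point is Pareto-distributed. The sketch below is a minimal illustration with made-up parameter values (c, α, x_min are assumptions).

        import numpy as np

        rng = np.random.default_rng(0)
        c, alpha, x_min = 2.0, 1.5, 0.1  # assumed power-law intensity c * x**-(1 + alpha)

        mean_count = (c / alpha) * x_min ** (-alpha)  # Lambda(x_min): mean points above cutoff
        n = rng.poisson(mean_count)
        points = x_min * rng.random(n) ** (-1.0 / alpha)  # inverse-transform Pareto sampling

        if n:  # the process has a maximal point: only finitely many points above any x
            print(f"{n} points above {x_min}; maximal point = {points.max():.3f}")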

  5. Mobil process converts methanol to high-quality synthetic gasoline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, A.

    1978-12-11

    If production of gasoline from coal becomes commercially attractive in the United States, a process under development at the Mobil Research and Development Corp. may compete with better-known coal liquefaction processes. The Mobil process converts methanol to high-octane, unleaded gasoline; methanol can be produced commercially from coal. If gasoline is the desired product, the Mobil process offers strong technical and cost advantages over the H-coal, Exxon donor solvent, solvent-refined coal, and Fischer-Tropsch processes. The cost analysis, contained in a report to the Dept. of Energy, concludes that the Mobil process produces more-expensive liquid products than any other liquefaction process except Fischer-Tropsch. But Mobil's process produces ready-to-use gasoline, while the others produce oils which require further expensive refining to yield gasoline. Disadvantages and advantages are discussed.

  6. Using Waste Heat for External Processes (English/Chinese) (Fact Sheet) (in Chinese; English)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Chinese translation of the Using Waste Heat for External Processes fact sheet. Provides suggestions on how to use waste heat in industrial applications. The temperature of exhaust gases from fuel-fired industrial processes depends mainly on the process temperature and the waste heat recovery method. Figure 1 shows the heat lost in exhaust gases at various exhaust gas temperatures and percentages of excess air. Energy from gases exhausted from higher temperature processes (primary processes) can be recovered and used for lower temperature processes (secondary processes). One example is to generate steam using waste heat boilers for the fluid heaters used in petroleum crude processing. In addition, many companies install heat exchangers on the exhaust stacks of furnaces and ovens to produce hot water or to generate hot air for space heating.
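
    The recoverable-heat estimate behind such figures follows from the sensible-heat balance Q = m·cp·ΔT. The sketch below applies it to hypothetical flue-gas conditions; the flow rate, specific heat, and temperatures are assumed values, not figures from the fact sheet.

        m_dot = 1.2          # exhaust mass flow [kg/s] (assumed)
        cp = 1.05            # specific heat of flue gas [kJ/(kg*K)] (assumed, approximate)
        t_exhaust = 650.0    # primary-process exhaust gas temperature [deg C] (assumed)
        t_secondary = 180.0  # temperature needed by the secondary process [deg C] (assumed)

        # sensible heat available between the exhaust and secondary-process temperatures
        q_recoverable = m_dot * cp * (t_exhaust - t_secondary)  # [kW]
        print(f"heat recoverable for the secondary process: {q_recoverable:.0f} kW")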

  7. In-situ acoustic signature monitoring in additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Koester, Lucas W.; Taheri, Hossein; Bigelow, Timothy A.; Bond, Leonard J.; Faierson, Eric J.

    2018-04-01

    Additive manufacturing is a rapidly maturing process for the production of complex metallic, ceramic, polymeric, and composite components. The processes used are numerous, and together with the complex geometries involved this can make quality control, standardization of the process, and inspection difficult. Acoustic emission measurements have previously been used to monitor a number of processes, including machining and welding. The authors have identified acoustic signature measurement as a potential means of monitoring metal additive manufacturing processes, using process noise characteristics and the discrete acoustic emission events characteristic of defect growth, including cracks and delamination. Results of acoustic monitoring for a metal additive manufacturing process (directed energy deposition) are reported. The work investigated correlations between acoustic emissions and process noise with variations in machine state and deposition parameters, and provided proof-of-concept data that such correlations do exist.
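
    As a rough illustration of the signal side of such monitoring, the sketch below computes a windowed-RMS process-noise feature on a synthetic signal and flags windows containing an injected burst of the kind a crack-like event might produce; the sample rate, window length, and threshold are assumptions, not the authors' settings.

        import numpy as np

        fs = 50_000  # sample rate [Hz] (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        signal = 0.2 * np.random.default_rng(1).normal(size=t.size)  # stand-in for AE data
        signal[25_000:25_050] += 3.0  # injected burst, e.g. a crack-like emission event

        win = 500  # 10 ms windows
        rms = np.sqrt(np.mean(signal[: t.size // win * win].reshape(-1, win) ** 2, axis=1))
        print("burst windows:", np.nonzero(rms > 3 * np.median(rms))[0])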

  8. NPTool: Towards Scalability and Reliability of Business Process Management

    NASA Astrophysics Data System (ADS)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more pronounced when execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.
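
    To give a feel for the process-algebra foundation, the sketch below expands a toy specification built from sequential and parallel composition into its allowed execution traces. The operator names and the trace expansion, which interleaves atomic activities only, are simplifying assumptions for illustration, not NPDL's actual semantics or its SQL encoding.

        from itertools import permutations

        def seq(*procs): return ("seq", procs)  # sequential composition
        def par(*procs): return ("par", procs)  # parallel composition

        def traces(p):
            if isinstance(p, str):  # atomic activity
                return [[p]]
            op, parts = p
            if op == "seq":  # concatenate traces of each part in order
                out = [[]]
                for part in parts:
                    out = [x + y for x in out for y in traces(part)]
                return out
            # "par": all orderings of the atomic activities (nested structure simplified away)
            atoms = [a for part in parts for tr in traces(part) for a in tr]
            return [list(perm) for perm in permutations(atoms)]

        process = seq("register", par("lab", "x-ray"), "discharge")
        for tr in traces(process):
            print(" -> ".join(tr))  # two valid traces, differing in the parallel section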

  9. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    PubMed

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for a process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm takes as input the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
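
    The abstract outlines the control flow without the membership functions or rule base, so the sketch below fills those in with assumed triangular memberships and a simple weighted combination, just to make the flow concrete (fuzzify, infer dp, sort, dispatch). It is a plain fuzzy approximation, not the paper's intuitionistic fuzzy inference engine.

```python
# Sketch of the scheduler's control flow: fuzzify nice value and burst
# time, derive a dynamic priority (dp), sort the ready queue by dp
# descending, and dispatch the head. Membership functions and the rule
# for combining them are assumptions; the paper's intuitionistic fuzzy
# rule base is more elaborate.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dynamic_priority(nice: int, burst: float) -> float:
    # Assumed memberships: "high priority" for low nice values
    # (nice in [-20, 19]) and "short job" for small burst times.
    high_priority = tri(nice, -21, -20, 20)   # decreases as nice grows
    short_job = tri(burst, 0, 1, 50)          # decreases as burst grows
    # Assumed rule: dp rises with both high priority and short burst.
    return 0.6 * high_priority + 0.4 * short_job

# (name, nice, burst) triples in the ready queue; dispatch the max-dp head.
ready_queue = [("p1", 0, 12.0), ("p2", -5, 30.0), ("p3", 10, 3.0)]
ready_queue.sort(key=lambda p: dynamic_priority(p[1], p[2]), reverse=True)
print("dispatch order:", [name for name, *_ in ready_queue])
```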

  10. Achieving continuous manufacturing for final dosage formation: challenges and how to meet them. May 20-21, 2014 Continuous Manufacturing Symposium.

    PubMed

    Byrn, Stephen; Futran, Maricio; Thomas, Hayden; Jayjock, Eric; Maron, Nicola; Meyer, Robert F; Myerson, Allan S; Thien, Michael P; Trout, Bernhardt L

    2015-03-01

    We describe the key issues and possibilities for continuous final dosage formation, otherwise known as downstream processing or drug product manufacturing. A distinction is made between heterogeneous processing and homogeneous processing, the latter of which is expected to add more value to continuous manufacturing. We also give the key motivations for moving to continuous manufacturing, some of the exciting new technologies, and the barriers to implementation of continuous manufacturing. Continuous processing of heterogeneous blends is the natural first step in converting existing batch processes to continuous. In heterogeneous processing there are discrete particles that can segregate, whereas in homogeneous processing components are blended and homogenized such that they do not segregate. Heterogeneous processing can incorporate technologies that are closer to existing technologies, whereas homogeneous processing necessitates the development and incorporation of new technologies. Homogeneous processing has the greatest potential for reaping the full rewards of continuous manufacturing, but it takes long-term vision and a more significant change in process development than heterogeneous processing. Heterogeneous processing has the drawback that, as its technologies are adopted rather than developed, there is a strong tendency to incorporate correction steps, what we call below "The Rube Goldberg Problem." Thus, although heterogeneous processing will likely play a major role in the near-term transformation of batch to continuous processing, it is expected that homogeneous processing is the next step that will follow. Specific action items for industry leaders are: form precompetitive partnerships, including industry (pharmaceutical companies and equipment manufacturers), government, and universities; such partnerships would develop case studies of continuous manufacturing and ideally perform joint technology development, including development of small-scale equipment and processes. Develop ways to invest internally in continuous manufacturing; how best to do this will depend on the specifics of a given organization, in particular its current development projects, and upper managers will need to energize their process developers to incorporate continuous manufacturing in at least part of their processes to gain experience and demonstrate its benefits directly. Finally, training in continuous manufacturing technologies, organizational approaches, and regulatory approaches is a key area that industry leaders should pursue together. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  11. 25 CFR 42.4 - What are alternative dispute resolution processes?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... What are alternative dispute resolution processes? Alternative dispute resolution (ADR) processes are... action. (a) ADR processes may: (1) Include peer adjudication, mediation, and conciliation; and (2... that these practices are readily identifiable. (b) For further information on ADR processes and how to...

  12. 25 CFR 42.4 - What are alternative dispute resolution processes?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... What are alternative dispute resolution processes? Alternative dispute resolution (ADR) processes are... action. (a) ADR processes may: (1) Include peer adjudication, mediation, and conciliation; and (2... that these practices are readily identifiable. (b) For further information on ADR processes and how to...

  13. Characterization of Nonhomogeneous Poisson Processes Via Moment Conditions.

    DTIC Science & Technology

    1986-08-01

    Poisson processes play an important role in many fields. The Poisson process is one of the simplest counting processes and is a building block for...place of independent increments. This provides a somewhat different viewpoint for examining Poisson processes . In addition, new characterizations for
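
    For context, the textbook characterization such reports start from can be stated compactly; the identities below are standard facts about the homogeneous Poisson process with rate λ, not results of the report.

```latex
% Homogeneous Poisson process N(t) with rate \lambda > 0 (standard
% definition): N(0) = 0, independent increments, and Poisson-distributed
% increments:
\[
  \Pr\{N(t+s) - N(s) = k\} \;=\; e^{-\lambda t}\,\frac{(\lambda t)^k}{k!},
  \qquad k = 0, 1, 2, \dots
\]
% The first two moments then coincide, a property that moment-based
% characterizations can exploit in place of directly assuming
% independent increments:
\[
  \mathbb{E}[N(t)] \;=\; \operatorname{Var}[N(t)] \;=\; \lambda t .
\]
```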

  14. West Valley demonstration project: Alternative processes for solidifying the high-level wastes

    NASA Astrophysics Data System (ADS)

    Holton, L. K.; Larson, D. E.; Partain, W. L.; Treat, R. L.

    1981-10-01

    Two pretreatment approaches and several waste form processes for radioactive wastes were selected for evaluation. The two waste treatment approaches were the salt/sludge separation process and the combined waste process. Both terminal and interim waste form processes were studied.

  15. Process for improving metal production in steelmaking processes

    DOEpatents

    Pal, U.B.; Gazula, G.K.M.; Hasham, A.

    1996-06-18

    A process and apparatus for improving metal production in ironmaking and steelmaking processes are disclosed. The use of an inert metallic conductor in the slag-containing crucible and the addition of a transition metal oxide to the slag are the disclosed process improvements. 6 figs.

  16. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  17. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  18. Enhancing Manufacturing Process Education via Computer Simulation and Visualization

    ERIC Educational Resources Information Center

    Manohar, Priyadarshan A.; Acharya, Sushil; Wu, Peter

    2014-01-01

    Industrially significant metal manufacturing processes such as melting, casting, rolling, forging, machining, and forming are multi-stage, complex processes that are labor, time, and capital intensive. Academic research develops mathematical models of these processes that provide a theoretical framework for understanding the process variables…

  19. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...

  20. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...
