Science.gov

Sample records for experiment model fresh

  1. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    NASA Astrophysics Data System (ADS)

    Bharathan, D.; Parsons, B. K.; Althof, J. A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against fresh water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation in OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations.

  2. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    SciTech Connect

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against fresh water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation in OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.
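
    The validated model from the report is not reproduced here, but the basic idea of a one-dimensional, steady-state direct-contact condenser calculation can be sketched as an energy-balance march along the packing height. All coefficients, inlet conditions and the function name below are hypothetical placeholders, and vapor-side pressure drop and noncondensable gases are ignored.

    # Minimal 1-D steady-state sketch of a direct-contact condenser: cold water
    # warms toward the steam saturation temperature as it falls through the
    # structured packing, and the condensate load grows accordingly.
    # All coefficients are illustrative, not values from the report.
    CP = 4186.0       # water specific heat, J/(kg K)
    H_FG = 2.43e6     # latent heat of condensation, J/kg (approx. near 30 C)

    def condenser_march(t_water_in=10.0, t_sat=25.0, m_water=100.0,
                        ua_per_m=2.0e5, height=1.5, n_steps=200):
        """March the water-side energy balance over the packing height.
        ua_per_m: heat-transfer coefficient times interfacial area per metre, W/(K m).
        Returns outlet water temperature (C) and total condensation rate (kg/s)."""
        dz = height / n_steps
        t_w, m_cond = t_water_in, 0.0
        for _ in range(n_steps):
            q = ua_per_m * (t_sat - t_w) * dz       # local heat duty, W
            t_w += q / (m_water * CP)               # water-side energy balance
            m_cond += q / H_FG                      # condensed steam
        return t_w, m_cond

    t_out, cond = condenser_march()
    print(f"outlet water temperature: {t_out:.2f} C, condensate: {cond:.3f} kg/s")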

  3. Global modeling of fresh surface water temperature

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.

    2011-12-01

    Temperature determines a range of physical properties of water and the solubility of oxygen and other gases, and acts as a strong control on fresh water biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, tolerance to parasites, diseases and pollution and life history. Compared to statistical approaches, physically-based models of surface water temperature have the advantage that they are robust in light of changes in flow regime, river morphology, radiation balance and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Until now, physically-based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales have thus far been limited to regression-type models. Here, we present a first attempt to apply a physically-based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short and long-wave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5x0.5 degree grid for the period 1970-2000. Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF
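
    The PCR-GLOBWB coupling itself is not shown here, but the surface water energy balance at the heart of such a model reduces to a simple temperature update per time step. The flux values, water depth and function name below are hypothetical, and advection along the drainage network is left out.

    RHO_W, CP_W, SIGMA, EPS = 1000.0, 4186.0, 5.67e-8, 0.97

    def step_water_temperature(t_w, depth, sw_down, lw_down, sensible, latent,
                               albedo=0.06, dt=86400.0):
        """One explicit daily update of a well-mixed water column temperature (C).
        Fluxes in W/m2; sensible and latent are treated as losses from the water."""
        lw_up = EPS * SIGMA * (t_w + 273.15) ** 4      # outgoing long-wave radiation
        net = sw_down * (1.0 - albedo) + lw_down - lw_up - sensible - latent
        return t_w + dt * net / (RHO_W * CP_W * depth)

    # hypothetical forcing for a single day
    print(step_water_temperature(t_w=15.0, depth=3.0, sw_down=180.0,
                                 lw_down=330.0, sensible=20.0, latent=60.0))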

  4. Assessment of the recycling potential of fresh concrete waste using a factorial design of experiments.

    PubMed

    Correia, S L; Souza, F L; Dienstmann, G; Segadães, A M

    2009-11-01

    Recycling of industrial wastes and by-products can help reduce the cost of waste treatment prior to disposal and eventually preserve natural resources and energy. To assess the recycling potential of a given waste, it is important to select a tool capable of giving clear indications either way, with the least time and work consumption, as is the case of modelling the system properties using the results obtained from statistical design of experiments. In this work, the aggregate reclaimed from the mud that results from washout and cleaning operations of fresh concrete mixer trucks (fresh concrete waste, FCW) was recycled into new concrete with various water/cement ratios, as replacement of natural fine aggregates. A 3² factorial design of experiments was used to model fresh concrete consistency index and hardened concrete water absorption and 7- and 28-day compressive strength, as functions of FCW content and water/cement ratio, and the resulting regression equations and contour plots were validated with confirmation experiments. The results showed that the fresh concrete workability worsened with the increase in FCW content but the water absorption (5-10 wt.%), 7-day compressive strength (26-36 MPa) and 28-day compressive strength (32-44 MPa) remained within the specified ranges, thus demonstrating that the aggregate reclaimed from FCW can be recycled into new concrete mixtures with lower natural aggregate content. PMID:19596189
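
    As a sketch of how a 3² factorial design can be turned into the kind of regression surface described above, the code below fits a quadratic response surface to hypothetical 28-day strength data over FCW content and water/cement ratio; the design points and strengths are placeholders, not the study's data.

    import numpy as np

    # 3x3 factorial grid: FCW replacement (%) and water/cement ratio
    fcw = np.array([0, 0, 0, 25, 25, 25, 50, 50, 50], dtype=float)
    wc = np.array([0.45, 0.55, 0.65] * 3)
    # hypothetical 28-day compressive strengths (MPa) at each design point
    y = np.array([44.0, 40.0, 35.0, 42.0, 38.0, 34.0, 40.0, 36.0, 32.0])

    # full quadratic model: b0 + b1*F + b2*W + b3*F*W + b4*F^2 + b5*W^2
    X = np.column_stack([np.ones_like(fcw), fcw, wc, fcw * wc, fcw**2, wc**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(f, w):
        return np.dot(coef, [1.0, f, w, f * w, f**2, w**2])

    print("fitted coefficients:", np.round(coef, 4))
    print("predicted strength at 25% FCW, w/c = 0.50:", round(predict(25, 0.50), 1), "MPa")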

  5. Osmotic Power: A Fresh Look at an Old Experiment

    ERIC Educational Resources Information Center

    Dugdale, Pam

    2014-01-01

    Electricity from osmotic pressure might seem a far-fetched idea but this article describes a prototype in Norway where the osmotic pressure generated between salt and fresh water drives a turbine. This idea was applied in a student investigation, where they were tasked with researching which alternative materials could be used for the…
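
    The pressure the article refers to can be estimated with the van 't Hoff relation; a rough calculation for seawater against fresh water, treating seawater as roughly 0.6 M NaCl that dissociates fully, is sketched below (an idealised upper bound, not the operating pressure of the Norwegian prototype).

    R = 8.314       # gas constant, J/(mol K)
    T = 298.15      # temperature, K
    C = 600.0       # approximate NaCl concentration of seawater, mol/m3
    i = 2           # van 't Hoff factor for fully dissociated NaCl

    pi = i * C * R * T              # osmotic pressure, Pa
    head = pi / (1000.0 * 9.81)     # equivalent column of fresh water, m

    print(f"osmotic pressure ~ {pi/1e5:.0f} bar (~ {head:.0f} m of water head)")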

  6. Analysis of Fresh Fuel Critical Experiments Appropriate for Burnup Credit Validation

    SciTech Connect

    DeHart, M.D.

    1995-01-01

    The ANSI/ANS-8.1 standard requires that calculational methods used in determining criticality safety limits for applications outside reactors be validated by comparison with appropriate critical experiments. This report provides a detailed description of 34 fresh fuel critical experiments and their analyses using the SCALE-4.2 code system and the 27-group ENDF/B-IV cross-section library. The 34 critical experiments were selected based on geometry, material, and neutron interaction characteristics that are applicable to a transportation cask loaded with pressurized-water-reactor spent fuel. These 34 experiments are a representative subset of a much larger database of low-enriched uranium and mixed-oxide critical experiments. A statistical approach is described and used to obtain an estimate of the bias and uncertainty in the calculational methods and to predict a confidence limit for a calculated neutron multiplication factor. The SCALE-4.2 results for a superset of approximately 100 criticals are included in uncertainty analyses, but descriptions of the individual criticals are not included.
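
    The report's trending and confidence-limit methodology is not reproduced here; the sketch below only illustrates the basic statistical step of turning calculated k_eff values for critical benchmarks into a bias, a spread, and a one-sided limit. The k_eff values and the multiplier are hypothetical; real work uses a tolerance factor appropriate to the sample size and subtracts an additional administrative margin.

    import numpy as np

    # hypothetical calculated k_eff values for benchmarks that were critical (true k = 1)
    k_calc = np.array([0.9962, 0.9987, 1.0011, 0.9941, 0.9978, 0.9995, 0.9969, 1.0003])

    bias = k_calc.mean() - 1.0        # negative bias means the method under-predicts
    s = k_calc.std(ddof=1)            # spread of the calculations
    k_factor = 2.0                    # illustrative one-sided multiplier only

    # limit that a calculated k_eff must stay below to claim subcriticality,
    # before any additional administrative margin is applied
    upper_subcritical_limit = 1.0 + bias - k_factor * s
    print(f"bias = {bias:+.4f}, s = {s:.4f}, USL = {upper_subcritical_limit:.4f}")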

  7. Numerical Modelling and Ejecta Distribution Analysis of a Martian Fresh Crater

    NASA Astrophysics Data System (ADS)

    Lucchetti, A.; Cremonese, G.; Cambianica, P.; Daubar, I.; McEwen, A. S.; Re, C.

    2015-12-01

    Images taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter reveal fresh craters on Mars that are known to be recent as they are constrained by before and after images (Daubar et al., 2013). In particular, on Nov. 19, 2013, HiRISE image ESP_034285_1835 captured a 25 m diameter fresh crater located at 3.7° N, 53.4° E. This impact occurred between July 2010 and May 2012, as constrained by Context camera (CTX) images. Because the terrain where the crater formed is dusty, the fresh crater appears blue in the enhanced color of the HiRISE image, due to removal of the reddish dust in that area. We analyze this crater using the iSALE shock physics code (Amsden et al., 1980, Collins et al., 2004, Ivanov et al., 1997, Melosh et al., 1992, Wünnemann et al., 2006) to model the formation of this impact structure, which is ~25 m in diameter and ~2.5-3 m deep. These values are obtained from the DTM profile we have generated. We model the Martian surface considering different target compositions, such as regolith and fractured basalt rock, and we base our simulations on a basalt projectile with a porosity of 10% (derived from the average of the meteorite types proposed by Britt et al., 2002) that hits the Martian surface with an initial velocity of 7 km/s (Le Feuvre & Wieczorek, 2011) and an impact angle of 90°. The projectile size is around 1 m, estimated from the comparison between the DTM profile and the profiles obtained by numerical modelling. The primary objective of this analysis is a detailed study of the ejecta: we will track the ejecta in the simulation and compare them to the ejecta distribution mapped on the image (the ejecta reached a distance of more than 15 km). From the match between the simulated ejecta and their observed distribution, we will be able to assess the quality of the simulation and also place constraints on the target material.
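
    A back-of-the-envelope check of the impact energy implied by the abstract's numbers (a ~1 m basalt projectile with 10% porosity striking at 7 km/s) is sketched below; it is order-of-magnitude arithmetic only, not part of the iSALE modelling.

    import math

    d = 1.0                 # projectile diameter, m
    rho_basalt = 2900.0     # solid basalt density, kg/m3
    porosity = 0.10
    v = 7000.0              # impact velocity, m/s

    rho = rho_basalt * (1.0 - porosity)
    mass = rho * math.pi * d**3 / 6.0            # sphere volume times density
    energy = 0.5 * mass * v**2                   # kinetic energy, J

    print(f"mass ~ {mass:.0f} kg, energy ~ {energy:.2e} J "
          f"(~ {energy/4.184e9:.1f} t TNT equivalent)")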

  8. Modeling fresh water lens damage and recovery on atolls after storm-wave washover.

    PubMed

    Chui, Ting Fong May; Terry, James P

    2012-01-01

    The principal natural source of fresh water on scattered coral atolls throughout the tropical Pacific Ocean is thin unconfined groundwater lenses within islet substrates. Although there are many threats to the viability of atoll fresh water lenses, salinization caused by large storm waves washing over individual atoll islets is poorly understood. In this study, a mathematical modeling approach is used to examine the immediate responses, longer-term behavior, and subsequent (partial) recovery of a Pacific atoll fresh water lens after saline damage caused by cyclone-generated wave washover under different scenarios. Important findings include: (1) the saline plume formed by a washover event mostly migrates downward first through the top coral sand and gravel substrate, but then exits the aquifer to the ocean laterally through the more permeable basement limestone; (2) a lower water table position before the washover event, rather than a longer duration of storm washover, causes more severe damage to the fresh water lens; (3) relatively fresher water can possibly be found as a preserved horizon in the deeper part of an aquifer after disturbance, especially if the fresh water lens extends into the limestone under normal conditions; (4) post-cyclone accumulation of sea water in the central depression (swamp) of an atoll islet prolongs the later stage of fresh water lens recovery. PMID:21883195

  10. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Stratospheric Models and Measurements Workshop (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulations of the set of measurements should become the prerequisite for the acceptance of these models as providing reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were charged with designing a number of experiments that would use observations to test whether models use the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the request for input. The first part of this section discusses the different modeling groups, along with the experiments performed. Part two of this section gives brief descriptions of each model, as provided by the individual modeling groups.

  11. Numerical modeling of fresh concrete flow through porous medium

    NASA Astrophysics Data System (ADS)

    Kolařík, F.; Patzák, B.; Zeman, J.

    2016-06-01

    The paper focuses on numerical modeling of non-Newtonian fluid flow in a porous domain. It presents a combination of a homogenization approach, used to obtain permeability from the underlying micro-structure, with a coupled Stokes and Darcy flow through the interface at the macro level. As the numerical method we employed the finite element method. The results obtained from the homogenization approach are validated against a fully resolved solution computed by direct numerical simulation.

  12. Evaluation of hands-on seminar for reduced port surgery using fresh porcine cadaver model

    PubMed Central

    Poudel, Saseem; Kurashima, Yo; Shichinohe, Toshiaki; Kitashiro, Shuji; Kanehira, Eiji; Hirano, Satoshi

    2016-01-01

    BACKGROUND: The use of various biological and non-biological simulators is playing an important role in training modern surgeons in laparoscopic skills. However, there have been few reports of the use of a fresh porcine cadaver model for training in laparoscopic surgical skills. The purpose of this study was to report on a surgical training seminar on reduced port surgery using a fresh cadaver porcine model and to assess its feasibility and efficacy. MATERIALS AND METHODS: The hands-on seminar had 10 fresh porcine cadaver models and two dry boxes. Each table was provided with a unique access port and devices used in reduced port surgery. Each group of two surgeons spent 30 min at each station, performing different tasks assisted by the instructor. A questionnaire survey was administered immediately after the seminar and again 8 months later. RESULTS: All the tasks were completed as planned. Both instructors and participants were highly satisfied with the seminar. There was a concern about the time allocated for the seminar. In the post-seminar survey, the participants felt that the number of reduced port surgeries performed by them had increased. CONCLUSION: The fresh cadaver porcine model requires no special animal facility and can be used for training in laparoscopic procedures. PMID:27279391

  13. Hydrochemical Impacts of CO2 Leakage on Fresh Groundwater: a Field Scale Experiment

    NASA Astrophysics Data System (ADS)

    Lions, J.; Gal, F.; Gombert, P.; Lafortune, S.; Darmoul, Y.; Prevot, F.; Grellier, S.; Squarcioni, P.

    2013-12-01

    One of the questions related to the emerging technology for Carbon Geological Storage concerns the risk of CO2 migration beyond the geological storage formation. In the event of leakage toward the surface, the CO2 might affect resources in neighbouring formations (geothermal or mineral resources, groundwater) or even represent a hazard for human activities at the surface or in the subsurface. With a view to preserving groundwater resources, mainly intended for human consumption, this project studies the potential hydrogeochemical impacts of CO2 leakage on fresh groundwater quality. One of the objectives is to characterize the bio-geochemical mechanisms that may impair the quality of fresh groundwater resources in case of CO2 leakage. To reach these objectives, the project relies on a field experiment to characterize in situ the mechanisms that could impact water quality and the CO2-water-rock interactions, and to improve the monitoring methodology through a controlled CO2 leak in a shallow aquifer. The tests were carried out at an experimental site in the chalk formation of the Paris Basin. The site is equipped with appropriate instrumentation and was previously characterized (8 piezometers, 25 m deep, and 4 gas monitoring wells (piezairs), 11 m deep). The injection test was preceded by 6 months of monitoring in order to characterize the hydrodynamic and geochemical baselines of the site (groundwater, vadose zone and soil). Leakage into groundwater is simulated via the injection of a small quantity of food-grade CO2 (~20 kg dissolved in 10 m3 of water) into the injection well at a depth of about 20 m. A plume of dissolved CO2 forms and moves down-gradient with the groundwater flow, probably partly degassing toward the surface. During the injection test, hydrochemical monitoring of the aquifer is done in situ and by sampling. The parameters monitored in the groundwater are the piezometric head, temperature, pH and electrical conductivity. Analysis on water
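
    As a rough feel for the perturbation described (about 20 kg of CO2 dissolved in 10 m3 of water), the sketch below estimates the pH of pure water carrying that much dissolved CO2, using only the first dissociation of carbonic acid; in the chalk aquifer itself, calcite dissolution buffers the drop, which is part of what the experiment monitors.

    import math

    m_co2, volume = 20.0, 10.0                           # kg injected, m3 of water
    c = (m_co2 * 1000.0 / 44.01) / (volume * 1000.0)     # dissolved CO2, mol/L
    ka1 = 4.45e-7                                        # first dissociation constant of carbonic acid

    h = math.sqrt(ka1 * c)                               # [H+], valid while c >> [H+]
    print(f"dissolved CO2 ~ {c:.3f} mol/L, unbuffered pH ~ {-math.log10(h):.2f}")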

  14. Measured and Modeled Humidification Factors of Fresh Smoke Particles From Biomass Burning: Role of Inorganic Constituents

    SciTech Connect

    Hand, Jenny L.; Day, Derek E.; McMeeking, Gavin M.; Levin, Ezra; Carrico, Christian M.; Kreidenweis, Sonia M.; Malm, William C.; Laskin, Alexander; Desyaterik, Yury

    2010-07-09

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels showed large variability in the humidification factor (f(RH) = bsp(RH)/bsp(dry)). Values of f(RH) at RH=85-90% ranged from 1.02 to 2.15 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic and organic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 85-90% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.
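
    The aerosol growth modelling itself is not reproduced here, but the two quantities this abstract is built around are easy to state in code: the humidification factor f(RH) from paired nephelometer readings, and the reported tendency for f(RH) to rise with the inorganic-to-carbon ratio, shown as a simple linear fit. The per-fuel values below are hypothetical, chosen only to span the reported f(RH) range.

    import numpy as np

    def humidification_factor(bsp_humid, bsp_dry):
        """f(RH) = humidified / dry light-scattering coefficient."""
        return bsp_humid / bsp_dry

    # hypothetical burn-averaged values: inorganic-salt-to-carbon ratio and f(RH) at ~85-90% RH
    ratio = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.50])
    f_rh = np.array([1.05, 1.10, 1.25, 1.50, 1.80, 2.10])

    slope, intercept = np.polyfit(ratio, f_rh, 1)
    print(f"f(RH) ~ {intercept:.2f} + {slope:.2f} * (inorganic/carbon)")
    print("single-burn example:", humidification_factor(bsp_humid=220.0, bsp_dry=130.0))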

  15. Measured and modeled humidification factors of fresh smoke particles from biomass burning: role of inorganic constituents

    NASA Astrophysics Data System (ADS)

    Hand, J. L.; Day, D. E.; McMeeking, G. M.; Levin, E. J. T.; Carrico, C. M.; Kreidenweis, S. M.; Malm, W. C.; Laskin, A.; Desyaterik, Y.

    2010-02-01

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels showed large variability in the humidification factor (f(RH)=bsp(RH)/bsp(dry)). Values of f(RH) at RH=85-90% ranged from 1.02 to 2.15 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic and organic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 85-90% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.

  16. Measured and modeled humidification factors of fresh smoke particles from biomass burning: role of inorganic constituents

    NASA Astrophysics Data System (ADS)

    Hand, J. L.; Day, D. E.; McMeeking, G. M.; Levin, E. J. T.; Carrico, C. M.; Kreidenweis, S. M.; Malm, W. C.; Laskin, A.; Desyaterik, Y.

    2010-07-01

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels from the west and southeast United States showed large variability in the humidification factor (f(RH)=bsp(RH)/bsp(dry)). Values of f(RH) at RH=80-85% ranged from 0.99 to 1.81 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 80-85% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.

  17. A novel reflectance-based model for evaluating chlorophyll concentrations of fresh and water-stressed leaves

    NASA Astrophysics Data System (ADS)

    Lin, C.; Popescu, S. C.; Huang, S. C.; Chang, P. T.; Wen, H. L.

    2015-01-01

    Water deficits can cause chlorophyll degradation, which decreases the total concentration of chlorophyll a and b (Chls). Few studies have investigated the effectiveness of spectral indices under water-stressed conditions. Chlorophyll meters have been extensively used for a wide variety of leaf chlorophyll and nitrogen estimations. Since a chlorophyll meter works by sensing leaf absorptance and transmittance, the chlorophyll concentration reading will be affected by changes in transmittance, such as those caused by a water deficit in the leaves. The overall objective of this paper was to develop a novel and reliable reflectance-based model for estimating Chls of fresh and water-stressed leaves using the reflectance at the absorption bands of chlorophyll a and b and the red edge spectrum. Three independent experiments were designed to collect data from three leaf sample sets for the construction and validation of Chls estimation models. First, a reflectance experiment was conducted to collect foliar Chls and reflectance of leaves with varying water stress using the ASD FieldSpec spectroradiometer. Second, a chlorophyll meter (SPAD-502) experiment was carried out to collect foliar Chls and meter readings. These two data sets were separately used for developing reflectance-based or absorptance-based Chls estimation models using linear and nonlinear regression analysis. Suitable models were suggested mainly based on the coefficient of determination (R²). Finally, an experiment was conducted to collect the third data set for the validation of Chls models using the root mean squared error (RMSE) and the mean absolute error (MAE). In all of the experiments, the reference values of foliar Chls were obtained by acetone extraction and determined using a Hitachi U-2000 spectrophotometer. The spectral indices in the form of reflectance ratio/difference/slope derived from the Chl b absorption bands (ρ645 and ρ455) provided Chls estimates with RMSE around 0
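
    The exact index forms tested in the study are not reproduced (the abstract is cut off), but the general workflow it describes (derive a ratio-type index from reflectance near the chlorophyll b absorption bands, regress it against extracted Chls, and score the result with R², RMSE and MAE) can be sketched as follows with hypothetical leaf data.

    import numpy as np

    def ratio_index(r645, r455):
        """One possible ratio-type index from the two Chl b absorption bands."""
        return r645 / r455

    # hypothetical leaf samples: reflectance at 645 nm and 455 nm, and extracted Chls (mg/g)
    r645 = np.array([0.08, 0.10, 0.13, 0.17, 0.22, 0.28])
    r455 = np.array([0.05, 0.05, 0.06, 0.06, 0.07, 0.07])
    chls = np.array([2.8, 2.4, 2.0, 1.5, 1.1, 0.7])

    idx = ratio_index(r645, r455)
    slope, intercept = np.polyfit(idx, chls, 1)
    pred = slope * idx + intercept

    ss_res = np.sum((chls - pred) ** 2)
    ss_tot = np.sum((chls - chls.mean()) ** 2)
    rmse = np.sqrt(np.mean((chls - pred) ** 2))
    mae = np.mean(np.abs(chls - pred))
    print(f"R2 = {1 - ss_res/ss_tot:.3f}, RMSE = {rmse:.3f}, MAE = {mae:.3f}")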

  18. A Technique to Perfuse Cadavers that Extends the Useful Life of Fresh Tissues: The Duke Experience

    ERIC Educational Resources Information Center

    Messmer, Caroline; Kellogg, Ryan T.; Zhang, Yixin; Baiak, Andresa; Leiweke, Clinton; Marcus, Jeffrey R.; Levin, L. Scott; Zenn, Michael R.; Erdmann, Detlev

    2010-01-01

    The demand for laboratory-based teaching and training is increasing worldwide as medical training and education confront the pressures of shorter training time and rising costs. This article presents a cost-effective perfusion technique that extends the useful life of fresh tissue. Refrigerated cadavers are preserved in their natural state for up…

  19. Philip Morris toxicological experiments with fresh sidestream smoke: more toxic than mainstream smoke

    PubMed Central

    Schick, S; Glantz, S

    2005-01-01

    Background: Exposure to secondhand smoke causes lung cancer; however, there are few data in the open literature on the in vivo toxicology of fresh sidestream cigarette smoke to guide the debate about smoke-free workplaces and public places. Objective: To investigate the unpublished in vivo research on sidestream cigarette smoke done by Philip Morris Tobacco Company during the 1980s at its Institut für Biologische Forschung (INBIFO). Methods: Analysis of internal tobacco industry documents now available at the University of California San Francisco Legacy Tobacco Documents Library and other websites. Results: Inhaled fresh sidestream cigarette smoke is approximately four times more toxic per gram total particulate matter (TPM) than mainstream cigarette smoke. Sidestream condensate is approximately three times more toxic per gram and two to six times more tumourigenic per gram than mainstream condensate by dermal application. The gas/vapour phase of sidestream smoke is responsible for most of the sensory irritation and respiratory tract epithelium damage. Fresh sidestream smoke inhibits normal weight gain in developing animals. In a 21-day exposure, fresh sidestream smoke can cause damage to the respiratory epithelium at concentrations of 2 µg/l TPM. Damage to the respiratory epithelium increases with longer exposures. The toxicity of whole sidestream smoke is higher than the sum of the toxicities of its major constituents. Conclusion: Fresh sidestream smoke at concentrations commonly encountered indoors is well above a 2 µg/m3 reference concentration for acute toxicity to humans (the level below which acute effects are unlikely to occur), calculated from the results of the INBIFO studies. Smoke-free public places and workplaces are the only practical way to protect the public health from the toxins in sidestream smoke. PMID:16319363

  1. Use of a miniature laboratory fresh cheese model for investigating antimicrobial activities.

    PubMed

    Van Tassell, M L; Ibarra-Sánchez, L A; Takhar, S R; Amaya-Llano, S L; Miller, M J

    2015-12-01

    Hispanic-style fresh cheeses, such as queso fresco, have relatively low salt content, high water activity, and near neutral pH, which predisposes them to growth of Listeria monocytogenes. Biosafety constraints limit the incorporation of L. monocytogenes into cheeses manufactured via traditional methods in challenge studies, so few have focused on in situ testing of novel antimicrobials in fresh cheeses. We have developed a modular, miniaturized laboratory-scale queso fresco model for testing the incorporation of novel antilisterials. We have demonstrated the assessment of the antilisterials nisin and ferulic acid, alone and in combination, at various levels. Our results support the inhibitory effects of ferulic acid in cheese, against both L. monocytogenes and its common surrogate Listeria innocua, and we provide preliminary evaluation of its consumer acceptability. PMID:26454301

  2. MODELING ASSUMPTIONS FOR THE ADVANCED TEST REACTOR FRESH FUEL SHIPPING CONTAINER

    SciTech Connect

    Rick J. Migliore

    2009-09-01

    The Advanced Test Reactor Fresh Fuel Shipping Container (ATR FFSC) is currently licensed per 10 CFR 71 to transport a fresh fuel element for either the Advanced Test Reactor, the University of Missouri Research Reactor (MURR), or the Massachusetts Institute of Technology Research Reactor (MITR-II). During the licensing process, the Nuclear Regulatory Commission (NRC) raised a number of issues relating to the criticality analysis, namely (1) lack of a tolerance study on the fuel and packaging, (2) moderation conditions during normal conditions of transport (NCT), (3) treatment of minor hydrogenous packaging materials, and (4) treatment of potential fuel damage under hypothetical accident conditions (HAC). These concerns were adequately addressed by modifying the criticality analysis. A tolerance study was added for both the packaging and fuel elements, full-moderation was included in the NCT models, minor hydrogenous packaging materials were included, and fuel element damage was considered for the MURR and MITR-II fuel types.

  3. Laparoscopic training model using fresh human cadavers without the establishment of pneumoperitoneum

    PubMed Central

    Imakuma, Ernesto Sasaki; Ussami, Edson Yassushi; Meyer, Alberto

    2016-01-01

    BACKGROUND: Laparoscopy is a well-established alternative to open surgery for treating many diseases. Although laparoscopy has many advantages, it is also associated with disadvantages, such as slow learning curves and prolonged operation time. Fresh frozen cadavers may be an interesting resource for laparoscopic training, and many institutions have access to cadavers. One of the main obstacles for the use of cadavers as a training model is the difficulty in introducing a sufficient pneumoperitoneum to distend the abdominal wall and provide a proper working space. The purpose of this study was to describe a fresh human cadaver model for laparoscopic training without requiring a pneumoperitoneum. MATERIALS AND METHODS AND RESULTS: A fake abdominal wall device was developed to allow for laparoscopic training without requiring a pneumoperitoneum in cadavers. The device consists of a table-mounted retractor, two rail clamps, two independent frame arms, two adjustable handles with rotating features, and two frames of the abdominal wall. A handycam is fixed over a frame arm, positioned, and connected through a USB connection to a television; a dissector, scissors and other laparoscopic instruments are positioned inside trocars. The laparoscopic procedure is thus simulated. CONCLUSION: Cadavers offer a very promising and useful model for laparoscopic training. We developed a fake abdominal wall device that solves the limitation of space when performing surgery on cadavers and removes the need to acquire more costly laparoscopic equipment. This model is easily accessible at institutions in developing countries, making it one of the most promising tools for teaching laparoscopy. PMID:27073318

  4. Experimenting model deconstruction

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel; Wirtz, Stefan; Ali, Mazhar

    2013-04-01

    Physical soil erosion models describe erosion and transport of solids by flowing water as the interaction of the soil's resistance to erosion, the force of the water to entrain particles and its capacity to transport them in suspension. This has led to concepts in which hydraulic parameters such as flow velocity or composite parameters such as shear stress, stream power etc. are set into a direct relation to erosion and sediment transport. Soil resistance to erosion is in general represented as a threshold problem, in which a critical force is exceeded and the subsequent increase of erosion depends on the characteristics of the sediments and the flowing water. Despite considerable efforts, these model concepts have not been able to produce more reliable and accurate reproduction and forecast of soil erosion than "simple" empirical models such as the USLE and its derivatives, and there is still a lack of knowledge about the reasons for this failure. A considerable number of studies have addressed the following questions: 1) What are the main parameters of soils and flowing water influencing soil erosion?, 2) What relationship do these parameters have with the intensity and different types of soil erosion?, but only a few researchers have faced the consequent question: 3) Are the present concepts suitable to describe and quantify soil erosion accurately? Similar to other studies, we investigated the influence of basic parameters such as grain size, slope, discharge and flow velocity on sediment transport by shallow flowing water in laboratory experiments. Variable flow was applied under different slopes on non-cohesive mobile beds. But in addition, field experiments were designed to quantify the hydraulic and erosive effects of small rills in the field. Here, small existing rills were flushed with defined flows, and flow velocity as well as transported sediments were quantified. The laboratory flume experiments clearly show a strong interaction of flow velocity, the size of the

  5. Detection of Talaromyces marneffei from Fresh Tissue of an Inhalational Murine Pulmonary Model Using Nested PCR

    PubMed Central

    Liu, Yinghui; Huang, Xiaowen; Yi, Xiuwen; He, Ya; Mylonakis, Eleftherios; Xi, Liyan

    2016-01-01

    Penicilliosis marneffei, often resulting from the aspiration of Talaromyces marneffei (Penicillium marneffei), continues to be one of the significant causes of morbidity and mortality in immunocompromised patients in endemic regions such as Southeast Asia. Improving the accuracy of diagnosing this disease would aid in reducing the mortality of associated infections. In this study, we developed a stable and reproducible murine pulmonary model that mimics human penicilliosis marneffei using a nebulizer to deliver Talaromyces marneffei (SUMS0152) conidia to the lungs of BALB/c nude mice housed in an exposure chamber. Using this model, we further revealed that nested PCR was sensitive and specific for detecting Talaromyces marneffei in bronchoalveolar lavage fluid and fresh tissues. This inhalation model may provide a more representative analysis tool for studying the development of penicilliosis marneffei, in addition to revealing that nested PCR has a predictive value in reflecting pulmonary infection. PMID:26886887

  6. Modelling the influence of time and temperature on the respiration rate of fresh oyster mushrooms.

    PubMed

    Azevedo, Sílvia; Cunha, Luís M; Fonseca, Susana C

    2015-12-01

    The respiration rate of mushrooms is an important indicator of postharvest senescence. Storage temperature plays a major role in their rate of respiration and, therefore, in their postharvest life. In this context, reliable predictions of respiration rates are critical for the development of modified atmosphere packaging that ultimately will maximise the quality of the product to be presented to consumers. This work was undertaken to study the influence of storage time and temperature on the respiration rate of oyster mushrooms. For that purpose, oyster mushrooms were stored at constant temperatures of 2, 6, 10, 14 and 18 ℃ under ambient atmosphere. Respiration rate data were measured with 8-h intervals up to 240 h. A decrease of respiration rate was found after cutting of the carpophores. Therefore, time effect on respiration rate was modelled using a first-order decay model. The results also show the positive influence of temperature on mushroom respiration rate. The model explaining the effect of time on oyster mushroom's respiration rate included the temperature dependence according to the Arrhenius equation, and the inclusion of a parameter describing the decrease of the respiration rate, from the initial time until equilibrium. These yielded an overall model that fitted well to the experimental data. Moreover, results show that the overall model is useful to predict respiration rate of oyster mushrooms at different temperatures and times, using the initial respiration rate of mushrooms. Furthermore, predictive modelling can be relevant for the choice of an appropriate packaging system for fresh oyster mushrooms. PMID:25339381
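
    One plausible reading of the model structure described above (a first-order decay of respiration rate from an initial to an equilibrium value, with Arrhenius temperature dependence) is sketched below; the functional split, parameter names and all values are illustrative, not the fitted ones from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    RG = 8.314  # gas constant, J/(mol K)

    def respiration(tt, r_eq_ref, r0_ref, k, ea):
        """First-order decay from an initial rate r0 to an equilibrium rate r_eq,
        both scaled by an Arrhenius factor referenced to 283.15 K (10 C)."""
        t, temp_k = tt
        arrh = np.exp(-ea / RG * (1.0 / temp_k - 1.0 / 283.15))
        r_eq, r0 = r_eq_ref * arrh, r0_ref * arrh
        return r_eq + (r0 - r_eq) * np.exp(-k * t)

    # synthetic "measurements" generated from assumed parameters, then refitted
    true_params = [60.0, 160.0, 0.03, 55_000.0]
    t = np.tile([8.0, 48.0, 120.0, 240.0], 3)              # hours after cutting
    temp = np.repeat([275.15, 283.15, 291.15], 4)          # storage temperatures, K
    rng = np.random.default_rng(0)
    rate = respiration((t, temp), *true_params) * rng.normal(1.0, 0.03, t.size)

    popt, _ = curve_fit(respiration, (t, temp), rate, p0=[50.0, 100.0, 0.01, 40_000.0])
    print("recovered [r_eq_ref, r0_ref, k, Ea]:", np.round(popt, 4))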

  8. Numerical modelling and hydrochemical characterisation of a fresh-water lens in the Belgian coastal plain

    NASA Astrophysics Data System (ADS)

    Vandenbohede, A.; Lebbe, L.

    2002-05-01

    The distribution of fresh and salt water in coastal aquifers is influenced by many processes. The influence of aquifer heterogeneity and human interference such as land reclamation is illustrated in the Belgian coastal plain where, around A.D. 1200, the reclamation of a tidally influenced environment was completed. The aquifer, which was filled with salt water, was thereafter freshened. The areal distribution of peat, clay, silt and sand influences the general flow and distribution of fresh and salt water along with the drainage pattern and results in the development of fresh-water lenses. The water quality in and around the fresh-water lenses below an inverted tidal channel ridge is surveyed. The hydrochemical evolution of the fresh water lens is reconstructed, pointing to cation exchange, solution of calcite and the oxidation of organic material as the major chemical reactions. The formation and evolution of the fresh water lens is modelled using a two-dimensional density-dependent solute transport model, and the sensitivity to drainage and conductivities is studied. Drainage level mainly influences the depth of the fresh-water lens, whereas the time of formation is mainly influenced by conductivity.

  9. Electron-beam inactivation of a norovirus surrogate in fresh produce and model systems.

    PubMed

    Sanglay, Gabriel C; Li, Jianrong; Uribe, R M; Lee, Ken

    2011-07-01

    Norovirus remains the leading cause of foodborne illness, but there is no effective intervention to eliminate viral contaminants in fresh produce. Murine norovirus 1 (MNV-1) was inoculated in either 100 ml of liquid or 100 g of food. The inactivation of MNV-1 by electron-beam (e-beam), or high-energy electrons, at varying doses was measured in model systems (phosphate-buffered saline [PBS], Dulbecco's modified Eagle's medium [DMEM]) or from fresh foods (shredded cabbage, diced strawberries). E-beam was applied at a current of 1.5 mA, with doses of 0, 2, 4, 6, 8, 10, and 12 kGy. The surviving viral titer was determined by plaque assays in RAW 264.7 cells. In PBS and DMEM, e-beam at 0 and 2 kGy provided less than a 1-log reduction of virus. At doses of 4, 6, 8, 10, and 12 kGy, viral inactivation in PBS ranged from 2.37 to 6.40 log, while in DMEM inactivation ranged from 1.40 to 3.59 log. Irradiation of inoculated cabbage showed up to a 1-log reduction at 4 kGy, and less than a 3-log reduction at 12 kGy. On strawberries, less than a 1-log reduction occurred at doses up to 6 kGy, with a maximum reduction of 2.21 log at 12 kGy. These results suggest that a food matrix might provide increased survival for viruses. In foods, noroviruses are difficult to inactivate because of the protective effect of the food matrix, their small sizes, and their highly stable viral capsid. PMID:21740718
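
    A standard way to summarise dose-response data like those reported above is to regress log10 survivors against dose and read off a D10 value (the dose giving a 1-log reduction); the sketch below does this for hypothetical log reductions in the range the abstract reports for PBS, assuming first-order inactivation kinetics.

    import numpy as np

    dose = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)           # kGy
    log_reduction = np.array([0.0, 0.8, 2.4, 3.3, 4.5, 5.5, 6.4])   # hypothetical, PBS-like

    # log10 survivors relative to the inoculum; the slope is -1/D10 for first-order kinetics
    slope, intercept = np.polyfit(dose, -log_reduction, 1)
    d10 = -1.0 / slope
    print(f"fitted D10 ~ {d10:.2f} kGy (dose per 1-log reduction)")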

  10. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, W. J.

    2014-05-01

    Wim J. de Lange, Geert F. Prinsen, Jacco H. Hoogewoud, Ab A. Veldhuizen, Joachim Hunink, Erik F.W. Ruijgh, Timo Kroon. Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. damage to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), which is a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument was developed to assess impacts on water use on the land surface (crop sprinkling, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes in the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict effects and compensating measures of climate change both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on fresh water demand consist of an increase in demand, e.g. for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling crops, for preserving wetlands, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. They are participating in the assessment of the NHI prior to the start of the analyses.

  11. Finite-difference model to simulate the areal flow of saltwater and fresh water separated by an interface

    USGS Publications Warehouse

    Mercer, James W.; Larson, S.P.; Faust, Charles R.

    1980-01-01

    Model documentation is presented for a two-dimensional (areal) model capable of simulating ground-water flow of salt water and fresh water separated by an interface. The partial differential equations are integrated over the thicknesses of fresh water and salt water resulting in two equations describing the flow characteristics in the areal domain. These equations are approximated using finite-difference techniques and the resulting algebraic equations are solved for the dependent variables, fresh water head and salt water head. An iterative solution method was found to be most appropriate. The program is designed to simulate time-dependent problems such as those associated with the development of coastal aquifers, and can treat water-table conditions or confined conditions with steady-state leakage of fresh water. The program will generally be most applicable to the analysis of regional aquifer problems in which the zone between salt water and fresh water can be considered a surface (sharp interface). Example problems and a listing of the computer code are included. (USGS).
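
    The documented finite-difference code is not reproduced here, but the hydrostatic relation that sharp-interface models of this kind use to locate the interface from the two dependent variables (fresh water head and salt water head) is short enough to state directly; the densities below are typical values, not parameters from the report.

    RHO_F = 1000.0   # fresh water density, kg/m3
    RHO_S = 1025.0   # salt water density, kg/m3

    def interface_elevation(h_fresh, h_salt=0.0):
        """Sharp-interface elevation (same datum as the heads) from pressure
        continuity at the interface: rho_f*(h_f - z) = rho_s*(h_s - z)."""
        return (RHO_S * h_salt - RHO_F * h_fresh) / (RHO_S - RHO_F)

    # with static seawater at the datum this reduces to the Ghyben-Herzberg rule, z ~ -40*h_f
    print(interface_elevation(h_fresh=1.5))   # -> -60.0 m below the datum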

  12. Murine Model of Buckwheat Allergy by Intragastric Sensitization with Fresh Buckwheat Flour Extract

    PubMed Central

    Oh, Sejo; Lee, Kisun; Jang, Young-Ju; Sohn, Myung-Hyun; Lee, Kyoung-En; Kim, Kyu-Earn

    2005-01-01

    Food allergies affect about 4% of the Korean population, and buckwheat allergy is one of the most severe food allergies in Korea. The purpose of the present study was to develop a murine model of IgE-mediated buckwheat hypersensitivity induced by intragastric sensitization. Young female C3H/HeJ mice were sensitized intragastrically with fresh buckwheat flour (1, 5, or 25 mg of protein per dose) mixed with cholera toxin, followed by intragastric challenge. Anaphylactic reactions, antigen-specific antibodies, splenocyte proliferation assays and cytokine production were evaluated. Oral buckwheat challenges of sensitized mice provoked anaphylactic reactions such as severe scratching, perioral/periorbital swellings, or decreased activity. Reactions were associated with elevated levels of buckwheat-specific IgE antibodies. Splenocytes from buckwheat allergic mice exhibited significantly greater proliferative responses to buckwheat than non-allergic mice. Buckwheat-stimulated IL-4, IL-5, and IFN-γ production was associated with elevated levels of buckwheat-specific IgE in sensitized mice. In this model, the 1 mg and 5 mg sensitization doses produced almost the same degree of Th2-directed immune response; however, the 25 mg dose showed blunted antibody responses. In conclusion, we developed IgE-mediated buckwheat allergy by intragastric sensitization and challenge, and this model could provide a good tool for future studies. PMID:16100445

  13. Fresh Kills leachate treatment and minimization study: Volume 2, Modeling, monitoring and evaluation. Final report

    SciTech Connect

    Fillos, J.; Khanbilvardi, R.

    1993-09-01

    The New York City Department of Sanitation is developing a comprehensive landfill leachate management plan for the Fresh Kills landfill, located on the western shore of Staten Island, New York. The 3000-acre facility, owned and operated by the City of New York, has been developed into four distinct mounds that correspond to areas designated as Sections 1/9, 2/8, 3/4 and 6/7. In developing a comprehensive leachate management plan, estimating leachate flow rates is important in designing appropriate treatment alternatives to reduce the offsite migration that pollutes both surface water and groundwater resources. Estimating the leachate flow rates from Sections 1/9 and 6/7 was given priority using an available model, the Hydrologic Evaluation of Landfill Performance (HELP) model, and a new model, Flow Investigation for Landfill Leachate (FILL). The field-scale analysis for leachate flow included collection of leachate mound-level data from piezometers and monitoring wells installed on-site over a six-month period. From the leachate mound-head contours and flow gradients, leachate flow rates were computed using Darcy's law.
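
    The HELP and FILL models themselves are not sketched here, but the final step mentioned above, turning measured mound-head gradients into flow rates with Darcy's law, amounts to the short calculation below; the hydraulic conductivity, gradient and cross-section are placeholders, not site values.

    def darcy_flow(k, gradient, area):
        """Volumetric leachate flow Q = K * i * A, in m3/s."""
        return k * gradient * area

    k = 1.0e-5        # hydraulic conductivity of the waste, m/s (placeholder)
    gradient = 0.02   # head drop per unit distance from the mound-level contours
    area = 5.0e4      # flow cross-section, m2 (placeholder)

    q = darcy_flow(k, gradient, area)
    print(f"Q ~ {q:.3e} m3/s (~ {q*86400:.0f} m3/day)")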

  14. Litchi freshness rapid non-destructive evaluating method using electronic nose and non-linear dynamics stochastic resonance model

    PubMed Central

    Ying, Xiaoguo; Liu, Wei; Hui, Guohua

    2015-01-01

    In this paper, a rapid non-destructive method for evaluating litchi freshness using an electronic nose (e-nose) and non-linear stochastic resonance (SR) was proposed. E-nose responses to litchi samples were continuously detected for 6 d. Principal component analysis (PCA) and non-linear SR methods were utilized to analyze the e-nose detection data. The PCA method could not fully discriminate the litchi samples, while the SR signal-to-noise ratio (SNR) eigen spectrum successfully discriminated all of them. A litchi freshness predictive model developed using SNR eigen values shows high predictive accuracy, with a regression coefficient R² = 0.99396. PMID:25920547
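
    The stochastic resonance SNR eigen spectrum step is specific to the paper and is not reproduced here; the sketch below only illustrates the first stage of the analysis pipeline the abstract names, a PCA of multi-sensor e-nose responses across storage days, using made-up response data.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    days = np.repeat(np.arange(6), 5)        # 6 storage days x 5 replicate samples
    n_sensors = 8

    # hypothetical e-nose response matrix: a drifting "spoilage" signal plus sensor noise
    responses = (days[:, None] * rng.uniform(0.5, 1.5, n_sensors)
                 + rng.normal(0.0, 0.8, (days.size, n_sensors)))

    scores = PCA(n_components=2).fit_transform(responses)
    for d in range(6):
        centroid = scores[days == d].mean(axis=0)
        print(f"day {d}: PC1 = {centroid[0]:+.2f}, PC2 = {centroid[1]:+.2f}")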

  15. A comprehensive sharp-interface simulation-optimization model for fresh and saline groundwater management in coastal areas

    NASA Astrophysics Data System (ADS)

    Park, Namsik; Shi, Lei

    2015-09-01

    Both fresh and saline groundwater may be of some value to coastal communities. A comprehensive simulation-optimization model was developed to identify optimal solutions for managing both types of groundwater in coastal areas. The model may be used for conventional management problems of fresh groundwater development and of seawater intrusion control. In addition, the model can be used for problems of concurrent development of fresh and saline/brackish groundwater for beneficial uses. A set of hypothetical examples is given to demonstrate the applicability of the proposed model. In the protection of an over-exploiting freshwater pumping well, the saltwater pumping scheme was less efficient than the freshwater injection scheme. Although the former scheme may be more advantageous in some limited cases, the latter should be considered first as it retains more freshwater in the aquifer. The example of the concurrent development of fresh and brackish groundwater exhibited two different sets of optimal solutions: one with a large amount of freshwater and a small amount of brackish water with high salinity, and the other with a small amount of freshwater and a large amount of brackish water with low salinity.

  16. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    PubMed

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting growing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms that may be selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity to either processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high quality products and minimal time from farm-to-table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with the plain CPLEX solver. Furthermore, the results ensure the competitive advantages of using the proposed model by purchase managers in the fresh vegetables industry. PMID:27386288
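
    As a compact illustration of the deterministic equivalent of a two-stage model of the kind described (first-stage supplier selection, second-stage purchases under demand scenarios), the sketch below uses PuLP with made-up farms, costs, capacities and scenario demands; the paper's actual formulation, Lagrangian relaxation and parallel solution scheme are not reproduced.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    farms = ["F1", "F2", "F3"]
    scenarios = {"low": 0.3, "base": 0.5, "high": 0.2}       # scenario probabilities
    demand = {"low": 80.0, "base": 120.0, "high": 160.0}     # demand per scenario, tonnes
    fixed = {"F1": 500.0, "F2": 650.0, "F3": 400.0}          # contract cost per farm
    unit = {"F1": 9.0, "F2": 7.5, "F3": 10.0}                # purchase cost per tonne
    cap = {"F1": 70.0, "F2": 90.0, "F3": 60.0}               # capacity per farm, tonnes

    prob = LpProblem("fresh_supply_plan", LpMinimize)
    y = {f: LpVariable(f"select_{f}", cat=LpBinary) for f in farms}        # stage 1: contracts
    x = {(f, s): LpVariable(f"buy_{f}_{s}", lowBound=0)                    # stage 2: purchases
         for f in farms for s in scenarios}

    prob += lpSum(fixed[f] * y[f] for f in farms) + \
            lpSum(scenarios[s] * unit[f] * x[f, s] for f in farms for s in scenarios)
    for s in scenarios:
        prob += lpSum(x[f, s] for f in farms) >= demand[s]     # meet demand in every scenario
        for f in farms:
            prob += x[f, s] <= cap[f] * y[f]                   # only contracted farms can supply

    prob.solve()
    print("selected farms:", [f for f in farms if y[f].value() > 0.5])
    print("expected total cost:", value(prob.objective))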

  17. We Think You Need a Vacation...: The Discipline Model at Fresh Youth Initiatives

    ERIC Educational Resources Information Center

    Afterschool Matters, 2003

    2003-01-01

    Fresh Youth Initiative (FYI) is a youth development organization based in the Washington Heights-Inwood section of Manhattan. The group's mission is to support and encourage the efforts of neighborhood young people and their families to design and carry out community service and social action projects, develop leadership skills, fulfill their…

  18. CFD modeling to improve safe and efficient distribution of chlorine dioxide gas for packaging fresh produce

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The efficiency of the packaging system in inactivating food borne pathogens and prolonging the shelf life of fresh-cut produce is influenced by the design of the package apart from material and atmospheric conditions. Three different designs were considered to determine a specific package design ens...

  19. A new conceptual model on the fate and controls of fresh and pyrolized plant litter decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The leaching of dissolved organic matter (DOM) from fresh and pyrolyzed aboveground plant inputs to the soil is a major pathway by which decomposing aboveground plant material contributes to soil organic matter formation. Understanding how aboveground plant input chemical traits control the partiti...

  1. An artificial intelligence approach for modeling volume and fresh weight of callus - A case study of cumin (Cuminum cyminum L.).

    PubMed

    Mansouri, Ali; Fadavi, Ali; Mortazavian, Seyed Mohammad Mahdi

    2016-05-21

    Cumin (Cuminum cyminum Linn.) is valued for its aroma and its medicinal and therapeutic properties. A supervised feedforward artificial neural network (ANN) trained with back-propagation algorithms was applied to predict the fresh weight and volume of Cuminum cyminum L. calli. The Pearson correlation coefficient was used to evaluate the input/output dependency of the eleven input parameters. Area, Feret diameter, minor axis length, perimeter and weighted density were chosen as input variables. Different training algorithms, transfer functions, numbers of hidden nodes and training iterations were studied to find the optimum ANN structure. The network with the conjugate gradient Fletcher-Reeves (CGF) algorithm, tangent sigmoid transfer function, 17 hidden nodes and 2000 training epochs was selected as the final ANN model. The final model predicted the fresh weight and volume of calli more precisely than multiple linear models. The results were confirmed by R² ≥ 0.89, R(i) ≥ 0.94 and T value ≥ 0.86. The results for both volume and fresh weight showed that almost 90% of the data had an acceptable absolute error of ±5%. PMID:26987421
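
    A rough sketch of the network described in the abstract (five morphometric inputs, 17 tanh hidden nodes) can be assembled with scikit-learn, shown below on synthetic data. Note one substitution: scikit-learn offers no Fletcher-Reeves conjugate-gradient trainer, so the lbfgs solver is used instead; the feature names are taken from the abstract, the data are invented.

        # Sketch of a 5-input feed-forward network for callus fresh weight, patterned on
        # the abstract (17 tanh hidden nodes). Synthetic data; lbfgs replaces the paper's
        # Fletcher-Reeves conjugate-gradient training, which scikit-learn does not provide.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, (200, 5))        # area, Feret diameter, minor axis, perimeter, weighted density
        y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0, 0.05, 200)   # stand-in fresh weight

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(17,), activation="tanh",
                           solver="lbfgs", max_iter=2000).fit(X_tr, y_tr)
        print("R^2 (test) =", round(net.score(X_te, y_te), 3))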

  2. PETN ignition experiments and models.

    PubMed

    Hobbs, Michael L; Wente, William B; Kaneshige, Michael J

    2010-04-29

    Ignition experiments from various sources, including our own laboratory, have been used to develop a simple ignition model for pentaerythritol tetranitrate (PETN). The experiments consist of differential thermal analysis, thermogravimetric analysis, differential scanning calorimetry, beaker tests, one-dimensional time-to-explosion tests, Sandia's instrumented thermal ignition tests (SITI), and thermal ignition of nonelectrical detonators. The model developed from these data consists of a one-step, first-order, pressure-independent mechanism and is used to predict pressure, temperature, and time to ignition for various configurations. The model was used to assess the state of the degraded PETN at the onset of ignition. We propose that cookoff violence for PETN can be correlated with the extent of reaction at the onset of ignition. This hypothesis was tested by evaluating the metal deformation produced by detonators encased in copper as well as by comparing postignition photos of the SITI experiments. PMID:20361790
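
    The abstract's one-step, first-order mechanism lends itself to a compact time-to-ignition estimate. The sketch below integrates d(alpha)/dt = A*exp(-E/RT)*(1 - alpha) at a fixed temperature and reports when an assumed ignition threshold is reached; A, E, the hold temperature and the threshold are placeholders, not Sandia's calibrated PETN parameters.

        # Sketch of a one-step, first-order thermal-ignition estimate (placeholder
        # parameters only): integrate d(alpha)/dt = A*exp(-E/RT)*(1 - alpha) at a fixed
        # temperature and report when the extent of reaction reaches a threshold.
        import numpy as np
        from scipy.integrate import solve_ivp

        A, E, R = 1.0e15, 160e3, 8.314        # 1/s, J/mol, J/(mol K)  (placeholder values)
        T = 450.0                             # isothermal hold temperature, K

        def rate(t, alpha):
            return A * np.exp(-E / (R * T)) * (1.0 - alpha)

        def ignition(t, alpha):               # event: extent of reaction reaches 0.05
            return alpha[0] - 0.05
        ignition.terminal = True

        sol = solve_ivp(rate, (0.0, 1e6), [0.0], events=ignition, max_step=50.0)
        print("time to ignition ~ %.0f s" % sol.t_events[0][0])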

  3. Modelling spoilage of fresh turbot and evaluation of a time-temperature integrator (TTI) label under fluctuating temperature.

    PubMed

    Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen

    2008-10-31

    Kinetic models were developed to predict the microbial spoilage and the sensory quality of fresh fish and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, to monitor shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 °C. Microbial growth and sensory attributes were monitored at regular time intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. The sensory perception was quantified according to a global sensory indicator obtained by principal component analysis as well as to the Quality Index Method, QIM, as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory. Occasional paper n. 8. Available from the Australian Maritime College library. Newnham. Tasmania]. Both methods were found equally valid to monitor the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as a function of temperature. The temperature had a similar effect on the bacteria, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10⁵-10⁶ CFU/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing the predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish and the response of the TTI at constant and fluctuating temperature conditions. The program is freely
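
    The abstract models growth rate, sensory change and TTI colour change as functions of temperature but does not give the equations here. A common secondary model for spoilage flora over this temperature range is the Ratkowsky square-root model, sqrt(mu_max) = b*(T - Tmin); the sketch below fits it to made-up growth rates at the study's storage temperatures, purely for illustration.

        # Illustrative secondary-model fit: Ratkowsky square-root model
        # sqrt(mu_max) = b * (T - Tmin), fitted to invented growth rates at the
        # storage temperatures used in the study (0, 5, 10, 15 degC). The paper does
        # not state which secondary model it used; this is a common assumption.
        import numpy as np
        from scipy.optimize import curve_fit

        T = np.array([0.0, 5.0, 10.0, 15.0])          # degC
        mu = np.array([0.02, 0.06, 0.13, 0.22])       # 1/h, illustrative values

        def sqrt_model(T, b, Tmin):
            return b * (T - Tmin)

        (b, Tmin), _ = curve_fit(sqrt_model, T, np.sqrt(mu), p0=(0.02, -8.0))
        print("b = %.3f, T_min = %.1f degC" % (b, Tmin))
        print("predicted mu_max at 8 degC:", round(sqrt_model(8.0, b, Tmin) ** 2, 3), "1/h")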

  4. Experimental evaluation of four infiltration models for calcareous soil irrigated with treated and untreated grey water and fresh water

    NASA Astrophysics Data System (ADS)

    Gharaibeh, M. A.; Eltaif, N. I.; Alrababah, M. A.; Alhamad, M. N.

    2009-04-01

    Infiltration is vital for both irrigated and rainfed agriculture. Knowledge of the infiltration characteristics of a soil is the basic information required for designing an efficient irrigation system. The objective of the present study was to model soil infiltration using four models: Green and Ampt, Horton, Kostiakov and modified Kostiakov. Infiltration tests were conducted on field plots irrigated with treated greywater, untreated greywater and fresh water. The field infiltration data used in these models were based on double-ring infiltrometer tests conducted for 4 h. The algebraic parameters of the infiltration models were fitted to the measured infiltration-time [I(t)] data by nonlinear least-squares regression. Among the process-based infiltration models, the Horton model performed best, matching the measured I(t) data with the lowest sum of squares (SS).
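
    The nonlinear least-squares fitting step described in the abstract is easy to reproduce in outline. The sketch below fits the Horton equation f(t) = fc + (f0 - fc)*exp(-k*t) to synthetic double-ring infiltrometer data and reports the sum of squares used to rank the models; all numbers are invented.

        # Sketch of the nonlinear least-squares fit described in the abstract, using the
        # Horton equation f(t) = fc + (f0 - fc) * exp(-k t). Infiltration data are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        def horton(t, f0, fc, k):
            return fc + (f0 - fc) * np.exp(-k * t)

        t = np.linspace(0.1, 4.0, 20)                             # hours
        f_obs = horton(t, 40.0, 8.0, 1.2) + np.random.default_rng(2).normal(0, 0.5, t.size)

        (f0, fc, k), _ = curve_fit(horton, t, f_obs, p0=(30.0, 5.0, 1.0))
        ss = np.sum((f_obs - horton(t, f0, fc, k)) ** 2)          # sum of squares, as in the paper
        print("f0=%.1f  fc=%.1f  k=%.2f  SS=%.2f" % (f0, fc, k, ss))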

  5. Development of a hierarchical Bayesian model to estimate the growth parameters of Listeria monocytogenes in minimally processed fresh leafy salads.

    PubMed

    Crépet, Amélie; Stahl, Valérie; Carlin, Frédéric

    2009-05-31

    The optimal growth rate μopt of Listeria monocytogenes in minimally processed (MP) fresh leafy salads was estimated with a hierarchical Bayesian model at (mean ± standard deviation) 0.33 ± 0.16 h⁻¹. This μopt value was much lower on average than that in nutrient broth, liquid dairy, meat and seafood products (0.7-1.3 h⁻¹), and of the same order of magnitude as in cheese. Cardinal temperatures Tmin, Topt and Tmax were determined at -4.5 ± 1.3 °C, 37.1 ± 1.3 °C and 45.4 ± 1.2 °C respectively. These parameters were determined from 206 growth curves of L. monocytogenes in MP fresh leafy salads (lettuce including iceberg lettuce, broad leaf endive, curly leaf endive, lamb's lettuce, and mixtures of them) selected in the scientific literature and in technical reports. The adequacy of the model was evaluated by comparing observed data (bacterial concentrations at each experimental time for the completion of the 206 growth curves, mean log10 increase at selected times and temperatures, L. monocytogenes concentrations in naturally contaminated MP iceberg lettuce) with the distribution of the predicted data generated by the model. The sensitivity of the model to assumptions about the prior values also was tested. The observed values mostly fell into the 95% credible interval of the distribution of predicted values. The μopt and its uncertainty determined in this work could be used in quantitative microbial risk assessment for L. monocytogenes in minimally processed fresh leafy salads.
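
    The abstract reports μopt and the cardinal temperatures but not the secondary-model equation itself. A standard way to combine them is the cardinal temperature model with inflection (CTMI) of Rosso and co-workers, sketched below with the reported values; treating CTMI as the intended secondary model is an assumption made here for illustration.

        # Sketch combining the reported parameters with the cardinal temperature model
        # with inflection (CTMI); assuming CTMI as the secondary model is ours, not the paper's.
        def ctmi(T, mu_opt=0.33, Tmin=-4.5, Topt=37.1, Tmax=45.4):
            """Growth rate (1/h) of L. monocytogenes at temperature T (degC)."""
            if T <= Tmin or T >= Tmax:
                return 0.0
            num = (T - Tmax) * (T - Tmin) ** 2
            den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                                   - (Topt - Tmax) * (Topt + Tmin - 2 * T))
            return mu_opt * num / den

        for T in (4, 8, 20, 37):
            print("T = %2d degC  mu = %.3f 1/h" % (T, ctmi(T)))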

  6. Numerical Modeling of LCROSS experiment

    NASA Astrophysics Data System (ADS)

    Sultanov, V. G.; Kim, V. V.; Matveichev, A. V.; Zhukov, B. G.; Lomonosov, I. V.

    2009-06-01

    The mission objectives of the Lunar Crater Observation and Sensing Satellite (LCROSS) include confirming the presence or absence of water ice in a permanently shadowed crater in the Moon's polar regions. In this research we present results of numerical modeling of the forthcoming LCROSS experiment. The parallel FPIC3D gas dynamic code with implemented realistic equations of state (EOS) and constitutive relations [1] was used. A new wide-range EOS for lunar ground was developed. We carried out calculations of the impact of a model body on the lunar surface at different angles. Impacts on dry and water-ice-bearing lunar ground were also taken into account. Modeling results are given for the crater's shape and size along with the amount of ejecta. [1] V.E. Fortov, V.V. Kim, I.V. Lomonosov, A.V. Matveichev, A.V. Ostrik. Numerical modeling of hypervelocity impacts, Int. J. Impact Engineering, 33, 244-253 (2006)

  7. A comparison of the coupled fresh water-salt water flow and the Ghyben-Herzberg sharp interface approaches to modeling of transient behavior in coastal aquifer systems

    USGS Publications Warehouse

    Essaid, H.I.

    1986-01-01

    A quasi-three-dimensional finite-difference model which simulates coupled fresh water and salt water flow, separated by a sharp interface, is used to investigate the effects of storage characteristics, transmissivity, boundary conditions and anisotropy on the transient responses of such flow systems. The magnitude and duration of the departure of the aquifer response from the behavior predicted using the Ghyben-Herzberg one-fluid approach is a function of the ease with which flow can be induced in the salt water region. In many common hydrogeologic settings, short-term fresh water head responses, and transitional responses between short-term and long-term, can only be realistically reproduced by including the effects of salt water flow on the dynamics of coastal flow systems. The coupled fresh water-salt water flow modeling approach is able to reproduce the observed annual fresh water head response of the Waialae aquifer of southeastern Oahu, Hawaii. © 1986.

  8. Application of the distributed activation energy model to the kinetic study of pyrolysis of the fresh water algae Chlorococcum humicola.

    PubMed

    Kirtania, Kawnish; Bhattacharya, Sankar

    2012-03-01

    Apart from capturing carbon dioxide, fresh water algae can be used to produce biofuel. To assess the energy potential of Chlorococcum humicola, the alga's pyrolytic behavior was studied at heating rates of 5-20 K/min in a thermobalance. To model the weight loss characteristics, an algorithm was developed based on the distributed activation energy model and applied to experimental data to extract the kinetics of the decomposition process. When the kinetic parameters estimated by this method were applied to another set of experimental data that was not used to estimate the parameters, the model was capable of predicting the pyrolysis behavior in the new data set with an R² value of 0.999479. The slow weight loss that took place at the end of the pyrolysis process was also accounted for by the proposed algorithm, which is capable of predicting the pyrolysis kinetics of C. humicola at different heating rates.
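
    For readers unfamiliar with the distributed activation energy model, the sketch below evaluates the DAEM unreacted fraction for a constant heating rate with a Gaussian distribution of activation energies, using a simple rectangle-rule double integration. The kinetic parameters are illustrative and are not the values fitted for C. humicola.

        # DAEM sketch for a constant heating rate beta:
        #   1 - alpha(T) = integral over E of f(E) * exp(-(k0/beta) * int_{T0}^{T} exp(-E/(R*T')) dT')
        # with a Gaussian f(E). Parameters are illustrative, not fitted values.
        import numpy as np

        R = 8.314                                  # J/(mol K)
        k0 = 1.0e13                                # pre-exponential factor, 1/s
        E0, sigma = 180e3, 25e3                    # mean / spread of activation energy, J/mol
        beta = 10.0 / 60.0                         # heating rate, K/s (10 K/min)

        E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 400)
        dE = E[1] - E[0]
        fE = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        T_grid = np.linspace(320.0, 1000.0, 150)
        alpha = []
        for T in T_grid:
            Tp = np.linspace(300.0, T, 200)
            dT = Tp[1] - Tp[0]
            inner = np.sum(np.exp(-E[:, None] / (R * Tp[None, :])), axis=1) * dT   # per-E temperature integral
            unreacted = np.sum(fE * np.exp(-(k0 / beta) * inner)) * dE
            alpha.append(1.0 - unreacted)
        print("predicted conversion at 700 K ~", round(float(np.interp(700.0, T_grid, alpha)), 2))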

  9. Validation of a predictive model coupling gas transfer and microbial growth in fresh food packed under modified atmosphere.

    PubMed

    Guillard, V; Couvert, O; Stahl, V; Hanin, A; Denis, C; Huchet, V; Chaix, E; Loriot, C; Vincelot, T; Thuault, D

    2016-09-01

    Predicting the microbial safety of fresh products in modified atmosphere packaging requires taking into account the dynamics of O2, CO2 and N2 exchanges in the system and their effect on microbial growth. In this paper, a mechanistic model coupling gas transfer and predictive microbiology was validated using dedicated challenge tests performed on poultry meat, fresh salmon and processed cheese, inoculated with either Listeria monocytogenes or Pseudomonas fluorescens and packed in commercially used packaging materials (tray + lid films). The model succeeded in predicting the relative variation of O2, CO2 and N2 partial pressures in the headspace and the growth of the studied microorganisms without any parameter identification. This work highlighted that the respiration of the target microorganism itself and/or that of the naturally present microflora could not be neglected in most cases and, in the particular case of aerobic microbes, could limit growth by removing all residual O2 in the package. This work also confirmed the low sensitivity of L. monocytogenes toward CO2, whereas the higher sensitivity of P. fluorescens made it possible to prevent its growth efficiently by choosing the right combination of packaging gas permeability and initial % of CO2 flushed into the pack. PMID:27217358
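
    To make the coupling concrete, the sketch below is a heavily stripped-down analogue of such a model: headspace O2 and CO2 evolve through film permeation and microbial respiration while the population grows logistically in log10 units, with a Monod-type O2 limitation. Every parameter value is an assumption for illustration; this is not the validated model of the paper.

        # Toy coupling of package-headspace gas transfer and microbial growth
        # (illustrative parameters only, not the paper's validated model).
        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, N_max = 0.05, 9.0            # growth rate (log10 CFU/g per h), ceiling (log10 CFU/g)
        kO2, kCO2 = 2e-4, 8e-4               # film permeation coefficients, mol/(h atm)
        resp = 1e-12                         # O2 consumption, mol/(h CFU/g)  (assumed)
        V, R_gas, T = 1.0, 0.08206, 280.0    # headspace (L), gas constant (L atm/(mol K)), temperature (K)

        def rhs(t, y):
            pO2, pCO2, logN = max(y[0], 0.0), y[1], y[2]
            N = 10.0 ** logN
            o2_lim = pO2 / (pO2 + 0.01)                                      # Monod-type O2 limitation
            growth = mu_max * (1.0 - logN / N_max) * o2_lim                  # d(log10 N)/dt
            dO2 = (kO2 * (0.21 - pO2) - resp * N * o2_lim) * R_gas * T / V   # permeation in, respiration out
            dCO2 = (kCO2 * (0.0 - pCO2) + resp * N * o2_lim) * R_gas * T / V
            return [dO2, dCO2, growth]

        sol = solve_ivp(rhs, (0.0, 240.0), [0.05, 0.20, 3.0], max_step=1.0)  # 10 days of storage
        print("final O2 = %.1f%%, final count = %.1f log10 CFU/g" % (100 * sol.y[0, -1], sol.y[2, -1]))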

  10. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1993-01-01

    Modeling plays a vital role in providing physical insights into behavior revealed by experiment. The program at the University of Illinois is designed to improve our understanding of basic combustion phenomena through the analytical and numerical modeling of a variety of configurations undergoing experimental study in NASA's microgravity combustion program. Significant progress has been made in two areas: (1) flame-balls, studied experimentally by Ronney and his co-workers; (2) particle-cloud flames studied by Berlad and his collaborators. Additional work is mentioned below. NASA funding for the U. of Illinois program commenced in February 1991 but work was initiated prior to that date and the program can only be understood with this foundation exposed. Accordingly, we start with a brief description of some key results obtained in the pre-2/91 work.

  12. Methodology for modeling the disinfection efficiency of fresh-cut leafy vegetables wash water applied on peracetic acid combined with lactic acid.

    PubMed

    Van Haute, S; López-Gálvez, F; Gómez-López, V M; Eriksson, Markus; Devlieghere, F; Allende, Ana; Sampers, I

    2015-09-01

    A methodology to i) assess the feasibility of water disinfection in fresh-cut leafy greens wash water and ii) compare the disinfection efficiency of water disinfectants was defined and applied to a combination of peracetic acid (PAA) and lactic acid (LA), and a comparison with free chlorine was made. Standardized process water, a watery suspension of iceberg lettuce, was used for the experiments. First, the combination of PAA+LA was evaluated for water recycling. In this case disinfectant was added to standardized process water inoculated with Escherichia coli (E. coli) O157 (6 log CFU/mL). Regression models were constructed based on the batch inactivation data and validated in industrial process water obtained from fresh-cut leafy green processing plants. The UV254(F) was the best indicator for PAA decay and, as such, for the E. coli O157 inactivation with PAA+LA. The disinfection efficiency of PAA+LA increased with decreasing pH. Furthermore, PAA+LA efficacy was assessed as a process water disinfectant to be used within the washing tank, using a dynamic washing process with a continuous influx of E. coli O157 and organic matter into the washing tank. The process water contamination in the dynamic process was adequately estimated by the developed model, which assumed that knowledge of the disinfectant residual was sufficient to estimate the microbial contamination, regardless of the physicochemical load. Based on the obtained results, PAA+LA seems to be better suited than chlorine for disinfecting process wash water with a high organic load, but a higher disinfectant residual is necessary due to the slower E. coli O157 inactivation kinetics when compared to chlorine. PMID:26065727
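
    The regression models themselves are not reproduced in the abstract. As a generic illustration of the kind of batch kinetics being modelled, the sketch below couples first-order PAA decay with Chick-Watson inactivation of E. coli O157; both rate constants and the initial residual are assumed values, not the fitted ones.

        # Illustrative batch kinetics: first-order disinfectant decay plus Chick-Watson
        # inactivation dN/dt = -k_i * C * N. Rate constants are assumed, not fitted values.
        import numpy as np
        from scipy.integrate import solve_ivp

        k_d = 0.05          # PAA decay rate, 1/min
        k_i = 0.15          # inactivation rate, L/(mg min)
        C0, N0 = 5.0, 1e6   # initial residual (mg/L) and E. coli O157 level (CFU/mL)

        def rhs(t, y):
            C, N = y
            return [-k_d * C, -k_i * C * N]

        sol = solve_ivp(rhs, (0.0, 10.0), [C0, N0], dense_output=True)
        for t in (1, 2, 5, 10):
            C, N = sol.sol(t)
            print("t = %2d min   C = %.2f mg/L   log10 reduction = %.2f" % (t, C, np.log10(N0 / N)))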

  13. Estimation of Fresh and Salt Water Fluxes and Transports in the Indian Ocean using satellite observations and model simulations

    NASA Astrophysics Data System (ADS)

    Bulusu, Subrahmanyam; Nyadjro, Ebenezer

    2014-05-01

    This study describes the fresh and salt water fluxes and transports in the Indian Ocean using satellite-derived salinity observations from the SMOS (Soil Moisture and Ocean Salinity) and Aquarius missions, and model outputs from the HYbrid Coordinate Ocean Model (HYCOM) and the Simple Ocean Data Assimilation (SODA) Re-analysis. Argo salinity data are used to validate the aforementioned salinity datasets. Salt budget estimations using SMOS salinity data show favorable comparisons with published results, with the potential for additional novel studies when more valid satellite-derived salinity data become available. On seasonal time scales, there is a considerable exchange of salt and fresh waters between the Bay of Bengal (BoB) and the Arabian Sea (AS) and vice versa. The pathways of the high/low salinity waters are identified using satellite observations. The Sea Surface Salinity (SSS) changes in the Southeastern Arabian Sea are a result of the advection of low salinity waters from the BoB via coastal Kelvin waves. The long-term mean salt transport shows seasonal reversals that are more pronounced in the northern Indian Ocean than in the southern Indian Ocean. Meridional salt transport is northward along the Somali Current (SC) in the Arabian Sea and the East India Coastal Current (EICC) in the Bay of Bengal during the southwest monsoon season. The opposite holds during the northeast monsoon season. Mean zonal salt transport is of a higher magnitude than the meridional component and shows significant seasonal reversals in the equatorial region. Empirical Orthogonal Function (EOF) analyses of meridional salt transport show that the variability is primarily seasonally driven and is the result of seasonally reversing monsoonal winds and currents. The amplitudes of the EOFs suggest that the Indian Ocean dipole may also influence the variability. Spatially, the most variable regions are along the northeast African coast, and in the eastern Arabian Sea, the Bay of

  14. The effect of different levels of sunflower head pith addition on the properties of model system emulsions prepared from fresh and frozen beef.

    PubMed

    Sariçoban, Cemalettin; Yilmaz, Mustafa Tahsin; Karakaya, Mustafa; Tiske, Sümeyra Sultan

    2010-01-01

    The effect of sunflower head pith on the functional properties of emulsions was studied by using a model system. Oil/water (O/W) model emulsion systems were prepared from fresh and frozen beef by the addition of the pith at five concentrations. Emulsion capacity (EC), stability (ES), viscosity (EV), colour and flow properties of the prepared model system emulsions were analyzed. The pith addition increased the EC and ES, and the highest EC and ES values were reached when 5% pith was added; however, a further increase in the pith concentration caused an inverse trend in these values. Fresh beef emulsions had higher EC and ES values than did frozen beef emulsions. A pith concentration of 1% was the critical level for the EV values of fresh beef emulsions. EV values of the emulsions reached a maximum at the 5% pith level, followed by a decrease at the 7% pith level.

  15. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    PubMed

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

    Once the total viable count (TVC) of bacteria in fresh pork exceeds a certain level, the meat becomes a source of pathogenic bacteria. The present paper explores the feasibility of hyperspectral imaging technology combined with a suitable modeling method for predicting the TVC of fresh pork. For a problem that is markedly nonlinear and has few samples, and in which a large amount of data is used to express the spectral and spatial information, it is crucial to choose an appropriate modeling method in order to achieve a good prediction result. Based on a comparison of partial least-squares regression (PLSR), artificial neural networks (ANNs) and least-squares support vector machines (LS-SVM), the authors found that the PLSR method could not handle the nonlinear regression problem and the ANN method could not give satisfactory predictions with few samples, whereas prediction models based on LS-SVM can balance a small training error with good generalization ability. LS-SVM was therefore adopted as the modeling method to predict the TVC of pork. The TVC prediction model was then constructed using all 512 wavelengths acquired by the hyperspectral imaging system. The determination coefficients between the TVC obtained with the standard plate count method and the LS-SVM predictions were 0.9872 and 0.9426 for the calibration and prediction sets respectively, and the root mean square errors of calibration (RMSEC) and prediction (RMSEP) were 0.2071 and 0.2176; these results were considerably better than those of the MLR, PLSR and ANN methods. This research demonstrates that the hyperspectral imaging system coupled with the LS-SVM modeling method is a valid means for quick and nondestructive determination of the TVC of pork
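
    scikit-learn does not ship an LS-SVM implementation, but kernel ridge regression with an RBF kernel shares its squared-error loss and serves as a reasonable stand-in for a sketch. The snippet below builds such a calibration on synthetic 512-band spectra and reports RMSEC/RMSEP as in the paper; the data and hyperparameters are invented.

        # Stand-in for the LS-SVM calibration: kernel ridge regression with an RBF kernel
        # on synthetic 512-band spectra. Not the authors' LS-SVM implementation.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(0, 1, (60, 512))                       # 60 samples x 512 wavelengths
        tvc = 3 + 2 * np.tanh(X[:, 100]) + 0.5 * X[:, 300] + rng.normal(0, 0.1, 60)  # log10 CFU/g, synthetic

        X_c, X_p, y_c, y_p = train_test_split(X, tvc, test_size=0.3, random_state=0)
        model = KernelRidge(kernel="rbf", alpha=0.1, gamma=1e-3).fit(X_c, y_c)

        rmsec = np.sqrt(np.mean((model.predict(X_c) - y_c) ** 2))
        rmsep = np.sqrt(np.mean((model.predict(X_p) - y_p) ** 2))
        print("RMSEC = %.3f   RMSEP = %.3f" % (rmsec, rmsep))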

  16. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, Wim; Prinsen, Geert.; Hoogewoud, Jacco; Veldhuizen, Ab; Ruijgh, Erik; Kroon, Timo

    2013-04-01

    Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), which is a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument was developed to assess the impact of water use on the land surface (sprinkling crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as water boards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict effects and compensating measures of climate change both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase in demand, e.g. for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. They participate in the assessment of the NHI prior to the start of the analyses. Regional expertise is also welcomed in the calibration phase of the NHI. It aims to reduce uncertainties by improving the

  17. Configurational diffusion of asphaltenes in fresh and aged catalysts extrudates. [Mathematical configurational diffusion model

    SciTech Connect

    Guin, J.A.; Tarrer, A.R.

    1992-01-01

    The objective of this research is to determine the relationship between the size and shape of coal and petroleum macromolecules and their diffusion rates, i.e., effective diffusivities, in catalyst pore structures. That is, how do the effective intrapore diffusivities depend on molecular configuration and pore geometry? This quarter we made a more comprehensive literature survey concerning configurational diffusion in porous catalysts or catalyst supports; a detailed literature review is reported. A mathematical configurational diffusion model was also developed. By using this model, the effective diffusivity of model compounds diffusing in porous media and a linear adsorption constant can be determined by fitting experimental data.

  18. A Fresh Look at Flooring Costs. A Report on a Survey of User Experience Compiled by Armstrong Cork Company.

    ERIC Educational Resources Information Center

    Armstrong Cork Co., Lancaster, PA.

    Survey information based on actual flooring installations in several types of buildings and traffic conditions, representing nearly 113 million square feet of actual user experience, is contained in this comprehensive report compiled by the Armstrong Cork Company. The comparative figures provided by these users clearly establish that--(1) the…

  19. Experiments beyond the standard model

    SciTech Connect

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references.

  20. How Famine Started in Somalia: A Simple Model of Fresh Water Use

    NASA Astrophysics Data System (ADS)

    Gustafson, K. C.; Motesharrei, S.; Miralles-Wilhelm, F. R.; Kalnay, E.; Rivas, J.; Freshwater Modeling Team

    2011-12-01

    Water is the origin of life on our planet and therefore the most valued and essential natural resource for sustaining life on earth. Human uses of freshwater include not only drinking, washing, and other daily domestic activities, but also manufacturing, energy production, agriculture, aquaculture, etc. Though about 97% of the planet's water is saline, only a minute portion is available as freshwater for anthropogenic needs, and the rate at which the natural system can filter and replenish the freshwater supply cannot keep pace with the rate of demand. That is why special attention must be given to the availability of freshwater at present and in the future. We constructed a fairly elementary model of the water cycle while we were working on a more sophisticated water model that includes several additional details. Upon introducing data from different regions of the world (including the United States) into our simple model and running simulations, we can gain insight into the future of water sources and the availability of freshwater. In particular, we are able to simulate how drought can result in famine, a catastrophe currently taking place in certain areas of Somalia.

  1. Fresh clouds: A parameterized updraft method for calculating cloud densities in one-dimensional models

    NASA Astrophysics Data System (ADS)

    Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.

    2015-01-01

    Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculation of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft, and converts the cloud density rate from the ECCM into cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities in the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. The cloud
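
    The key relation in the parameterized-updraft method is simple enough to state directly: the ECCM supplies a cloud density rate (condensate added per unit length of ascent), and multiplying by an assumed updraft length scale gives the cloud density. The sketch below just applies that product with an invented density rate, not the Galileo probe values.

        # Sketch of the parameterized-updraft conversion described in the abstract:
        # cloud density (g/m^3) = cloud density rate (g/m^3 per m of ascent) x updraft length (m).
        # The density rate below is invented, not an ECCM output for Jupiter.
        def cloud_density(density_rate_g_m3_per_m, updraft_length_m):
            return density_rate_g_m3_per_m * updraft_length_m

        rate = 5e-4                        # g/m^3 per metre of ascent (illustrative)
        for L in (10.0, 100.0, 1000.0):    # weak (cirrus-like) to strong updrafts
            print("L = %6.0f m  ->  cloud density = %.2f g/m^3" % (L, cloud_density(rate, L)))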

  2. Investigation of laser tissue welding dynamics via experiment and modeling.

    PubMed

    Small, W; Maitland, D J; Heredia, N J; Eder, D C; Celliers, P M; Da Silva, L B; London, R A; Matthews, D L

    1997-02-01

    An in vitro study of laser tissue welding mediated with a dye-enhanced protein solder was performed. Freshly harvested sections of porcine aorta were used for the experiments. Arteriotomies approximately 4 mm in length were treated using an 805 nm continuous-wave diode laser coupled to a 1-mm diameter fiber. Temperature histories of the surface of the weld site were obtained using a fiberoptic-based infrared thermometer. The experimental effort was complemented by the LATIS (LAser-TISsue) computer code, which numerically simulates the exposure of tissue to near-infrared radiation using coupled Monte Carlo, thermal transport, and mass transport models. Comparison of the experimental and simulated thermal results shows that the inclusion of water transport and evaporative losses in the model is necessary to determine the thermal distributions and hydration state in the tissue. The hydration state of the weld site was correlated with the acute weld strength.

  3. Functional Characterization and Drug Response of Freshly Established Patient-Derived Tumor Models with CpG Island Methylator Phenotype

    PubMed Central

    Maletzki, Claudia; Huehns, Maja; Knapp, Patrick; Waukosin, Nancy; Klar, Ernst; Prall, Friedrich; Linnebacher, Michael

    2015-01-01

    Patient-individual tumor models constitute a powerful platform for basic and translational analyses both in vitro and in vivo. However, due to the labor-intensive and highly time-consuming process, only a few well-characterized patient-derived cell lines and/or corresponding xenografts exist. In this study, we describe the successful generation and functional analysis of novel tumor models from patients with sporadic primary colorectal carcinomas (CRC) showing the CpG island methylator phenotype (CIMP). Initial DNA fingerprint analysis confirmed identity with the patient in all four cases. These freshly established cells showed characteristic features associated with the CIMP phenotype (HROC40: APCwt, TP53mut, KRASmut; 3/8 markers methylated; HROC43: APCmut, TP53mut, KRASmut; 4/8 markers methylated; HROC60: APCwt, TP53mut, KRASwt; 4/8 markers methylated; HROC183: APCmut, TP53mut, KRASmut; 6/8 markers methylated). Cell lines were of epithelial origin (EpCAM+) with distinct morphology and growth kinetics. Response to chemotherapeutics was quite individual between cell lines, with the stage I-derived cell line HROC60 being most susceptible towards standard clinically approved chemotherapeutics (e.g. 5-FU, irinotecan). Of note, most cell lines were sensitive towards “non-classical” CRC standard drugs (sensitivity: gemcitabine > rapamycin > nilotinib). This comprehensive analysis of tumor biology, genetic alterations and assessment of chemosensitivity towards a broad range of (chemo-)therapeutics helps bring forward the concept of personalized tumor therapy. PMID:26618628

  4. Need Of Modelling Radionuclide Transport In Fresh Water Lake System, Subject To Indian Context

    SciTech Connect

    Desai, Hiral; Christian, R. A.

    2010-10-26

    The operation of nuclear facilities results in low-level radioactive effluents, which have to be released into the environment. The effluents from nuclear installations are treated adequately and then released in a controlled manner in strict compliance with discharge criteria. The effluents released from installations into the environment undergo dilution and dispersion. However, there is a possibility of concentration by biological processes in the environment. Aquatic ecosystems are very complex webs of physical, chemical and biological interactions. It is generally both costly and laborious to describe their characteristics, and to predict them is even harder. Every aquatic ecosystem is unique, and yet it is impossible to study each system in the detail necessary for case-by-case assessment of ecological threats. In this situation, quantitative mathematical models are essential to predict, to guide assessment and to direct interventions.

  5. F-specific RNA bacteriophages are adequate model organisms for enteric viruses in fresh water.

    PubMed Central

    Havelaar, A H; van Olphen, M; Drost, Y C

    1993-01-01

    Culturable enteroviruses were detected by applying concentration techniques and by inoculating the concentrates on the BGM cell line. Samples were obtained from a wide variety of environments, including raw sewage, secondary effluent, coagulated effluent, chlorinated and UV-irradiated effluents, river water, coagulated river water, and lake water. The virus concentrations varied widely between 0.001 and 570/liter. The same cell line also supported growth of reoviruses, which were abundant in winter (up to 95% of the viruses detected) and scarce in summer (less than 15%). The concentrations of three groups of model organisms in relation to virus concentrations were also studied. The concentrations of bacteria (thermotolerant coliforms and fecal streptococci) were significantly correlated with virus concentrations in river water and coagulated secondary effluent, but were relatively low in disinfected effluents and relatively high in surface water open to nonhuman fecal pollution. The concentrations of F-specific RNA bacteriophages (FRNA phages) were highly correlated with virus concentrations in all environments studied except raw and biologically treated sewage. Numerical relationships were consistent over the whole range of environments; the regression equations for FRNA phages on viruses in river water and lake water were statistically equivalent. These relationships support the possibility that enteric virus concentrations can be predicted from FRNA phage data. PMID:8215367

  6. Experiments on a Model Eye

    ERIC Educational Resources Information Center

    Arell, Antti; Kolari, Samuli

    1978-01-01

    Explains a laboratory experiment dealing with the optical features of the human eye. Shows how to measure the magnification of the retina and how the refractive anomaly of the eye could be used to measure the refractive power of the observer's eye. (GA)

  7. A New Formulation for Fresh Snow Density over Antarctica for the regional climate model Modèle Atmosphérique Régionale (MAR).

    NASA Astrophysics Data System (ADS)

    Tedesco, M.; Datta, R.; Fettweis, X.; Agosta, C.

    2015-12-01

    Surface-layer snow density is important to processes contributing to surface mass balance, but is highly variable over Antarctica due to a wide range of near-surface climate conditions over the continent. Formulations for fresh snow density have typically either used fixed values or been modeled empirically using field data that is limited to specific seasons or regions. There is also currently limited work exploring how the sensitivity to fresh snow density in regional climate models varies with resolution. Here, we present a new formulation compiled from (a) over 1600 distinct density profiles from multiple sources across Antarctica and (b) near-surface variables from the regional climate model Modèle Atmosphérique Régionale (MAR). Observed values represent coastal areas as well as the plateau, in both West and East Antarctica (although East Antarctica is dominant). However, no measurements are included from the Antarctic Peninsula, which is both highly topographically variable and extends to lower latitudes than the remainder of the continent. In order to assess the applicability of this fresh snow density formulation to the Antarctic Peninsula at high resolutions, a version of MAR is run for several years both at low-resolution at the continental scale and at a high resolution for the Antarctic Peninsula alone. This setup is run both with and without the new fresh density formulation to quantify the sensitivity of the energy balance and SMB components to fresh snow density. Outputs are compared with near-surface atmospheric variables available from AWS stations (provided by the University of Wisconsin Madison) as well as net accumulation values from the SAMBA database (provided from the Laboratoire de Glaciologie et Géophysique de l'Environnement).

  8. Factors controlling the configuration of the fresh-saline water interface in the Dead Sea coastal aquifers: Synthesis of TDEM surveys and numerical groundwater modeling

    USGS Publications Warehouse

    Yechieli, Y.; Kafri, U.; Goldman, M.; Voss, C.I.

    2001-01-01

    TDEM (time domain electromagnetic) traverses in the Dead Sea (DS) coastal aquifer help to delineate the configuration of the interrelated fresh-water and brine bodies and the interface in between. A good linear correlation exists between the logarithm of TDEM resistivity and the chloride concentration of groundwater, mostly in the higher salinity range, close to that of the DS brine. In this range, salinity is the most important factor controlling resistivity. The configuration of the fresh-saline water interface is dictated by the hydraulic gradient, which is controlled by a number of hydrological factors. Three types of irregularities in the configuration of fresh-water and saline-water bodies were observed in the study area: 1. Fresh-water aquifers underlying more saline ones ("Reversal") in a multi-aquifer system. 2. "Reversal" and irregular residual saline-water bodies related to historical, frequently fluctuating DS base level and respective interfaces, which have not undergone complete flushing. A rough estimate of flushing rates may be obtained based on knowledge of the above fluctuations. The occurrence of salt beds is also a factor affecting the interface configuration. 3. The interface steepens towards and adjacent to the DS Rift fault zone. Simulation analysis with a numerical, variable-density flow model, using the US Geological Survey's SUTRA code, indicates that interface steepening may result from a steep water-level gradient across the zone, possibly due to a low hydraulic conductivity in the immediate vicinity of the fault.
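
    The reported log-linear relation between TDEM resistivity and groundwater chloride can be illustrated with a one-line least-squares fit, shown below on synthetic data; the coefficients are invented and are not those derived for the Dead Sea aquifer.

        # Sketch of the log-linear calibration log10(resistivity) = a + b * Cl,
        # fitted by ordinary least squares on synthetic data (coefficients invented).
        import numpy as np

        rng = np.random.default_rng(4)
        cl = np.linspace(5.0, 230.0, 25)                             # chloride, g/L
        log_rho = 1.5 - 0.008 * cl + rng.normal(0, 0.05, cl.size)    # log10(ohm m), synthetic

        b, a = np.polyfit(cl, log_rho, 1)                            # slope, intercept
        pred = a + b * cl
        r2 = 1 - np.sum((log_rho - pred) ** 2) / np.sum((log_rho - log_rho.mean()) ** 2)
        print("log10(rho) = %.2f %+0.4f * Cl   (R^2 = %.2f)" % (a, b, r2))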

  9. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two dimensional smoulder.

  10. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  11. The Database for Reaching Experiments and Models

    PubMed Central

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis. PMID:24244351

  12. Changes in polyphenol profiles and color composition of freshly fermented model wine due to pulsed electric field, enzymes and thermovinification pretreatments.

    PubMed

    El Darra, Nada; Turk, Mohammad F; Ducasse, Marie-Agnès; Grimi, Nabil; Maroun, Richard G; Louka, Nicolas; Vorobiev, Eugène

    2016-03-01

    This work compares the effects of three pretreatment techniques, pulsed electric fields (PEF), enzyme treatment (ET) and thermovinification (TV), on improving the extraction of the main phenolic compounds, the color characteristics (L*a*b*), and the composition (copigmentation, non-discolored pigments) of freshly fermented model wine from the Cabernet Sauvignon variety. The pretreatments produced differences in the wines, with the color of the freshly fermented model wine obtained from PEF- and TV-pretreated musts being the most different, with increases of 56% and 62%, respectively, compared to the control, while the color only increased by 22% for ET. At the end of the alcoholic fermentation, the contents of anthocyanins for all the pretreatments were not statistically different. However, for the content of total phenolics and total flavonols, PEF and TV were statistically different, but ET was not. The contents of flavonols in musts pretreated by PEF and TV were significantly higher compared to the control, with increases of 48% and 97% respectively, and only 4% for ET. A similar result was observed for the total phenolics, with increases of 18% and 32% respectively for PEF and TV, and only 3% for ET compared to the control. The results suggest that the higher color intensity and the difference in color composition between the control and the pretreated freshly fermented model wines were not related only to a higher content of residual native polyphenols in these wines. Other phenomena, such as copigmentation and the formation of derived pigments, may be favored by these pretreatments. PMID:26471638

  14. Modeling choice and valuation in decision experiments.

    PubMed

    Loomes, Graham

    2010-07-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for participants to perceive probabilities and payoffs in a particular way. This model organizes more of the data than any other extant model and generates a number of novel testable implications which are examined with new data.

  15. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate approximately 1 μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  16. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  17. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. The main assumptions that caused differences among models were identified and evaluated, and the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  18. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGESBeta

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; et al

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. The main assumptions that caused differences among models were identified and evaluated, and the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  19. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  20. Three dimensional neuronal cell cultures more accurately model voltage gated calcium channel functionality in freshly dissected nerve tissue.

    PubMed

    Lai, Yinzhi; Cheng, Ke; Kisaalita, William

    2012-01-01

    It has been demonstrated that neuronal cells cultured on traditional flat surfaces may exhibit exaggerated voltage gated calcium channel (VGCC) functionality. To gain a better understanding of this phenomenon, primary neuronal cells harvested from mouse superior cervical ganglia (SCG) were cultured on two dimensional (2D) flat surfaces and in three dimensional (3D) synthetic poly-L-lactic acid (PLLA) and polystyrene (PS) polymer scaffolds. These 2D- and 3D-cultured cells were compared to cells in freshly dissected SCG tissues, with respect to intracellular calcium increase in response to high K(+) depolarization. The calcium increases were identical for 3D-cultured cells and freshly dissected tissues, but significantly higher for 2D-cultured cells. This finding established the physiological relevance of 3D-cultured cells. To shed light on the mechanism behind the exaggerated functionality of 2D-cultured cells, transcriptase expression and related membrane protein distributions (caveolin-1) were obtained. Our results support the view that exaggerated VGCC functionality from 2D-cultured SCG cells is possibly due to differences in membrane architecture, characterized by uniquely organized caveolar lipid rafts. The practical implication of using 3D-cultured cells in preclinical drug discovery studies is that such platforms would be more effective in eliminating false positive hits and would thus improve the overall yield of screening campaigns.

  1. Modeling a Thermal Seepage Laboratory Experiment

    SciTech Connect

    Y. Zhang; J. Birkholzer

    2004-07-30

    A thermal seepage model has been developed to evaluate the potential for seepage into the waste emplacement drifts at the proposed high-level radioactive materials repository at Yucca Mountain when the rock is at elevated temperature. The coupled-process-model results show that no seepage occurs as long as the temperature at the drift wall is above boiling. This important result has been incorporated into the Total System Performance Assessment of Yucca Mountain. We have applied the same conceptual model to a laboratory heater experiment conducted by the Center for Nuclear Waste Regulatory Analyses. This experiment involves a fractured-porous rock system, composed of concrete slabs, heated by an electric heater placed in a 0.15 m diameter "drift". A substantial volume of water was released above the boiling zone over a time period of 135 days, giving rise to vaporization around the heat source. In this study, two basic conceptual models, similar to the thermal seepage models used in the Yucca Mountain Project, a dual-permeability model and an active-fracture model, are set up to predict the evolution of temperature and saturation at the "drift" crown, and thereby to estimate the potential for thermal seepage. Preliminary results from the model show good agreement with temperature profiles as well as with the potential seepage indicated in the lab experiments. These results build confidence in the thermal seepage models used in the Yucca Mountain Project. Different approaches are considered in our conceptual model to implement fracture-matrix interaction. Sensitivity analyses of fracture properties are conducted to help evaluate uncertainty.

  2. Fresh Soy Oil Protects Against Vascular Changes in an Estrogen-Deficient Rat Model: An Electron Microscopy Study

    PubMed Central

    Adam, Siti Khadijah; Das, Srijit; Othman, Faizah; Jaarin, Kamsiah

    2009-01-01

    OBJECTIVE To observe the effects of consuming repeatedly heated soy oil on the aortic tissues of estrogen-deficient rats. METHODS Thirty female Sprague Dawley rats (200–250 g) were divided equally into five groups. One group served as the normal control (NC) group. The four treated groups were ovariectomized and were fed as follows: 2% cholesterol diet (OVXC); 2% cholesterol diet + fresh soy oil (FSO); 2% cholesterol diet + once-heated soy oil (1HSO); and 2% cholesterol diet + five-times-heated soy oil (5HSO). After four months, the rats were sacrificed, and the aortic tissues were obtained for histological studies. RESULTS After four months of feeding, the NC, FSO and 1HSO groups had a lower body weight gain compared to the OVXC and 5HSO groups. The tunica intima/media ratio in the 5HSO group was significantly thicker (p < 0.05) compared to the NC, OVXC and FSO groups. Electron microscopy showed that endothelial cells were normally shaped in the FSO and NC groups but irregular in the 1HSO and 5HSO groups. A greater number of collagen fibers and vacuoles were observed in the 5HSO group compared to the other treatment groups. CONCLUSIONS Fresh soy oil offered protection in the estrogen-deficient state, as these rats had similar features to those of the NC group. The damage to the tunica intima and the increase in the ratio of tunica intima/media thickness showed the deleterious effect of consuming repeatedly heated soy oil in castrated female rats. PMID:19936186

  3. Anomalous sea surface reverberation scale model experiments.

    PubMed

    Neighbors, T H; Bjørnø, L

    2006-12-22

    Low frequency sea surface sound backscattering from approximately 100 Hz to a few kHz, observed from the 1960s broadband measurements using explosive charges to the Critical Sea Test measurements conducted in the 1990s, is substantially higher than explained by rough sea surface scattering theory. Alternative theories for explaining this difference range from scattering by bubble plumes/clouds formed by breaking waves to stochastic scattering from fluctuating bubble layers near the sea surface. In each case, theories focus on reverberation in the absence of the large-scale surface wave height fluctuations that are characteristic of a sea that produces bubble clouds and plumes. At shallow grazing angles, shadowing of bubble plumes and clouds caused by surface wave height fluctuations may induce first order changes in the backscattered signal strength. To understand the magnitude of shadowing effects under controlled and repeatable conditions, scale model experiments were performed in a 3 m x 1.5 m x 1.5 m tank at the Technical University of Denmark. The experiments used a 1 MHz transducer as the source and receiver, a computer controlled data acquisition system, a scale model target, and a surface wave generator. The scattered signal strength fluctuations observed at shallow angles are characteristic of the predicted ocean environment. These experiments demonstrate that shadowing has a first order impact on bubble plume and cloud scattering strength and emphasize the usefulness of model scale experiments for studying underwater acoustic events under controlled conditions.

  4. Data production models for the CDF experiment

    SciTech Connect

    Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.; /Kosice, IEF /Duke U. /Taiwan, Inst. Phys. /University Coll. London /Fermilab /Rockefeller U. /Michigan U. /Pennsylvania U. /Glasgow U. /UCLA /Tsukuba U. /New Mexico U.

    2006-06-01

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

  5. Model-scale sound propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.

    1988-01-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.

  6. Model-scale sound propagation experiment

    NASA Astrophysics Data System (ADS)

    Willshire, William L., Jr.

    1988-04-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.

  7. Influence of the Natural Microbial Flora on the Acid Tolerance Response of Listeria monocytogenes in a Model System of Fresh Meat Decontamination Fluids

    PubMed Central

    Samelis, John; Sofos, John N.; Kendall, Patricia A.; Smith, Gary C.

    2001-01-01

    Depending on its composition and metabolic activity, the natural flora that may be established in a meat plant environment can affect the survival, growth, and acid tolerance response (ATR) of bacterial pathogens present in the same niche. To investigate this hypothesis, changes in populations and ATR of inoculated (10^5 CFU/ml) Listeria monocytogenes were evaluated at 35°C in water (10 or 85°C) or acidic (2% lactic or acetic acid) washings of beef with or without prior filter sterilization. The model experiments were performed at 35°C rather than lower (≤15°C) temperatures to maximize the response of inoculated L. monocytogenes in the washings with or without competitive flora. Acid solution washings were free (<1.0 log CFU/ml) of natural flora before inoculation (day 0), and no microbial growth occurred during storage (35°C, 8 days). Inoculated L. monocytogenes died off (negative enrichment) in acid washings within 24 h. In nonacid (water) washings, the pathogen increased (approximately 1.0 to 2.0 log CFU/ml), irrespective of natural flora, which, when present, predominated (>8.0 log CFU/ml) by day 1. The pH of inoculated water washings decreased or increased depending on absence or presence of natural flora, respectively. These microbial and pH changes modulated the ATR of L. monocytogenes at 35°C. In filter-sterilized water washings, inoculated L. monocytogenes increased its ATR by at least 1.0 log CFU/ml from days 1 to 8, while in unfiltered water washings the pathogen was acid tolerant at day 1 (0.3 to 1.4 log CFU/ml reduction) and became acid sensitive (3.0 to >5.0 log CFU/ml reduction) at day 8. These results suggest that the predominant gram-negative flora of an aerobic fresh meat plant environment may sensitize bacterial pathogens to acid. PMID:11375145

  8. Quantification of the binding potential of cell-surface receptors in fresh excised specimens via dual-probe modeling of SERS nanoparticles.

    PubMed

    Sinha, Lagnojita; Wang, Yu; Yang, Cynthia; Khan, Altaz; Brankov, Jovan G; Liu, Jonathan T C; Tichauer, Kenneth M

    2015-01-01

    The complete removal of cancerous tissue is a central aim of surgical oncology, but is difficult to achieve in certain cases, especially when the removal of surrounding normal tissues must be minimized. Therefore, when post-operative pathology identifies residual tumor at the surgical margins, re-excision surgeries are often necessary. An intraoperative approach for tumor-margin assessment, insensitive to nonspecific sources of molecular probe accumulation and contrast, is presented employing kinetic-modeling analysis of dual-probe staining using surface-enhanced Raman scattering nanoparticles (SERS NPs). Human glioma (U251) and epidermoid (A431) tumors were implanted subcutaneously in six athymic mice. Fresh resected tissues were stained with an equimolar mixture of epidermal growth factor receptor (EGFR)-targeted and untargeted SERS NPs. The binding potential (BP; proportional to receptor concentration) of EGFR - a cell-surface receptor associated with cancer - was estimated from kinetic modeling of targeted and untargeted NP concentrations in response to serial rinsing. EGFR BPs in healthy, U251, and A431 tissues were 0.06 ± 0.14, 1.13 ± 0.40, and 2.23 ± 0.86, respectively, which agree with flow-cytometry measurements and published reports. The ability of this approach to quantify the BP of cell-surface biomarkers in fresh tissues opens up an accurate new approach to analyze tumor margins intraoperatively.
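
    As a rough illustration of the dual-probe idea, the sketch below computes a late-time, ratiometric estimate of binding potential from paired targeted/untargeted nanoparticle concentrations; this is a simplified stand-in for the full kinetic modeling of serial rinses described above, and all function names and numbers are hypothetical.

    ```python
    import numpy as np

    def binding_potential(targeted, untargeted):
        """Ratiometric late-time estimate of binding potential (BP).

        targeted, untargeted: SERS NP concentration estimates (arbitrary units)
        measured after serial rinsing, once most unbound NPs have washed out.
        The untargeted NP tracks nonspecific retention, so the excess of the
        targeted signal over it is attributed to receptor binding; BP is
        proportional to the available receptor concentration.
        """
        targeted = np.asarray(targeted, dtype=float)
        untargeted = np.asarray(untargeted, dtype=float)
        return float(np.mean(targeted / untargeted)) - 1.0

    # Illustrative values only (not data from the study):
    print(binding_potential([3.1, 3.0, 3.2], [0.98, 1.02, 1.00]))   # high-EGFR-like tissue
    print(binding_potential([1.05, 1.00, 1.10], [1.00, 0.97, 1.05]))  # healthy-like tissue
    ```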

  9. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    SciTech Connect

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (~ a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
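
    Since the abstract only names the equation-of-state forms, the following is a minimal sketch of the standard (unmodified) Murnaghan equation of state sometimes used for unreacted explosives; the parameter values are placeholders, not LX-17 calibration data.

    ```python
    def murnaghan_pressure(v_ratio, k0, k0_prime):
        """Standard Murnaghan equation of state,
            P(V) = (K0 / K0') * ((V0/V)**K0' - 1),
        expressed in terms of the compression ratio v_ratio = V/V0.

        k0       : bulk modulus at ambient conditions (GPa)
        k0_prime : pressure derivative of the bulk modulus (dimensionless)
        Returns pressure in GPa.
        """
        return (k0 / k0_prime) * (v_ratio ** (-k0_prime) - 1.0)

    # Example: 20% compression with assumed (illustrative) parameters
    print(murnaghan_pressure(0.8, k0=13.0, k0_prime=9.0))
    ```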

  10. Fresh Veggies from Space

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Professor Marc Anderson of the University of Wisconsin-Madison developed a technology for use in plant-growth experiments aboard the Space Shuttle. Anderson's research and WCSAR's technology were funded by NASA and resulted in a joint technology licensed to KES Science and Technology, Inc. This transfer of space-age technology resulted in the creation of a new plant-saving product, an ethylene scrubber for plant growth chambers. This innovation presents commercial benefits for the food industry in the form of a new device, named Bio-KES. Bio-KES removes ethylene and helps to prevent spoilage. Ethylene accounts for up to 10 percent of produce losses and 5 percent of flower losses. Using Bio-KES in storage rooms and displays will increase the shelf life of perishable foods by more than one week, drastically reducing the costs associated with discarded rotten foods and flowers. The savings could potentially be passed on to consumers. For NASA, the device means that astronauts can conduct commercial agricultural research in space. Eventually, it may also help to grow food in space and keep it fresh longer. This could lead to less packaged food being taken aboard missions since it could be cultivated in an ethylene-free environment.

  11. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  12. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  13. Impact polymorphs of quartz: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Price, M. C.; Dutta, R.; Burchell, M. J.; Cole, M. J.

    2013-09-01

    We have used the light gas gun at the University of Kent to perform a series of impact experiments firing quartz projectiles onto metal, quartz and sapphire targets. The aim is to quantify the amount of any high pressure quartz polymorphs produced, and to use these data to develop our hydrocode modelling to enable the prediction of the quantity of polymorphs produced during a planetary scale impact.

  14. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.

  15. Transplantation of freshly isolated adipose tissue-derived regenerative cells enhances angiogenesis in a murine model of hind limb ischemia.

    PubMed

    Harada, Yusuke; Yamamoto, Yasutaka; Tsujimoto, Shunsuke; Matsugami, Hiromi; Yoshida, Akio; Hisatome, Ichiro

    2013-02-01

    Therapeutic angiogenesis has emerged as one of the most promising therapies for severe ischemic cardiovascular diseases for which no other treatment options exist. Several investigators have reported that transplantation of cultured adipose-derived regenerative cells (cADRCs) to ischemic tissues promotes neovascularization and blood perfusion recovery; however, cell therapy using cultured cells has several restrictions. To resolve this problem, the angiogenic capacity of freshly isolated ADRCs (fADRCs) obtained from Lewis rats was compared with that of cADRCs, both in vivo and in vitro. Flow cytometric analysis showed that fADRCs contained several cell types such as endothelial progenitor cells and endothelial cells; however, these cells were present in a very small proportion in cADRCs. Transplantation of fADRCs in mice significantly improved blood perfusion, capillary density, and production of several angiogenic factors in transplanted ischemic limbs compared with a saline-injected group, whereas these effects were not observed in the cADRC-injected group. fADRCs also showed significantly higher expression levels of angiogenic factors than cADRCs in the in vitro study. Furthermore, fADRCs stimulated tube formation more markedly than cADRCs in an in vitro tube formation assay. These results suggest that fADRCs have an effective angiogenic capacity and would be more valuable as a source for cell-based therapeutic angiogenesis than cADRCs or other stem/progenitor cells.

  16. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model the force, F, applied to the projectile by the fabric is derived to be F = (8/9) E T* h^3 / R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact, and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, pp. 1231.
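
    The force law quoted above translates directly into a one-line function; the sketch below evaluates it for illustrative (not measured Zylon) inputs.

    ```python
    def fabric_force(E, T_star, h, R):
        """Walker (1999) fabric model: F = (8/9) * E * T* * h**3 / R**2.

        E      : Young's modulus of the fabric (Pa)
        T_star : effective thickness = areal density / fiber density (m)
        h      : fabric displacement on the axis of impact (m)
        R      : radius of the deformation "bulge" (m)
        Returns the force applied to the projectile (N).
        """
        return (8.0 / 9.0) * E * T_star * h**3 / R**2

    # Hypothetical inputs: E = 180 GPa, T* = 0.2 mm, h = 5 mm, R = 30 mm
    print(fabric_force(180e9, 0.2e-3, 5e-3, 30e-3), "N")
    ```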

  17. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

  18. What determines fresh fish consumption in Croatia?

    PubMed

    Tomić, Marina; Matulić, Daniel; Jelić, Margareta

    2016-11-01

    Although fresh fish is widely available, consumption still remains below the recommended intake levels among the majority of European consumers. The economic crisis affects consumer food behaviour; as a result, fresh fish is perceived as a healthy but expensive food product. The aim of this study was to determine the factors influencing fresh fish consumption using an expanded Theory of Planned Behaviour (Ajzen, 1991) as a theoretical framework. The survey was conducted on a heterogeneous sample of 1151 Croatian fresh fish consumers. The study investigated the relationship between attitudes, perceived behavioural control, subjective norm, moral obligation, involvement in health, availability, intention and consumption of fresh fish. Structural Equation Modeling by Partial Least Squares was used to analyse the collected data. The results indicated that attitudes are the strongest positive predictor of the intention to consume fresh fish. Other significant predictors of the intention to consume fresh fish were perceived behavioural control, subjective norm, health involvement and moral obligation. The intention to consume fresh fish showed a strong positive correlation with behaviour. This survey provides valuable information for food marketing professionals and for the food industry in general. PMID:26721719

  19. How to teach friction: Experiments and models

    NASA Astrophysics Data System (ADS)

    Besson, Ugo; Borghi, Lidia; De Ambrosis, Anna; Mascheretti, Paolo

    2007-12-01

    Students generally have difficulty understanding friction and its associated phenomena. High school and introductory college-level physics courses usually do not give the topic the attention it deserves. We have designed a sequence for teaching about friction between solids based on a didactic reconstruction of the relevant physics, as well as research findings about student conceptions. The sequence begins with demonstrations that illustrate different types of friction. Experiments are subsequently performed to motivate students to obtain quantitative relations in the form of phenomenological laws. To help students understand the mechanisms producing friction, models illustrating the processes taking place on the surface of bodies in contact are proposed.

  20. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  1. Experience with the CMS Event Data Model

    SciTech Connect

    Elmer, P.; Hegner, B.; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  2. Application of non-linear models to predict inhibition effects of various plant hydrosols on Listeria monocytogenes inoculated on fresh-cut apples.

    PubMed

    Ozturk, Ismet; Tornuk, Fatih; Sagdic, Osman; Kisi, Ozgur

    2012-07-01

    In this study, we examined the effectiveness of plant hydrosols obtained from bay leaf, black cumin, rosemary, sage, and thyme in reducing Listeria monocytogenes on the surface of fresh-cut apple cubes. Adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), and multiple linear regression (MLR) models were used for describing the behavior of L. monocytogenes against the hydrosol treatments. Approximately 1-1.5 log CFU/g decreases in L. monocytogenes counts were observed after individual hydrosol treatments for 20 min. By extending the treatment time to 60 min, thyme, sage, or rosemary hydrosols eliminated L. monocytogenes, whereas black cumin and bay leaf hydrosols did not lead to additional reductions. In addition to antibacterial measurements, the abilities of ANFIS, ANN, and MLR models were compared with respect to estimation of the survival of L. monocytogenes. The root mean square error, mean absolute error, and determination coefficient statistics were used as comparison criteria. The comparison results indicated that the ANFIS model performed the best for estimating the effects of the plant hydrosols on L. monocytogenes counts. The ANN model was also effective; the MLR model was found to be poor at estimating L. monocytogenes numbers.
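
    The three comparison criteria named in the abstract are standard and easy to reproduce; the sketch below computes them for a hypothetical set of observed and predicted L. monocytogenes counts (the data are invented, not the study's).

    ```python
    import numpy as np

    def comparison_stats(observed, predicted):
        """Root mean square error, mean absolute error and determination
        coefficient (R^2) between observed and model-predicted counts."""
        obs = np.asarray(observed, dtype=float)
        pred = np.asarray(predicted, dtype=float)
        err = pred - obs
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
        return rmse, mae, r2

    obs = [5.2, 4.8, 4.1, 3.6, 3.0]    # log CFU/g, hypothetical
    pred = [5.1, 4.9, 4.0, 3.4, 3.1]   # hypothetical model output
    print(comparison_stats(obs, pred))
    ```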

  3. Application of non-linear models to predict inhibition effects of various plant hydrosols on Listeria monocytogenes inoculated on fresh-cut apples.

    PubMed

    Ozturk, Ismet; Tornuk, Fatih; Sagdic, Osman; Kisi, Ozgur

    2012-07-01

    In this study, we examined the effectiveness of plant hydrosols obtained from bay leaf, black cumin, rosemary, sage, and thyme in reducing Listeria monocytogenes on the surface of fresh-cut apple cubes. Adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), and multiple linear regression (MLR) models were used for describing the behavior of L. monocytogenes against the hydrosol treatments. Approximately 1-1.5 log CFU/g decreases in L. monocytogenes counts were observed after individual hydrosol treatments for 20 min. By extending the treatment time to 60 min, thyme, sage, or rosemary hydrosols eliminated L. monocytogenes, whereas black cumin and bay leaf hydrosols did not lead to additional reductions. In addition to antibacterial measurements, the abilities of ANFIS, ANN, and MLR models were compared with respect to estimation of the survival of L. monocytogenes. The root mean square error, mean absolute error, and determination coefficient statistics were used as comparison criteria. The comparison results indicated that the ANFIS model performed the best for estimating the effects of the plant hydrosols on L. monocytogenes counts. The ANN model was also effective; the MLR model was found to be poor at estimating L. monocytogenes numbers. PMID:22690764

  4. Decadal predictability of extreme fresh water export events from the Arctic Ocean into the Nordic Seas and subpolar North Atlantic

    NASA Astrophysics Data System (ADS)

    Schmith, Torben; Olsen, Steffen M.; Ringgaard, Ida M.; May, Wilhelm

    2016-04-01

    Abrupt fresh water releases originating in the Arctic Ocean have been documented to affect ocean circulation and climate in the North Atlantic area. Therefore, in this study, we investigate prospects for predicting such events up to one decade ahead. This is done in a perfect model setup by a combination of analyzing a 500 year control experiment and a dedicated ensemble experiment aimed at predicting selected 10 year long segments of the control experiment. The selected segments are characterized by a large positive or negative trend in the total fresh water content in the Arctic Ocean. The analysis of the components (liquid fresh water and sea ice) reveals that they develop in a near random walk manner. From this we conclude that the main mechanism is integration of fresh water in the Beaufort Gyre through Ekman pumping from the randomly varying atmosphere. Therefore, the predictions from the ensemble experiments are on average not better than damped persistence predictions. By running two different families of ensemble predictions, one starting from the 'observed' ocean globally, and one starting from climatology in the Arctic Ocean and from the observed ocean elsewhere, we conclude that the former outperforms the latter for the first few years as regards liquid fresh water and for the first year as regards sea ice. Analysis of the model experiments in terms of the fresh water export from the Arctic Ocean into the Nordic seas and the subpolar North Atlantic reveals a very modest potential for predictability.
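
    The damped persistence benchmark mentioned above can be sketched as follows: the initial anomaly is simply damped by the lag autocorrelation estimated from a long control series. The code uses a synthetic near-random-walk series as a stand-in for the 500 year control run; everything here is illustrative.

    ```python
    import numpy as np

    def damped_persistence_forecast(control_series, x0, max_lag):
        """Forecast x at lead times 1..max_lag as mean + r(lag) * (x0 - mean),
        where r(lag) is the lag autocorrelation of the control series."""
        s = np.asarray(control_series, dtype=float)
        mean = s.mean()
        anom = s - mean
        forecasts = []
        for lag in range(1, max_lag + 1):
            r = np.corrcoef(anom[:-lag], anom[lag:])[0, 1]
            forecasts.append(mean + r * (x0 - mean))
        return np.array(forecasts)

    rng = np.random.default_rng(0)
    control = 0.1 * np.cumsum(rng.normal(size=500))   # near-random-walk "fresh water anomaly"
    print(damped_persistence_forecast(control, x0=control[-1], max_lag=10))
    ```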

  5. Microbial Successions Are Associated with Changes in Chemical Profiles of a Model Refrigerated Fresh Pork Sausage during an 80-Day Shelf Life Study

    PubMed Central

    David, Jairus R. D.; Gilbreth, Stefanie Evans; Smith, Gordon; Nietfeldt, Joseph; Legge, Ryan; Kim, Jaehyoung; Sinha, Rohita; Duncan, Christopher E.; Ma, Junjie; Singh, Indarpal

    2014-01-01

    Fresh pork sausage is produced without a microbial kill step and therefore chilled or frozen to control microbial growth. In this report, the microbiota in a chilled fresh pork sausage model produced with or without an antimicrobial combination of sodium lactate and sodium diacetate was studied using a combination of traditional microbiological methods and deep pyrosequencing of 16S rRNA gene amplicons. In the untreated system, microbial populations rose from 10^2 to 10^6 CFU/g within 15 days of storage at 4°C, peaking at nearly 10^8 CFU/g by day 30. Pyrosequencing revealed a complex community at day 0, with taxa belonging to the Bacilli, Gammaproteobacteria, Betaproteobacteria, Actinobacteria, Bacteroidetes, and Clostridia. During storage at 4°C, the untreated system displayed a complex succession, with species of Weissella and Leuconostoc that dominate the product at day 0 being displaced by species of Pseudomonas (P. lini and P. psychrophila) within 15 days. By day 30, a second wave of taxa (Lactobacillus graminis, Carnobacterium divergens, Buttiauxella brennerae, Yersinia mollaretti, and a taxon of Serratia) dominated the population, and this succession coincided with significant chemical changes in the matrix. Treatment with lactate-diacetate altered the dynamics dramatically, yielding a monophasic growth curve of a single species of Lactobacillus (L. graminis), followed by a uniform selective die-off of the majority of species in the population. Of the six species of Lactobacillus that were routinely detected, L. graminis became the dominant member in all samples, and its origins were traced to the spice blend used in the formulation. PMID:24928886

  6. Microbial successions are associated with changes in chemical profiles of a model refrigerated fresh pork sausage during an 80-day shelf life study.

    PubMed

    Benson, Andrew K; David, Jairus R D; Gilbreth, Stefanie Evans; Smith, Gordon; Nietfeldt, Joseph; Legge, Ryan; Kim, Jaehyoung; Sinha, Rohita; Duncan, Christopher E; Ma, Junjie; Singh, Indarpal

    2014-09-01

    Fresh pork sausage is produced without a microbial kill step and therefore chilled or frozen to control microbial growth. In this report, the microbiota in a chilled fresh pork sausage model produced with or without an antimicrobial combination of sodium lactate and sodium diacetate was studied using a combination of traditional microbiological methods and deep pyrosequencing of 16S rRNA gene amplicons. In the untreated system, microbial populations rose from 10(2) to 10(6) CFU/g within 15 days of storage at 4°C, peaking at nearly 10(8) CFU/g by day 30. Pyrosequencing revealed a complex community at day 0, with taxa belonging to the Bacilli, Gammaproteobacteria, Betaproteobacteria, Actinobacteria, Bacteroidetes, and Clostridia. During storage at 4°C, the untreated system displayed a complex succession, with species of Weissella and Leuconostoc that dominate the product at day 0 being displaced by species of Pseudomonas (P. lini and P. psychrophila) within 15 days. By day 30, a second wave of taxa (Lactobacillus graminis, Carnobacterium divergens, Buttiauxella brennerae, Yersinia mollaretti, and a taxon of Serratia) dominated the population, and this succession coincided with significant chemical changes in the matrix. Treatment with lactate-diacetate altered the dynamics dramatically, yielding a monophasic growth curve of a single species of Lactobacillus (L. graminis), followed by a uniform selective die-off of the majority of species in the population. Of the six species of Lactobacillus that were routinely detected, L. graminis became the dominant member in all samples, and its origins were traced to the spice blend used in the formulation.

  7. Modeling of the Princeton Raman Amplification Experiment

    NASA Astrophysics Data System (ADS)

    Hur, Min Sup; Lindberg, Ryan; Wurtele, Jonathan; Cheng, W.; Avitzour, Y.; Ping, Y.; Suckewer, S.; Fisch, N. J.

    2004-11-01

    We numerically model the Princeton experiments on Raman amplification [1] using averaged-PIC (aPIC) [2] and 3-wave codes. Recently, a series of experiments has been performed at Princeton University [3]. Amplification factors up to 500 in intensity were obtained using a subpicosecond pulse propagating in a 3 mm plasma. The plasma was created using a gas jet. The intensity of the amplified pulse exceeds the intensity of the pump pulse, indicating that the process of Raman amplification is in the nonlinear regime. Comparisons are made between 3-wave models, kinetic models and the experimental data. It is found that better agreement is achieved when kinetic effects are included. The influence of a range of potentially deleterious phenomena such as density inhomogeneity, particle trapping, ionization, and pump depletion by noise amplification will also be examined. References: [1] V.M. Malkin, G. Shvets, and N.J. Fisch, Phys. Rev. Lett. vol. 82, 1999. [2] M.S. Hur, G. Penn, J.S. Wurtele, R. Lindberg, to appear in Phys. Plasmas. [3] Y. Ping, et al., Phys. Rev. E vol. 66, 2002; Phys. Rev. E vol. 67, 2003.
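
    For orientation, the sketch below integrates a spatially averaged (0-D) toy version of the three-wave Raman backscatter system: a pump transfers action to the seed through a damped plasma wave. This is only an illustrative reduction, not the averaged-PIC or 3-wave codes used in the paper, and all parameters are arbitrary normalized values.

    ```python
    from scipy.integrate import solve_ivp

    def three_wave(t, y, gamma, nu):
        """0-D three-wave coupling: pump a_p, seed a_s, plasma wave a_f (damped at rate nu)."""
        a_p, a_s, a_f = y
        return [-gamma * a_s * a_f,
                 gamma * a_p * a_f,
                 gamma * a_p * a_s - nu * a_f]

    # Strong pump, weak seed, noise-level plasma wave (normalized units)
    sol = solve_ivp(three_wave, (0.0, 20.0), [1.0, 1e-3, 1e-3], args=(1.0, 0.1))
    gain = sol.y[1, -1] ** 2 / 1e-6
    print(f"seed intensity amplification ~ {gain:.3g}")
    ```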

  8. Predictive modeling for growth of non- and cold-adapted Listeria Monocytogenes on fresh-cut cantaloupe at different storage temperatures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this study was to determine the growth kinetics of Listeria monocytogenes, with and without cold-adaption, on fresh-cut cantaloupe under different storage temperatures. Fresh-cut samples, spot inoculated with a four-strain cocktail of L. monocytogenes (about 3.2 log CFU/g), were exposed t...

  9. Experiments on a hurricane windmill model

    NASA Astrophysics Data System (ADS)

    Bolie, V. W.

    1981-08-01

    Airflow tests of a vertical-axis wind turbine model were performed to establish accurate endpoints for the curve of trans-rotor pressure vs trans-rotor flow rate. Calibrated free-field flow tests at wind speeds up to 25 m/s and corroborating experiments using tufted-yarn flow tracers were performed, with the latter showing smoother flows at higher wind speeds. The rotor was also replaced by a close-fitting weighted solid disk to measure the maximum available trans-orifice pressure drop. Results indicate that the vertical-axis turbines are superior in terms of simplicity, reduced TV interference, and safety (owing to the enclosed rotor blades), while producing the same amount of power as conventional windmills. Economically, however, the design would not be competitive in terms of dollars/kW/yr.

  10. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  11. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D^2/D_0^2 > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
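
    The D-square law invoked above, D(t)^2 = D_0^2 - K t, can be fitted to drop-diameter data with a simple linear regression; the sketch below does so on synthetic data (the numbers are not the nAl/ethanol measurements).

    ```python
    import numpy as np

    def d2_law_fit(time_s, diameter_m):
        """Fit D^2 = D0^2 - K*t and return (K, D0^2), with K in m^2/s."""
        slope, intercept = np.polyfit(np.asarray(time_s, float),
                                      np.asarray(diameter_m, float) ** 2, 1)
        return -slope, intercept

    t = np.linspace(0.0, 100.0, 11)            # s
    d = np.sqrt(1.0e-6 - 2.0e-9 * t)           # synthetic drop obeying the law
    K, d0_sq = d2_law_fit(t, d)
    print(f"K = {K:.2e} m^2/s, D0 = {np.sqrt(d0_sq) * 1e3:.2f} mm")
    ```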

  12. Fresh embryo donation for human embryonic stem cell (hESC) research: the experiences and values of IVF couples asked to be embryo donors

    PubMed Central

    Haimes, E.; Taylor, K.

    2009-01-01

    BACKGROUND This article reports on an investigation of the views of IVF couples asked to donate fresh embryos for research and contributes to the debates on: the acceptability of human embryonic stem cell (hESC) research, the moral status of the human embryo and embryo donation for research. METHODS A hypothesis-generating design was followed. All IVF couples in one UK clinic who were asked to donate embryos in 1 year were contacted 6 weeks after their pregnancy result. Forty four in-depth interviews were conducted. RESULTS Interviewees were preoccupied with IVF treatment and the request to donate was a secondary consideration. They used a complex and dynamic system of embryo classification. Initially, all embryos were important but then their focus shifted to those that had most potential to produce a baby. At that point, ‘other’ embryos were less important, though interviewees later realised that they did not know what happened to them. Guessing that these embryos went to research, interviewees preferred not to contemplate what that might entail. The embryos that caused interviewees most concern were good quality embryos that might have produced a baby but went to research instead. ‘The’ embryo, the morally laden, but abstract, entity, did not play a central role in their decision-making. CONCLUSIONS This study, despite missing those who refuse to donate embryos, suggests that debates on embryo donation for hESC research should include the views of embryo donors and should consider the social, as well as the moral, status of the human embryo. PMID:19502616

  13. Vacuum membrane distillation: Experiments and modeling

    SciTech Connect

    Bandini, S.; Saavedra, A.; Sarti, G.C.

    1997-02-01

    Vacuum membrane distillation is a membrane-based separation process considered here to remove volatile organic compounds from aqueous streams. Microporous hydrophobic membranes are used to separate the aqueous stream from a gas phase kept under vacuum. The evaporation of the liquid stream takes place on one side of the membrane, and mass transfer occurs through the vapor phase inside the membrane. The influence of operating conditions on process performance is investigated extensively for dilute binary aqueous mixtures containing acetone, ethanol, isopropanol, ethyl acetate, methyl acetate, or methyl tert-butyl ether. Temperature, composition, and flow rate of the liquid feed, and the pressure downstream of the membrane are the main operating variables. Among these, the vacuum-side pressure is the major design factor since it greatly affects the separation efficiency. A mathematical model of the process is developed, and the results are compared with the experiments. The model is finally used to predict the best operating conditions for the removal of benzene from wastewater.
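
    The abstract does not state which mass-transfer description the model uses; a Knudsen-diffusion formulation is a common assumption for vacuum membrane distillation, and the sketch below evaluates that flux expression with illustrative membrane parameters.

    ```python
    import numpy as np

    def knudsen_flux(eps, r_pore, tau, delta, M, T, dp):
        """Assumed Knudsen-diffusion vapor flux across a microporous membrane:
            J = (2/3) * (eps * r_pore / (tau * delta)) * sqrt(8*M / (pi*R*T)) * dp
        eps (-), r_pore (m), tau (-), delta (m), M (kg/mol), T (K), dp (Pa);
        returns flux in kg m^-2 s^-1."""
        R = 8.314  # J/(mol K)
        return (2.0 / 3.0) * (eps * r_pore / (tau * delta)) \
               * np.sqrt(8.0 * M / (np.pi * R * T)) * dp

    # Illustrative values for water vapor across a PTFE-like membrane
    print(knudsen_flux(eps=0.7, r_pore=0.1e-6, tau=2.0, delta=100e-6,
                       M=0.018, T=313.0, dp=5000.0))
    ```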

  14. Ringing load models verified against experiments

    SciTech Connect

    Krokstad, J.R.; Stansberg, C.T.

    1995-12-31

    What is believed to be the main reason for discrepancies between measured and simulated loads in previous studies has been assessed. The focus has been on the balance between second- and third-order load components in relation to what is called the "fat body" load correction. It is important to understand that the use of Morison strip theory in combination with second-order wave theory gives rise to second- as well as third-order components in the horizontal force. A proper balance between second- and third-order components in the horizontal force is regarded as the most central requirement for a sufficiently accurate ringing load model in irregular seas. It is also verified that simulated second-order components are largely overpredicted in both regular and irregular seas. Non-slender diffraction effects are important to incorporate in the FNV formulation in order to reduce the simulated second-order component and to match experiments more closely. A sufficiently accurate ringing simulation model using simplified methods is shown to be within close reach. Some further development and experimental verification must however be performed in order to take non-slender effects into account.

  15. Magnitudes and sources of dissolved inorganic phosphorus inputs to surface fresh waters and the coastal zone: A new global model

    NASA Astrophysics Data System (ADS)

    Harrison, John A.; Bouwman, A. F.; Mayorga, Emilio; Seitzinger, Sybil

    2010-03-01

    As a limiting nutrient in aquatic systems, phosphorus (P) plays an important role in controlling freshwater and coastal primary productivity and ecosystem dynamics, increasing the frequency and severity of harmful and nuisance algal blooms and hypoxia, and contributing to loss of biodiversity. Although dissolved inorganic P (DIP) often constitutes a relatively small fraction of the total P pool in aquatic systems, its bioavailability makes it an important determinant of ecosystem function. Here we describe, apply, evaluate, and interpret an enhanced version of the Global Nutrient Export from Watersheds (NEWS)-DIP model: NEWS-DIP-Half Degree (NEWS-DIP-HD). Improvements of NEWS-DIP-HD over the original NEWS-DIP model include (1) the preservation of the spatial resolution of input data sets at the 0.5 degree level and (2) explicit downstream routing of water and DIP from half-degree cell to half-degree cell using a global flow-direction representation. NEWS-DIP explains 78% and 62% of the variability in per-basin DIP export (DIP load) for U.S. Geological Survey (USGS) and global stations, respectively, similar to the original NEWS-DIP model and somewhat more than other global models of DIP loading and export. NEWS-DIP-HD output suggests that hot spots for DIP loading tend to occur in urban centers, with the highest per-area rate of DIP loading predicted for the half-degree grid cell containing Tokyo (6366 kg P km^-2 yr^-1). Furthermore, cities with populations >100,000 accounted for 35% of global surface water DIP loading while covering less than 2% of global land surface area. NEWS-DIP-HD also indicates that humans supply more DIP to surface waters than natural weathering over the majority (53%) of the Earth's land surface, with a much larger area dominated by DIP point sources than nonpoint sources (52% versus 1% of the global land surface, respectively). NEWS-DIP-HD also suggests that while humans had increased DIP input to surface waters more than fourfold globally
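
    The cell-to-cell downstream routing added in NEWS-DIP-HD can be illustrated with a minimal sketch: each cell's locally generated DIP load is pushed along the flow-direction network so that every cell accumulates the flux from its upstream area. Cell ids, loads and the network below are hypothetical.

    ```python
    def route_downstream(local_load, downstream):
        """Accumulate local DIP loads along a flow-direction network.

        local_load : dict cell_id -> DIP generated in that cell (kg P / yr)
        downstream : dict cell_id -> next cell downstream (None at the river mouth)
        Returns dict cell_id -> total DIP flux passing through the cell.
        """
        flux = {cell: 0.0 for cell in local_load}
        for cell, load in local_load.items():
            node = cell
            while node is not None:      # walk the load down to the mouth
                flux[node] += load
                node = downstream.get(node)
        return flux

    # Tiny 4-cell example: 1 -> 2 -> 4 and 3 -> 4, with cell 4 at the coast
    loads = {1: 10.0, 2: 5.0, 3: 20.0, 4: 1.0}
    network = {1: 2, 2: 4, 3: 4, 4: None}
    print(route_downstream(loads, network))   # {1: 10.0, 2: 15.0, 3: 20.0, 4: 36.0}
    ```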

  16. Wake Vortex Encounter Model Validation Experiments

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan; Brandon, Jay; Greene, George C.; Rivers, Robert; Shah, Gautam; Stewart, Eric; Stuever, Robert; Rossow, Vernon

    1997-01-01

    The goal of this research is to establish a database to validate/calibrate wake encounter analysis methods for fleet-wide application and to measure/document atmospheric effects on wake decay. Two kinds of experiments, wind tunnel experiments and flight experiments, are performed. This paper discusses the different types of tests and compares their wake velocity measurements.

  17. Braiding DNA: Experiments, Simulations, and Models

    PubMed Central

    Charvin, G.; Vologodskii, A.; Bensimon, D.; Croquette, V.

    2005-01-01

    DNA encounters topological problems in vivo because of its extended double-helical structure. As a consequence, the semiconservative mechanism of DNA replication leads to the formation of DNA braids or catenanes, which have to be removed for the completion of cell division. To get a better understanding of these structures, we have studied the elastic behavior of two braided nicked DNA molecules using a magnetic trap apparatus. The experimental data let us identify and characterize three regimes of braiding: a slightly twisted regime before the formation of the first crossing, followed by genuine braids which, at large braiding number, buckle to form plectonemes. Two different approaches support and quantify this characterization of the data. First, Monte Carlo (MC) simulations of braided DNAs yield a full description of the molecules' behavior and their buckling transition. Second, modeling the braids as a twisted swing provides a good approximation of the elastic response of the molecules as they are intertwined. Comparisons of the experiments and the MC simulations with this analytical model allow for a measurement of the diameter of the braids and its dependence upon entropic and electrostatic repulsive interactions. The MC simulations allow for an estimate of the effective torsional constant of the braids (at a stretching force F = 2 pN): Cb ∼ 48 nm (as compared with C ∼100 nm for a single unnicked DNA). Finally, at low salt concentrations and for sufficiently large number of braids, the diameter of the braided molecules is observed to collapse to that of double-stranded DNA. We suggest that this collapse is due to the partial melting and fraying of the two nicked molecules and the subsequent right- or left-handed intertwining of the stretched single strands. PMID:15778439

  18. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  19. Developing a Model using High School Students for Restoring, Monitoring and Conducting Research in Fresh Water Wetlands

    NASA Astrophysics Data System (ADS)

    Blueford, J. R.

    2010-12-01

    Tule Ponds at Tyson Lagoon in eastern San Francisco Bay is one of the largest sag ponds created by the Hayward Fault that has not been destroyed by urbanization. In the 1990's Alameda County Flood Control and Water Conservation District designed a constructed wetland to naturally filter stormwater before it entered Tyson Lagoon on its way to the San Francisco Bay. The Math Science Nucleus, a nonprofit organization, manages the facility and involves high school students through community service, service learning, and research. Students do a variety of tasks from landscaping to scientific monitoring. Through contracts and grants, we create different levels of competency in which the students can participate. Engineers and scientists from the two agencies involved create tasks that need to be completed for successful restoration. Every year the students work on different components of restoration. A group of select student interns (usually juniors and seniors) collects and records the data during the year. Some of these students are part of a paid internship to ensure their regular attendance. Every year the students compile and discuss with scientists from the Math Science Nucleus what the data set might mean and how identified problems can be addressed. The data collected help determine other, longer-term projects. This presentation will go over the journey of the last 10 years of this very successful program and will outline the steps necessary to maintain a restoration project. It will also outline the different groups that do larger projects (scouts) and liaisons with schools that allow teachers to assign projects at our facility. The validity of the data obtained by students and how we standardize our data collection from soil analysis, water chemistry, monitoring faults, and biological observations will be discussed. This joint agency model of cooperation to provide high school students with a real research opportunity has benefits that allow the program to

  20. What Is the True Color of Fresh Meat? A Biophysical Undergraduate Laboratory Experiment Investigating the Effects of Ligand Binding on Myoglobin Using Optical, EPR, and NMR Spectroscopy

    ERIC Educational Resources Information Center

    Linenberger, Kimberly; Bretz, Stacey Lowery; Crowder, Michael W.; McCarrick, Robert; Lorigan, Gary A.; Tierney, David L.

    2011-01-01

    With an increased focus on integrated upper-level laboratories, we present an experiment integrating concepts from inorganic, biological, and physical chemistry content areas. Students investigate the effects of ligand strength on the spectroscopic properties of the heme center in myoglobin using UV-vis, ¹H NMR, and EPR…

  1. Reactive transport modeling to quantify trace element release into fresh groundwater in case of CO2 leak from deep geological storage.

    NASA Astrophysics Data System (ADS)

    Lions, J.; Jakymiw, C.; Devau, N.; Barsotti, V.; Humez, P.

    2014-12-01

    Geological storage of CO2 in deep saline aquifers is one of the options considered for the mitigation of CO2 emissions into the atmosphere. A deep geological CO2 storage site is not expected to leak, but potential impacts on groundwater have to be studied. A better understanding of how a leak could affect groundwater quality, aquifer minerals and trace elements is necessary to characterize a future storage site. Moreover, monitoring and remediation solutions have to be evaluated before storage operations begin. As part of the ANR project CIPRES, we present reactive transport modeling work. In a 3D model built with ToughReact v.2, we simulate different CO2 gas leakage scenarios in a confined aquifer. The model is based on the Albian aquifer, a strategic water resource, and takes into account the groundwater and rock chemistry of the Albian green sand layer (quartz, glauconite, kaolinite) at 700 m depth. The geochemical model was elaborated from experimental data. The aquifer is represented by a mesh of roughly 20,000 cells forming a layer 60 m thick and 500 m across, with cells refined near the leakage point to capture local phenomena (secondary precipitation, sorption/desorption...). The chemical model takes into account kinetics for mineral dissolution, ion exchange and surface complexation. We highlight the importance of sorption processes on trace element transport (As, Zn and Ni) in fresh groundwater. Moreover, we distinguish different geochemical behaviors (CO2 plume shape, secondary precipitation, desorption...) according to the horizontal flow rates imposed by the regional hydrodynamic gradient. Understanding how geochemical processes and regional flows influence water chemistry makes it possible to adapt the measurement, monitoring and verification plan and remediation works to a given location in case of a leak.
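
    For readers unfamiliar with the sorption processes highlighted above, the following minimal sketch (not part of the cited model) shows the standard linear-sorption retardation relation R = 1 + (ρb/θ)·Kd and its effect on trace-element transport velocities; all parameter values are hypothetical placeholders.

```python
# Illustrative sketch: effect of linear, reversible sorption (Kd approach) on the
# average transport velocity of a trace element moving with groundwater.
# R = 1 + (rho_b / theta) * Kd; v_solute = v_water / R. Values are hypothetical.

def retardation_factor(kd_L_per_kg, bulk_density_kg_per_L=1.6, porosity=0.35):
    """Retardation factor for linear, reversible sorption."""
    return 1.0 + (bulk_density_kg_per_L / porosity) * kd_L_per_kg

def solute_velocity(groundwater_velocity_m_per_yr, kd_L_per_kg):
    """Average transport velocity of a sorbing trace element."""
    return groundwater_velocity_m_per_yr / retardation_factor(kd_L_per_kg)

if __name__ == "__main__":
    v_gw = 5.0  # m/yr, hypothetical regional groundwater velocity
    for element, kd in [("As", 2.0), ("Zn", 10.0), ("Ni", 5.0)]:  # L/kg, assumed
        print(f"{element}: R = {retardation_factor(kd):.1f}, "
              f"v = {solute_velocity(v_gw, kd):.3f} m/yr")
```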

  2. Fresh frozen plasma resuscitation provides neuroprotection compared to normal saline in a large animal model of traumatic brain injury and polytrauma.

    PubMed

    Imam, Ayesha; Jin, Guang; Sillesen, Martin; Dekker, Simone E; Bambakidis, Ted; Hwabejire, John O; Jepsen, Cecilie H; Halaweish, Ihab; Alam, Hasan B

    2015-03-01

    We have previously shown that early treatment with fresh frozen plasma (FFP) is neuroprotective in a swine model of hemorrhagic shock (HS) and traumatic brain injury (TBI). However, it remains unknown whether this strategy would be beneficial in a more clinical polytrauma model. Yorkshire swine (42-50 kg) were instrumented to measure hemodynamic parameters, brain oxygenation, and intracranial pressure (ICP) and subjected to computer-controlled TBI and multi-system trauma (rib fracture, soft-tissue damage, and liver injury) as well as combined free and controlled hemorrhage (40% blood volume). After 2 h of shock (mean arterial pressure, 30-35 mm Hg), animals were resuscitated with normal saline (NS; 3×volume) or FFP (1×volume; n=6/group). Six hours postresuscitation, brains were harvested and lesion size and swelling were evaluated. Levels of endothelial-derived vasodilator endothelial nitric oxide synthase (eNOS) and vasoconstrictor endothelin-1 (ET-1) were also measured. FFP resuscitation was associated with reduced brain lesion size (1005.8 vs. 2081.9 mm³; p=0.01) as well as swelling (11.5% vs. 19.4%; p=0.02). Further, FFP-resuscitated animals had higher brain oxygenation as well as cerebral perfusion pressures. Levels of cerebral eNOS were higher in the FFP-treated group (852.9 vs. 816.4 ng/mL; p=0.03), but no differences in brain levels of ET-1 were observed. Early administration of FFP is neuroprotective in a complex, large animal model of polytrauma, hemorrhage, and TBI. This is associated with a favorable brain oxygenation and cerebral perfusion pressure profile as well as higher levels of endothelial-derived vasodilator eNOS, compared to normal saline resuscitation.

  3. Development of FT-NIR models for the simultaneous estimation of chlorophyll and nitrogen content in fresh apple (Malus domestica) leaves.

    PubMed

    Tamburini, Elena; Ferrari, Giuseppe; Marchetti, Maria Gabriella; Pedrini, Paola; Ferro, Sergio

    2015-01-01

    Agricultural practices determine the level of food production and, to great extent, the state of the global environment. During the last decades, the indiscriminate recourse to fertilizers as well as the nitrogen losses from land application have been recognized as serious issues of modern agriculture, globally contributing to nitrate pollution. The development of a reliable Near-Infra-Red Spectroscopy (NIRS)-based method, for the simultaneous monitoring of nitrogen and chlorophyll in fresh apple (Malus domestica) leaves, was investigated on a set of 133 samples, with the aim of estimating the nutritional and physiological status of trees, in real time, cheaply and non-destructively. By means of a FT (Fourier Transform)-NIR instrument, Partial Least Squares (PLS) regression models were developed, spanning a concentration range of 0.577%-0.817% for the total Kjeldahl nitrogen (TKN) content (R2 = 0.983; SEC = 0.012; SEP = 0.028), and of 1.534-2.372 mg/g for the total chlorophyll content (R2 = 0.941; SEC = 0.132; SEP = 0.162). Chlorophyll-a and chlorophyll-b contents were also evaluated (R2 = 0.913; SEC = 0.076; SEP = 0.101 and R2 = 0.899; SEC = 0.059; SEP = 0.101, respectively). All calibration models were validated by means of 47 independent samples. The NIR approach allows a rapid evaluation of the nitrogen and chlorophyll contents, and may represent a useful tool for determining nutritional and physiological status of plants, in order to allow a correction of nutrition programs during the season. PMID:25629703
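
    The following minimal sketch illustrates a PLS calibration/validation workflow of the kind reported above, using synthetic stand-in spectra because the FT-NIR data are not available here; the number of latent variables and the resulting R2/SEC/SEP values are purely illustrative, not those of the study.

```python
# Minimal PLS calibration sketch with synthetic "spectra" standing in for FT-NIR data.
# Sample counts (133 calibration, 47 validation) follow the abstract; everything else
# is an assumption for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_cal, n_val, n_wavelengths = 133, 47, 200
X_cal = rng.normal(size=(n_cal, n_wavelengths))        # stand-in spectra
coefs = rng.normal(size=n_wavelengths) * 0.01
y_cal = X_cal @ coefs + 0.70 + rng.normal(scale=0.01, size=n_cal)   # e.g. % TKN
X_val = rng.normal(size=(n_val, n_wavelengths))
y_val = X_val @ coefs + 0.70 + rng.normal(scale=0.01, size=n_val)

pls = PLSRegression(n_components=8)                    # hypothetical number of latent variables
pls.fit(X_cal, y_cal)

y_cal_hat = pls.predict(X_cal).ravel()
y_val_hat = pls.predict(X_val).ravel()
print("R2  =", r2_score(y_cal, y_cal_hat))
print("SEC =", np.sqrt(mean_squared_error(y_cal, y_cal_hat)))   # calibration error
print("SEP =", np.sqrt(mean_squared_error(y_val, y_val_hat)))   # error on independent validation samples
```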

  4. Development of FT-NIR Models for the Simultaneous Estimation of Chlorophyll and Nitrogen Content in Fresh Apple (Malus Domestica) Leaves

    PubMed Central

    Tamburini, Elena; Ferrari, Giuseppe; Marchetti, Maria Gabriella; Pedrini, Paola; Ferro, Sergio

    2015-01-01

    Agricultural practices determine the level of food production and, to great extent, the state of the global environment. During the last decades, the indiscriminate recourse to fertilizers as well as the nitrogen losses from land application have been recognized as serious issues of modern agriculture, globally contributing to nitrate pollution. The development of a reliable Near-Infra-Red Spectroscopy (NIRS)-based method, for the simultaneous monitoring of nitrogen and chlorophyll in fresh apple (Malus domestica) leaves, was investigated on a set of 133 samples, with the aim of estimating the nutritional and physiological status of trees, in real time, cheaply and non-destructively. By means of a FT (Fourier Transform)-NIR instrument, Partial Least Squares (PLS) regression models were developed, spanning a concentration range of 0.577%–0.817% for the total Kjeldahl nitrogen (TKN) content (R2 = 0.983; SEC = 0.012; SEP = 0.028), and of 1.534–2.372 mg/g for the total chlorophyll content (R2 = 0.941; SEC = 0.132; SEP = 0.162). Chlorophyll-a and chlorophyll-b contents were also evaluated (R2 = 0.913; SEC = 0.076; SEP = 0.101 and R2 = 0.899; SEC = 0.059; SEP = 0.101, respectively). All calibration models were validated by means of 47 independent samples. The NIR approach allows a rapid evaluation of the nitrogen and chlorophyll contents, and may represent a useful tool for determining nutritional and physiological status of plants, in order to allow a correction of nutrition programs during the season. PMID:25629703

  5. Development of FT-NIR models for the simultaneous estimation of chlorophyll and nitrogen content in fresh apple (Malus domestica) leaves.

    PubMed

    Tamburini, Elena; Ferrari, Giuseppe; Marchetti, Maria Gabriella; Pedrini, Paola; Ferro, Sergio

    2015-01-26

    Agricultural practices determine the level of food production and, to great extent, the state of the global environment. During the last decades, the indiscriminate recourse to fertilizers as well as the nitrogen losses from land application have been recognized as serious issues of modern agriculture, globally contributing to nitrate pollution. The development of a reliable Near-Infra-Red Spectroscopy (NIRS)-based method, for the simultaneous monitoring of nitrogen and chlorophyll in fresh apple (Malus domestica) leaves, was investigated on a set of 133 samples, with the aim of estimating the nutritional and physiological status of trees, in real time, cheaply and non-destructively. By means of a FT (Fourier Transform)-NIR instrument, Partial Least Squares (PLS) regression models were developed, spanning a concentration range of 0.577%-0.817% for the total Kjeldahl nitrogen (TKN) content (R2 = 0.983; SEC = 0.012; SEP = 0.028), and of 1.534-2.372 mg/g for the total chlorophyll content (R2 = 0.941; SEC = 0.132; SEP = 0.162). Chlorophyll-a and chlorophyll-b contents were also evaluated (R2 = 0.913; SEC = 0.076; SEP = 0.101 and R2 = 0.899; SEC = 0.059; SEP = 0.101, respectively). All calibration models were validated by means of 47 independent samples. The NIR approach allows a rapid evaluation of the nitrogen and chlorophyll contents, and may represent a useful tool for determining nutritional and physiological status of plants, in order to allow a correction of nutrition programs during the season.

  6. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Katsouleas, T.; Oz, E.; Barnes, C.D.; Decker, F.J.; Hogan, M.J.; Iverson, R.; Krejcik, P.; O'Connell, C.; Clayton, C.E.; Huang, C.; Johnson, D.K.; Joshi, C.; Lu, W.; Marsh, K.A.; Mori, W.B.; Tsung, F.; Zhou, M.M.; Fonseca, R.A.

    2004-12-07

    In current plasma-based accelerator experiments, very short bunches (100-150 µm for E164 and 10-20 µm for the E164X experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code OSIRIS is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  7. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Barnes, C.D.; Clayton, C.E.; Decker, F.J.; Fonseca, R.A.; Huang, C.; Hogan, M.J.; Iverson, R.; Johnson, D.K.; Joshi, C.; Katsouleas, T.; Krejcik, P.; Lu, W.; Marsh, K.A.; Mori, W.B.; O'Connell, C.; Oz, E.; Tsung, F.; Zhou, M.M.; /Southern California U. /UCLA /SLAC /Lisbon, IST

    2005-06-28

    In current plasma-based accelerator experiments, very short bunches (100-150 µm for E164 [1] and 10-20 µm for the E164X [2] experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma [3,4]. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code OSIRIS [5] is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  8. Modelling an IHE experiment with a suite of DSD models

    NASA Astrophysics Data System (ADS)

    Hodgson, A. N.

    2014-05-01

    At the 2011 APS conference, Terrones, Burkett and Morris published an experiment primarily designed to allow examination of the propagation of a detonation front in a 3-dimensional charge of PBX9502 insensitive high explosive (IHE). The charge is confined by a cylindrical steel shell, has an elliptical tin liner, and is line-initiated along its length. The detonation wave must propagate around the inner hollow region and converge on the opposite side. The Detonation Shock Dynamics (DSD) model allows for the calculation of detonation propagation in a region of explosive using a selection of material input parameters, amongst which is the D(K) relation that governs how the local detonation velocity varies as a function of wave curvature. In this paper, experimental data are compared to calculations using the 2D DSD and newly-developed 3D DSD codes at AWE with a variety of D(K) relations.
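
    As an aside, the simplest (linear) form of a D(K) relation can be written down in a few lines; the sketch below is only a generic illustration of how the local detonation velocity is reduced by wave curvature, not the PBX9502 calibration used in the AWE codes, and the constants are assumed.

```python
# Illustrative linear D(kappa) relation of the kind used in Detonation Shock Dynamics:
# D_n = D_CJ * (1 - alpha * kappa), with kappa the local wave curvature. Real IHE
# calibrations are nonlinear; D_CJ and alpha below are hypothetical placeholders.
def detonation_velocity(kappa_per_mm, d_cj_mm_per_us=7.7, alpha_mm=0.5):
    """Local normal detonation velocity (mm/us) as a function of curvature (1/mm)."""
    return d_cj_mm_per_us * (1.0 - alpha_mm * kappa_per_mm)

for kappa in (0.0, 0.05, 0.1, 0.2):   # diverging wave: positive curvature slows the front
    print(f"kappa = {kappa:4.2f} 1/mm -> D_n = {detonation_velocity(kappa):.2f} mm/us")
```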

  9. Collaborative Project: Understanding the Chemical Processes that Affect Growth Rates of Freshly Nucleated Particles

    SciTech Connect

    McMurry, Peter; Smith, James

    2015-11-12

    This final technical report describes our research activities that have, as the ultimate goal, the development of a model that explains growth rates of freshly nucleated particles. The research activities, which combine field observations with laboratory experiments, explore the relationship between concentrations of gas-phase species that contribute to growth and the rates at which those species are taken up. We also describe measurements of the chemical composition of freshly nucleated particles in a variety of locales, as well as properties (especially hygroscopicity) that influence their effects on climate.

  10. “What Fresh Hell Is This?” Victims of Intimate Partner Violence Describe Their Experiences of Abuse, Pain, and Depression

    PubMed Central

    Cerulli, Catherine; Poleshuck, Ellen; Raimondi, Christina; Veale, Stephanie; Chin, Nancy

    2012-01-01

    Traditionally, professionals working with intimate partner violence (IPV) survivors view a victim through a disciplinary lens, examining health and safety in isolation. Using focus groups with survivors, this study explored the need to address IPV consequences with an integrated model and begin to understand the interconnectedness between violence, health, and safety. Focus group findings revealed that the inscription of pain on the body serves as a reminder of abuse, in turn triggering emotional and psychological pain and disrupting social relationships. In many cases, the physical abuse had stopped but the abuser was relentless by reminding and retraumatizing the victim repeatedly through shared parenting, prolonged court cases, etc. This increased participants’ exhaustion and frustration, making the act of daily living overwhelming. PMID:23226694

  11. Nuclear reaction modeling, verification experiments, and applications

    SciTech Connect

    Dietrich, F.S.

    1995-10-01

    This presentation summarizes the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  12. Modeling the Classic Meselson and Stahl Experiment.

    ERIC Educational Resources Information Center

    D'Agostino, JoBeth

    2001-01-01

    Points out the importance of molecular models in biology and chemistry. Presents a laboratory activity on DNA. Uses different colored wax strips to represent "heavy" and "light" DNA, cesium chloride for identification of small density differences, and three different liquids with varying densities to model gradient centrifugation. (YDS)

  13. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  14. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a

  15. Modeling the Formation of Language: Embodied Experiments

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names, the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal, and an Event Description Game, which concerns the expression of the role of participants in events through an emergent case grammar. For each experiment, details are provided on how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.

  16. Fresh, Rayed Impact Crater

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-416, 9 July 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a fresh, young meteor impact crater on the martian surface. It is less than 400 meters (less than 400 yards) across. While there is no way to know the exact age of this or any other martian surface feature, the rays are very well preserved. On a planet where wind can modify surface features at the present time, a crater with rayed ejecta patterns must be very young indeed. Despite its apparent youth, the crater could still be many hundreds of thousands, if not several million, of years old. This impact scar is located within the much larger Crommelin Crater, near 5.6oN, 10.0oW. Sunlight illuminates the scene from the left.

  17. Model Experiment of Two-Dimentional Brownian Motion by Microcomputer.

    ERIC Educational Resources Information Center

    Mishima, Nobuhiko; And Others

    1980-01-01

    Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)

  18. Annual modulation experiments, galactic models and WIMPs

    NASA Astrophysics Data System (ADS)

    Hudson, Robert G.

    Our task in the paper is to examine some recent experiments (in the period 1996-2002) bearing on the issue of whether there is dark matter in the universe in the form of neutralino WIMPs (weakly interacting massive particles). Our main focus is an experiment performed by the DAMA group that claims to have found an 'annual modulation signature' for the WIMP. DAMA's result has been hotly contested by two other groups, EDELWEISS and CDMS, and we study the details of the experiments performed by all three groups. Our goal is to investigate the philosophic and sociological implications of this controversy. Particularly, using an innovative theoretical strategy suggested by Copi and Krauss (2003, 'Comparing interaction rate detectors for weakly interacting massive particles with annual modulation detectors', Physical Review D, 67, 103507), we suggest a new way of resolving discordant experimental data, extending a previous analysis by Franklin (2002, Selectivity and Discord, Pittsburgh: University of Pittsburgh Press). In addition, we are in a position to contribute substantively to the debate between realists and constructive empiricists. Finally, from a sociological standpoint, we remark that DAMA's work has been valuable in mobilizing other research teams and providing them with a critical focus.

  19. Modelling an IHE Experiment with a Suite of DSD Models

    NASA Astrophysics Data System (ADS)

    Hodgson, Alexander

    2013-06-01

    At the 2011 APS conference, Terrones, Burkett and Morris published an experiment primarily designed to allow examination of the propagation of a detonation front in a 3-dimensional charge of PBX9502 insensitive high explosive. The charge is confined by a cylindrical steel shell, has an elliptical tin liner, and is line-initiated along its length. The detonation wave must propagate around the inner hollow region and converge on the opposite side. The Detonation Shock Dynamics (DSD) model allows for the calculation of detonation propagation in a region of explosive using a selection of material input parameters, amongst which is the D-K relation that governs how the local detonation velocity varies as a function of wave curvature. In this paper, experimental data are compared to calculations using the newly-developed 3D DSD code at AWE with a variety of D-K relations. The effects of D-K variation through different calibration methods, material lot and initial density are investigated.

  20. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
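
    In the spirit of the approach described (though not taken from the article), one way to generate such "instant experience" is to simulate residual plots from datasets generated under a linear normal model in which the assumptions hold by construction, and compare them with the plot from the observed data; the model and sample size below are illustrative.

```python
# Sketch: a small "lineup" of residual plots. Seven panels show residuals from data
# simulated under the fitted linear normal model (assumptions true by construction);
# the last panel shows the residuals from the observed data. All values are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 60
x = rng.uniform(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=n)        # "observed" data

beta = np.polyfit(x, y, 1)                                # fit the linear normal model
fitted = np.polyval(beta, x)
sigma = np.std(y - fitted, ddof=2)

fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharex=True, sharey=True)
axes = axes.ravel()
for ax in axes[:-1]:                                      # null plots simulated under the model
    y_sim = fitted + rng.normal(scale=sigma, size=n)
    b_sim = np.polyfit(x, y_sim, 1)
    fitted_sim = np.polyval(b_sim, x)
    ax.scatter(fitted_sim, y_sim - fitted_sim, s=10)
    ax.set_title("simulated under model")
axes[-1].scatter(fitted, y - fitted, s=10)                # residual plot from the real data
axes[-1].set_title("observed data")
plt.tight_layout()
plt.show()
```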

  1. Silicon Carbide Derived Carbons: Experiments and Modeling

    SciTech Connect

    Kertesz, Miklos

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies and combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but for larger sizes they tend to show graphitized regions; in fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We identified small vacancy clusters and the physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  2. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  3. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three cases studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  4. Fundamental Rotorcraft Acoustic Modeling from Experiments (FRAME)

    NASA Astrophysics Data System (ADS)

    Greenwood, Eric, II

    2011-12-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  5. [Model experiments on breathing under sand].

    PubMed

    Maxeiner, H; Haenel, F

    1985-01-01

    Remarkable autopsy findings in persons who had suffocated as a result of closure of the mouth and nose by sand (without the body being buried) induced us to investigate some aspects of this situation by means of a simple experiment. A barrel (diameter 36.7 cm) with a mouthpiece in the bottom was filled with sand to a depth of 15, 30, 60, or 90 cm. The subject tried to breathe as long as possible through the sand, while the amount of sand inspired was measured. Pressure and volume of the breath, as well as the O2 and CO2 content, were also measured. A respiratory volume of up to 3 l was possible, even when the depth was 90 cm. After about 1 min in all trials, the subject's shortness of breath forced us to stop the experiment. Measurement of O2 and CO2 concentrations proved that the respiratory volume merely shifted in and out of the sand without gas exchange with atmospheric air, even when the sand depth was only 15 cm. Sand aspiration depended on the moisture of the material: when the sand was dry, it was impossible to avoid aspiration. However, even a water content of only 5% prevented aspiration, although the sand seemed to be nearly dry.

  6. Plasma gun pellet acceleration modeling and experiment

    SciTech Connect

    Kincaid, R.W.; Bourham, M.A.; Gilligan, J.G.

    1996-12-31

    Modifications to the electrothermal plasma gun SIRENS have been completed to allow for acceleration experiments using plastic pellets. Modifications have been implemented to the 1-D, time dependent code ODIN to include pellet friction, momentum, and kinetic energy, with options for variable barrel length. Results from the new version of the code, POSEIDON, compare favorably with experimental data and with results from ODIN. Predicted values show an increasing pellet velocity along the barrel length, reaching a 2 km/s exit velocity. Velocities measured at three locations along the barrel length showed good correlation with predicted values. The code has also been used to investigate the effectiveness of longer pulse lengths on pellet velocity, using simulated ramp-up and ramp-down currents with flat top, and triangular current pulses with early and late peaking. 16 refs., 5 figs.

  7. Global estimates of fresh submarine groundwater discharge

    NASA Astrophysics Data System (ADS)

    Luijendijk, Elco; Gleeson, Tom; Moosdorf, Nils

    2016-04-01

    Fresh submarine groundwater discharge, the flow of fresh groundwater to oceans, may be a significant contributor to the water and chemical budgets of the world's oceans. We present new estimates of the flux of fresh groundwater to the world's oceans. We couple density-dependent numerical simulations of generic models of coastal basins with geospatial databases of hydrogeological parameters and topography to resolve the rate of terrestrially-derived submarine groundwater discharge globally. We compare the model results to a new global compilation of submarine groundwater discharge observations. The results show that terrestrially-derived SGD is highly sensitive to permeability. In most watersheds only a small fraction of groundwater recharge contributes to submarine groundwater discharge, with most recharge instead contributing to terrestrial discharge in the form of baseflow or evapotranspiration. Fresh submarine groundwater discharge is only significant in watersheds that contain highly permeable sediments, such as coarse-grained siliciclastic sediments, karstic carbonates or volcanic deposits. Our estimates of global submarine groundwater discharge are much lower than most previous estimates. However, many tropical and volcanic islands are hotspots of submarine groundwater discharge and solute fluxes towards the oceans. The comparison of model results and data highlights the spatial variability of SGD and the difficulty of scaling up observations.
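
    A back-of-the-envelope Darcy calculation (not the density-dependent coastal model used in the study) already illustrates the strong sensitivity of fresh SGD to hydraulic conductivity noted above; all values below are hypothetical.

```python
# Sketch: Darcy flux integrated over aquifer thickness gives a rough fresh SGD rate per
# metre of coastline. This ignores the saltwater-interface (density) effects included in
# the study's numerical models; values are hypothetical and for illustration only.
def fresh_sgd_per_m_coast(hydraulic_conductivity_m_per_day, gradient, aquifer_thickness_m):
    """Approximate fresh SGD [m^3/day per m of coastline]."""
    return hydraulic_conductivity_m_per_day * gradient * aquifer_thickness_m

for name, K in [("silt", 0.01), ("sand", 10.0), ("karst/volcanic", 500.0)]:  # m/day, assumed
    q = fresh_sgd_per_m_coast(K, gradient=0.001, aquifer_thickness_m=30.0)
    print(f"{name:15s}: ~{q:.3f} m^3/day per m of coastline")
```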

  8. A hydromechanical biomimetic cochlea: experiments and models.

    PubMed

    Chen, Fangyi; Cohen, Howard I; Bifano, Thomas G; Castle, Jason; Fortin, Jeffrey; Kapusta, Christopher; Mountain, David C; Zosuls, Aleks; Hubbard, Allyn E

    2006-01-01

    The construction, measurement, and modeling of an artificial cochlea (ACochlea) are presented in this paper. An artificial basilar membrane (ABM) was made by depositing discrete Cu beams on a piezomembrane substrate. Rather than two fluid channels, as in the mammalian cochlea, a single fluid channel was implemented on one side of the ABM, facilitating the use of a laser to detect the ABM vibration on the other side. Measurements were performed on both the ABM and the ACochlea. The measurement results on the ABM show that the longitudinal coupling on the ABM is very strong. Reduced longitudinal coupling was achieved by cutting the membrane between adjacent beams using a laser. The measured results from the ACochlea with a laser-cut ABM demonstrate cochlear-like features, including traveling waves, sharp high-frequency rolloffs, and place-specific frequency selectivity. Companion computational models of the mechanical devices were formulated and implemented using a circuit simulator. Experimental data were compared with simulation results. The simulation results from the computational models of the ABM and the ACochlea are similar to their experimental counterparts. PMID:16454294

  9. Improved Structure Factors for Modeling XRTS Experiments

    NASA Astrophysics Data System (ADS)

    Stanton, Liam; Murillo, Michael; Benage, John; Graziani, Frank

    2012-10-01

    Characterizing warm dense matter (WDM) has gained renewed interest due to advances in powerful lasers and next generation light sources. Because WDM is strongly coupled and moderately degenerate, we must often rely on simulations, which are necessarily based on ions interacting through a screened potential that must be determined. Given such a potential, ionic radial distribution functions (RDFs) and structure factors (SFs) can be calculated and related to XRTS data and EOS quantities. While many screening models are available, such as the Debye (Yukawa) potential, they are known to over-screen and are unable to capture bound state effects accurately, which have been shown to contribute to both scattering data from XRTS as well as the short-range repulsion in the RDF. Here, we present a model which incorporates an improvement to the screening length in addition to a consistent treatment of the core electrons. This new potential improves the accuracy of both bound state and screening effects without contributing to the computational complexity of Debye-like models. Calculations of ionic RDFs and SFs are compared to experimental data and quantum molecular dynamics simulations for Be, Na, Mg and Al in the WDM and liquid metal regime.
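
    For orientation, the Debye (Yukawa) potential mentioned above has the form V(r) = (Z²e²/4πε₀r)·exp(−r/λ); the sketch below evaluates it using the classical electron Debye length as the screening length. The improved screening length and core-electron treatment proposed in the work are not reproduced here, and the plasma conditions are illustrative.

```python
# Sketch of the screened Coulomb (Yukawa) ion-ion potential and the classical electron
# Debye length. Conditions and the charge state Z are illustrative assumptions only.
import numpy as np
from scipy.constants import e, epsilon_0, k as k_B

def debye_length_m(n_e_per_m3, T_e_kelvin):
    """Classical electron Debye screening length."""
    return np.sqrt(epsilon_0 * k_B * T_e_kelvin / (n_e_per_m3 * e**2))

def yukawa_potential_joule(r_m, Z, screening_length_m):
    """Screened Coulomb interaction between two ions of charge state Z."""
    return (Z**2 * e**2) / (4 * np.pi * epsilon_0 * r_m) * np.exp(-r_m / screening_length_m)

lam = debye_length_m(n_e_per_m3=1e29, T_e_kelvin=1e4)   # warm-dense-matter-like conditions, assumed
r = np.linspace(1e-10, 5e-10, 5)
print("lambda =", lam, "m")
print(yukawa_potential_joule(r, Z=4, screening_length_m=lam))
```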

  10. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular, for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: (1) development of a spectral code for moving boundary problems; and (2) diffusivity measurements on concentrated and supersaturated TGS solutions. Progress made during this seventh half-year period is reported.

  11. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and diffusivity measurements on concentrated and supersaturated TGS solutions. During this eighth half-year period, good progress was made on these tasks.

  12. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1992-01-01

    The development is examined of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.

  13. Recirculating Planar Magnetron Modeling and Experiments

    NASA Astrophysics Data System (ADS)

    Franzi, Matthew; Gilgenbach, Ronald; Hoff, Brad; French, Dave; Lau, Y. Y.

    2011-10-01

    We present simulations and initial experimental results of a new class of crossed field device: Recirculating Planar Magnetrons (RPM). Two geometries of RPM are being explored: 1) Dual planar-magnetrons connected by a recirculating section with axial magnetic field and transverse electric field, and 2) Planar cathode and anode-cavity rings with radial magnetic field and axial electric field. These RPMs have numerous advantages for high power microwave generation by virtue of larger area cathodes and anodes. The axial B-field RPM can be configured in either the conventional or inverted (faster startup) configuration. Two and three-dimensional EM PIC simulations show rapid electron spoke formation and microwave oscillation in pi-mode. Smoothbore prototype axial-B RPM experiments are underway using the MELBA accelerator at parameters of -300 kV, 1-20 kA and pulselengths of 0.5-1 microsecond. Implementation and operation of the first RPM slow wave structure, operating at 1GHz, will be discussed. Research supported by AFOSR, AFRL, L-3 Communications, and Northrop Grumman.

  14. Microbial safety of fresh produce

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The book entitled “Microbial Safety of Fresh Produce”, with 23 chapters, is divided into the following six sections: Microbial contamination of fresh produce; Pre-harvest strategies; Post-harvest interventions; Produce safety during processing and handling; Public, legal, and economic perspectives; and Re...

  15. Indian Consortia Models: FORSA Libraries' Experiences

    NASA Astrophysics Data System (ADS)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With increases in prices of journals, shrinking library budgets and cuts in subscriptions to journals over the years, there has been a big challenge facing Indian library professionals to cope with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries in forming consortia at different levels. The types of consortia identified are generally based on various models evolved in India in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under `Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation; thereby everyone benefits. As such, FORSA has realized four consortia, viz. Nature Online Consortium; Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; Consortium for Scientific American Online Archive (EBSCO); and Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  16. Experiments and Valve Modelling in Thermoacoustic Device

    NASA Astrophysics Data System (ADS)

    Duthil, P.; Baltean Carlès, D.; Bétrancourt, A.; François, M. X.; Yu, Z. B.; Thermeau, J. P.

    2006-04-01

    In a so-called heat-driven thermoacoustic refrigerator, using either a pulse tube or a lumped boost configuration, heat pumping is induced by Stirling-type thermodynamic cycles within the regenerator. The time phase between acoustic pressure and flow rate throughout must then be close to that of a purely progressive wave. The study presented here concerns the experimental characterization of passive elements such as valves, tubes and tanks, which are likely to act on this phase relationship when included in the propagation line of the wave resonator. In order to characterize these elements from the acoustic point of view, systematic measurements of the acoustic field are performed while varying several parameters: mean pressure, oscillation frequency and supplied heat power. The acoustic waves are generated by a thermoacoustic prime mover driving a pulse tube refrigerator. The experimental results are then compared with the solutions obtained from various one-dimensional linear models that include non-linear correction factors. It turns out that for a non-symmetrical valve, and for large dissipative effects, the measurements disagree with the linear modelling, and non-linear behaviour of this particular element is demonstrated.

  17. The OECI model: the CRO Aviano experience.

    PubMed

    Da Pieve, Lucia; Collazzo, Raffaele; Masutti, Monica; De Paoli, Paolo

    2015-01-01

    In 2012, the "Centro di Riferimento Oncologico" (CRO) National Cancer Institute joined the accreditation program of the Organisation of European Cancer Institutes (OECI) and was one of the first institutes in Italy to receive recognition as a Comprehensive Cancer Center. At the end of the project, a strengths, weaknesses, opportunities, and threats (SWOT) analysis aimed at identifying the pros and cons, both for the institute and of the accreditation model in general, was performed. The analysis shows significant strengths, such as the affinity with other improvement systems and current regulations, and the focus on a multidisciplinary approach. The proposed suggestions for improvement concern mainly the structure of the standards and aim to facilitate the assessment, benchmarking, and sharing of best practices. The OECI accreditation model provided a valuable executive tool and a framework in which we can identify several important development projects. An additional impact for our institute is the participation in the project BenchCan, of which the OECI is lead partner. PMID:27096265

  18. Process modelling for space station experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1988-01-01

    The work performed during the first year 1 Oct. 1987 to 30 Sept. 1988 involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied. One involves growth from solution in a closed container, the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area. Work is on schedule and full calculations were underway for some time. Progress was also made in the formulation of the two solution growth models.

  19. FIELD EXPERIMENTS AND MODELING AT CDG AIRPORTS

    NASA Astrophysics Data System (ADS)

    Ramaroson, R.

    2009-12-01

    Richard Ramaroson1,4, Klaus Schaefer2, Stefan Emeis2, Carsten Jahn2, Gregor Schürmann2, Maria Hoffmann2, Mikhael Zatevakhin3, Alexandre Ignatyev3. 1ONERA, Châtillon, France; 4SEAS, Harvard University, Cambridge, USA; 2FZK, Garmisch, Germany; 3FSUE SPbAEP, St Petersburg, Russia. Two-month field campaigns were organized at CDG airport in autumn 2004 and summer 2005. Air quality and ground air-traffic emissions were monitored continuously at terminals and taxi-runways, along with meteorological parameters measured onboard trucks and with a SODAR. This paper analyses the characteristics of commercial engine emissions at airports and their effects on gaseous pollutants and airborne particles, coupled to meteorology. LES model results for PM dispersion coupled to microphysics in the PBL are compared to measurements. Winds and temperature at the surface and their vertical profiles were recorded along with turbulence. SODAR observations show the time development of the mixing-layer depth and turbulent mixing in summer up to 800 m. Active low-level jets and their regional extent were observed and analyzed. PM number and mass size distributions, morphology and chemical content are investigated. Formation of new ultrafine volatile (UFV) particles in the ambient plume downstream of running engines is observed. Soot particles are mostly observed at significant levels at the high power thrusts used at take-off (TO) and on touch-down, whereas at the lower thrusts used at taxi and aprons the UFV PM emissions become higher. Ambient airborne PM1/2.5 is closely correlated with air traffic volume and shows a maximum beside the runways. The PM number distribution at the airport is composed mainly of volatile UFV PM, which is most abundant at the aprons. Ambient PM mass is higher in autumn than in summer. The expected differences between TO and taxi emissions are confirmed for NO, NO2, speciated VOC and CO. NO/NO2 emissions are larger at the runways due to higher power. Reactive VOC and CO are more produced at low powers during idling at

  20. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1994-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 C between 10 and 60 C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to diffusion direction; and increase of the optical path length in the solution cell to 12 mm.

  1. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background: Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results: We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions: We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
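
    A conceptual sketch of the design criterion follows: for each candidate experiment, predictive samples are drawn under the two competing models and the experiment is scored by the Jensen-Shannon divergence between the predictive distributions. For brevity the sketch uses a one-dimensional KDE-based divergence estimate rather than the paper's k-nearest-neighbour estimator for multivariate densities, and the "models" and candidate experiments are toy stand-ins.

```python
# Sketch of an experiment-selection score: Jensen-Shannon divergence between predictive
# samples of two competing models, estimated here with a 1-D Gaussian KDE (the paper
# uses a k-NN estimator for multivariate densities). All data are synthetic toys.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

def jsd_from_samples(a, b, grid_points=512):
    """KDE-based estimate of the Jensen-Shannon divergence (nats) between two sample sets."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_points)
    p = gaussian_kde(a)(grid)
    q = gaussian_kde(b)(grid)
    p /= trapezoid(p, grid)
    q /= trapezoid(q, grid)
    m = 0.5 * (p + q)
    def kl(x, y):
        return trapezoid(x * np.log(np.where(x > 0, x / y, 1.0)), grid)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
candidate_experiments = {"low dose": (0.10, 0.12), "high dose": (0.40, 0.90)}  # toy predictions
for name, (mu1, mu2) in candidate_experiments.items():
    pred_model_1 = rng.normal(mu1, 0.1, 2000)   # predictive samples under model 1
    pred_model_2 = rng.normal(mu2, 0.1, 2000)   # predictive samples under model 2
    print(f"{name:9s}: JS divergence ~ {jsd_from_samples(pred_model_1, pred_model_2):.3f}")
```

    The candidate experiment with the larger divergence is the one expected to discriminate best between the competing models.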

  2. A Community Mentoring Model for STEM Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  3. Underwater Blast Experiments and Modeling for Shock Mitigation

    SciTech Connect

    Glascoe, L; McMichael, L; Vandersall, K; Margraf, J

    2010-03-07

    A simple but novel mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially-submerged structure is evaluated using scaled aquarium experiments and numerical modeling. Scaled, water tamped explosive experiments were performed using three gallon aquariums. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept were found to effectively dampen the explosive response of the aluminum plate and reduce the final displacement at plate center by approximately half. The finite element model used for the initial experimental design compares very well to the experimental DIC results both spatially and temporally. Details of the experiment and finite element aquarium models are described including the boundary conditions, Eulerian and Lagrangian techniques, detonation models, experimental design and test diagnostics.

  4. Applying modeling Results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies which provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  5. Characteristics of a Model Industrial Technology Education Field Experience.

    ERIC Educational Resources Information Center

    Foster, Phillip R.; Kozak, Michael R.

    1986-01-01

    This report contains selected findings from a research project that investigated field experiences in industrial technology education. Funded by the Texas Education Agency, the project addressed the identification of characteristics of a model field experience in industrial technology education. This was accomplished using the Delphi technique.…

  6. Engineering teacher training models and experiences

    NASA Astrophysics Data System (ADS)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.

  7. Modeling transfer of Escherichia coli O157:H7 and Listeria monocytogenes during preparation of fresh-cut salads: impact of cutting and shredding practices.

    PubMed

    Zilelidou, Evangelia A; Tsourou, Virginia; Poimenidou, Sofia; Loukou, Anneza; Skandamis, Panagiotis N

    2015-02-01

    Cutting and shredding of leafy vegetables increases the risk of cross contamination in household settings. The distribution of Escherichia coli O157:H7 and Listeria monocytogenes transfer rates (Tr) between cutting knives and lettuce leaves was investigated and a semi-mechanistic model describing the bacterial transfer during consecutive cuts of leafy vegetables was developed. For both pathogens the distribution of log10Trs from lettuce to knife was towards low values. Conversely log10Trs from knife to lettuce ranged from -2.1 to -0.1 for E. coli O157:H7 and -2.0 to 0 for L. monocytogenes, and indicated a more variable phenomenon. Regarding consecutive cuts, a rapid initial transfer was followed by an asymptotic tail at low populations moving to lettuce or residing on knife. E. coli O157:H7 was transferred at slower rates than L. monocytogenes. These trends were sufficiently described by the transfer-model, with RMSE values of 0.426-0.613 and 0.531-0.908 for L. monocytogenes and E. coli O157:H7, respectively. The model showed good performance in validation trials but underestimated bacterial transfer during extrapolation experiments. The results of the study can provide information regarding cross contamination events in a common household. The constructed model could be a useful tool for the risk-assessment during preparation of leafy-green salads.
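
    The abstract does not reproduce the functional form of the semi-mechanistic transfer model, so the sketch below simply assumes a two-phase curve (rapid initial transfer decaying to an asymptotic tail), fits it by non-linear least squares, and scores the fit by RMSE as in the study. The data are synthetic placeholders and every name and parameter value is an illustrative assumption, not the authors' model.

```python
# Illustrative sketch only: fitting an assumed two-phase transfer curve
# (rapid initial transfer decaying to an asymptotic tail) to log-counts
# transferred at consecutive cuts, and scoring the fit by RMSE.
# The data below are synthetic placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit


def transfer_curve(cut, asymptote, amplitude, rate):
    """Assumed log10 CFU transferred at the n-th consecutive cut."""
    return asymptote + amplitude * np.exp(-rate * cut)


cuts = np.arange(1, 21)
rng = np.random.default_rng(1)
log_cfu = transfer_curve(cuts, 1.0, 3.0, 0.4) + rng.normal(0.0, 0.2, cuts.size)

params, _ = curve_fit(transfer_curve, cuts, log_cfu, p0=(1.0, 2.0, 0.5))
pred = transfer_curve(cuts, *params)
rmse = np.sqrt(np.mean((log_cfu - pred) ** 2))
print(params, rmse)
```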

  8. Modeling transfer of Escherichia coli O157:H7 and Listeria monocytogenes during preparation of fresh-cut salads: impact of cutting and shredding practices.

    PubMed

    Zilelidou, Evangelia A; Tsourou, Virginia; Poimenidou, Sofia; Loukou, Anneza; Skandamis, Panagiotis N

    2015-02-01

    Cutting and shredding of leafy vegetables increases the risk of cross contamination in household settings. The distribution of Escherichia coli O157:H7 and Listeria monocytogenes transfer rates (Tr) between cutting knives and lettuce leaves was investigated and a semi-mechanistic model describing the bacterial transfer during consecutive cuts of leafy vegetables was developed. For both pathogens the distribution of log10Trs from lettuce to knife was towards low values. Conversely log10Trs from knife to lettuce ranged from -2.1 to -0.1 for E. coli O157:H7 and -2.0 to 0 for L. monocytogenes, and indicated a more variable phenomenon. Regarding consecutive cuts, a rapid initial transfer was followed by an asymptotic tail at low populations moving to lettuce or residing on knife. E. coli O157:H7 was transferred at slower rates than L. monocytogenes. These trends were sufficiently described by the transfer-model, with RMSE values of 0.426-0.613 and 0.531-0.908 for L. monocytogenes and E. coli O157:H7, respectively. The model showed good performance in validation trials but underestimated bacterial transfer during extrapolation experiments. The results of the study can provide information regarding cross contamination events in a common household. The constructed model could be a useful tool for the risk-assessment during preparation of leafy-green salads. PMID:25500391

  9. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of these methods are the single continuum, the double continuum, and the discrete network of conduits coupled with the single continuum. Several mathematical and computational models are available in the literature for each approach. In this study one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1 and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulations with the real data. This validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve substantially on those of the simple models, especially because some models require complex parameters that are difficult to measure in the real world.

  10. Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments

    NASA Astrophysics Data System (ADS)

    Aksenov, Yevgeny; Karcher, Michael; Proshutinsky, Andrey; Gerdes, Rüdiger; de Cuevas, Beverly; Golubeva, Elena; Kauker, Frank; Nguyen, An T.; Platov, Gennady A.; Wadley, Martin; Watanabe, Eiji; Coward, Andrew C.; Nurser, A. J. George

    2016-01-01

    Pacific Water (PW) enters the Arctic Ocean through Bering Strait and brings in heat, fresh water, and nutrients from the northern Bering Sea. The circulation of PW in the central Arctic Ocean is only partially understood due to the lack of observations. In this paper, pathways of PW are investigated using simulations with six state-of-the-art regional and global Ocean General Circulation Models (OGCMs). In the simulations, PW is tracked by a passive tracer, released in Bering Strait. Simulated PW spreads from the Bering Strait region in three major branches. One of them starts in the Barrow Canyon, bringing PW along the continental slope of Alaska into the Canadian Straits and then into Baffin Bay. The second begins in the vicinity of the Herald Canyon and transports PW along the continental slope of the East Siberian Sea into the Transpolar Drift, and then through Fram Strait and the Greenland Sea. The third branch begins near the Herald Shoal and the central Chukchi shelf and brings PW into the Beaufort Gyre. In the models, the wind, acting via Ekman pumping, drives the seasonal and interannual variability of PW in the Canadian Basin of the Arctic Ocean. The wind affects the simulated PW pathways by changing the vertical shear of the relative vorticity of the ocean flow in the Canada Basin.

  11. Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments

    PubMed Central

    Karcher, Michael; Proshutinsky, Andrey; Gerdes, Rüdiger; de Cuevas, Beverly; Golubeva, Elena; Kauker, Frank; Nguyen, An T.; Platov, Gennady A.; Wadley, Martin; Watanabe, Eiji; Coward, Andrew C.; Nurser, A. J. George

    2016-01-01

    Abstract Pacific Water (PW) enters the Arctic Ocean through Bering Strait and brings in heat, fresh water, and nutrients from the northern Bering Sea. The circulation of PW in the central Arctic Ocean is only partially understood due to the lack of observations. In this paper, pathways of PW are investigated using simulations with six state-of-the-art regional and global Ocean General Circulation Models (OGCMs). In the simulations, PW is tracked by a passive tracer, released in Bering Strait. Simulated PW spreads from the Bering Strait region in three major branches. One of them starts in the Barrow Canyon, bringing PW along the continental slope of Alaska into the Canadian Straits and then into Baffin Bay. The second begins in the vicinity of the Herald Canyon and transports PW along the continental slope of the East Siberian Sea into the Transpolar Drift, and then through Fram Strait and the Greenland Sea. The third branch begins near the Herald Shoal and the central Chukchi shelf and brings PW into the Beaufort Gyre. In the models, the wind, acting via Ekman pumping, drives the seasonal and interannual variability of PW in the Canadian Basin of the Arctic Ocean. The wind affects the simulated PW pathways by changing the vertical shear of the relative vorticity of the ocean flow in the Canada Basin.

  12. A Model for Supervising School Counseling Students without Teaching Experience

    ERIC Educational Resources Information Center

    Peterson, Jean Sunde; Deuschle, Connie

    2006-01-01

    Changed demographics of those now entering the field of school counseling argue for changes in preparatory curriculum, including the curriculum for supervision. The authors present a 5-component model for supervising graduate students without previous school experience that is based on 2 pertinent studies. This model focuses on information for…

  13. Foods - fresh vs. frozen or canned

    MedlinePlus

    Frozen foods vs. fresh or canned; Fresh foods vs. frozen or canned; Frozen vegetables versus fresh ... a well-balanced diet. Many people wonder if frozen and canned vegetables are as healthy for you ...

  14. A Fresh Approach

    ERIC Educational Resources Information Center

    Violino, Bob

    2011-01-01

    Facilities and services are a huge drain on community college budgets. They are also vital to the student experience. As funding dries up across the country, many institutions are taking a team approach, working with partner colleges and private service providers to offset costs and generate revenue without sacrificing the services and amenities…

  15. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812

  16. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with results obtained during the First ISLSCP Field Experiment on concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  17. Designing Experiments to Discriminate Families of Logic Models

    PubMed Central

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G.; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subjected to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input–output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with minimal number of experiments. The methods are applied on signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) 15% close to the ones obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration. PMID:26389116

  18. Designing Experiments to Discriminate Families of Logic Models.

    PubMed

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subjected to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with minimal number of experiments. The methods are applied on signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) 15% close to the ones obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.

  19. Infusion of freshly isolated autologous bone marrow derived mononuclear cells prevents endotoxin-induced lung injury in an ex-vivo perfused swine model

    PubMed Central

    2013-01-01

    Introduction The acute respiratory distress syndrome (ARDS) affects up to 150,000 patients per year in the United States. We and other groups have demonstrated that bone marrow derived mesenchymal stromal stem cells prevent ARDS induced by systemic and local administration of endotoxin (lipopolysaccharide (LPS)) in mice. Methods A study was undertaken to determine the effects of the diverse populations of bone marrow derived cells on the pathophysiology of ARDS, using a unique ex-vivo swine preparation, in which only the ventilated lung and the liver are perfused with autologous blood. Six experimental groups were designated as: 1) endotoxin alone, 2) endotoxin + total fresh whole bone marrow nuclear cells (BMC), 3) endotoxin + non-hematopoietic bone marrow cells (CD45 neg), 4) endotoxin + hematopoietic bone marrow cells (CD45 positive), 5) endotoxin + buffy coat and 6) endotoxin + in vitro expanded swine CD45 negative adherent allogeneic bone marrow cells (cultured CD45neg). We measured at different levels the biological consequences of the infusion of the different subsets of cells. The measured parameters were: pulmonary vascular resistance (PVR), gas exchange (PO2), lung edema (lung wet/dry weight), gene expression and serum concentrations of the pro-inflammatory cytokines IL-1β, TNF-α and IL-6. Results Infusion of freshly purified autologous total BMCs, as well as non-hematopoietic CD45(-) bone marrow cells significantly reduced endotoxin-induced pulmonary hypertension and hypoxemia and reduced the lung edema. Also, in the groups that received BMCs and cultured CD45neg we observed a decrease in the levels of IL-1β and TNF-α in plasma. Infusion of hematopoietic CD45(+) bone marrow cells or peripheral blood buffy coat cells did not protect against LPS-induced lung injury. Conclusions We conclude that infusion of freshly isolated autologous whole bone marrow cells and the subset of non-hematopoietic cells can suppress the acute humoral and physiologic

  20. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is in stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real-world scenarios. We examine possibilities for applying this model to game-play data.

  1. Modeling a ponded infiltration experiment at Yucca Mountain, NV

    SciTech Connect

    Hudson, D.B.; Guertal, W.R.; Flint, A.L.

    1994-12-31

    Yucca Mountain, Nevada is being evaluated as a potential site for a geologic repository for high-level radioactive waste. As part of the site characterization activities at Yucca Mountain, a field-scale ponded infiltration experiment was done to help characterize the hydraulic and infiltration properties of a layered desert alluvium deposit. Calcium carbonate accumulation and cementation, heterogeneous layered profiles, high evapotranspiration, low precipitation, and rocky soil make the surface difficult to characterize. The effects of the strong morphological horizonation on the infiltration processes, the suitability of measured hydraulic properties, and the usefulness of ponded infiltration experiments in site characterization work were of interest. One-dimensional and two-dimensional radial flow numerical models were used to help interpret the results of the ponding experiment. The objective of this study was to evaluate the results of a ponded infiltration experiment done around borehole UE25 UZN #85 (N85) at Yucca Mountain, NV. The effects of morphological horizons on the infiltration processes, lateral flow, and measured soil hydraulic properties were studied. The evaluation was done by numerically modeling the results of a field ponded infiltration experiment. A comparison of the experimental results and the modeled results was used to qualitatively indicate the degree to which infiltration processes and the hydraulic properties are understood. Results of the field characterization, soil characterization, borehole geophysics, and the ponding experiment are presented in a companion paper.

  2. Neutral null models for diversity in serial transfer evolution experiments.

    PubMed

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements.

  3. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface which in turn increases the pressure within the tank during the 5360 second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  4. Experience of Time Passage:. Phenomenology, Psychophysics, and Biophysical Modelling

    NASA Astrophysics Data System (ADS)

    Wackermann, Jiří

    2005-10-01

    The experience of time's passing appears, from the 1st person perspective, to be a primordial subjective experience, seemingly inaccessible to the 3rd person accounts of time perception (psychophysics, cognitive psychology). In our analysis of the `dual klepsydra' model of reproduction of temporal durations, time passage occurs as a cognitive construct, based upon more elementary (`proto-cognitive') function of the psychophysical organism. This conclusion contradicts the common concepts of `subjective' or `psychological' time as readings of an `internal clock'. Our study shows how phenomenological, experimental and modelling approaches can be fruitfully combined.

  5. Modeling a set of heavy oil aqueous pyrolysis experiments

    SciTech Connect

    Thorsness, C.B.; Reynolds, J.G.

    1996-11-01

    Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during the heating of the autoclave to desired reaction temperatures. The models used included a means of estimating nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400°C or more using Fe(III) 2-ethylhexanoate could be reasonably well characterized by a simple first-order rate constant of 1.7×10⁸ exp(−20000/T) s⁻¹. Both gas production and API gravity increase were characterized by this rate constant. Models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibrium models. However, a consistently lower than measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
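
    To make the reported kinetics concrete, the short sketch below evaluates the first-order rate constant quoted above at the 400°C threshold mentioned in the abstract and converts it to a half-life. Interpreting T as absolute temperature in kelvin is our assumption; the snippet is purely illustrative.

```python
# Illustrative sketch only: evaluating the reported first-order rate constant
# k(T) = 1.7e8 * exp(-20000 / T) s^-1 (T assumed to be in kelvin) and the
# corresponding first-order half-life at a given temperature.
import math


def rate_constant(temp_kelvin):
    return 1.7e8 * math.exp(-20000.0 / temp_kelvin)


temp_kelvin = 400.0 + 273.15        # 400 degrees C, the temperature cited above
k = rate_constant(temp_kelvin)
half_life_hours = math.log(2.0) / k / 3600.0
print(f"k = {k:.3e} 1/s, half-life = {half_life_hours:.1f} h")
```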

  6. Postharvest treatments of fresh produce.

    PubMed

    Mahajan, P V; Caleb, O J; Singh, Z; Watkins, C B; Geyer, M

    2014-06-13

    Postharvest technologies have allowed horticultural industries to meet the global demands of local and large-scale production and intercontinental distribution of fresh produce that have high nutritional and sensory quality. Harvested products are metabolically active, undergoing ripening and senescence processes that must be controlled to prolong postharvest quality. Inadequate management of these processes can result in major losses in nutritional and quality attributes, outbreaks of foodborne pathogens and financial loss for all players along the supply chain, from growers to consumers. Optimal postharvest treatments for fresh produce seek to slow down physiological processes of senescence and maturation, reduce/inhibit development of physiological disorders and minimize the risk of microbial growth and contamination. In addition to basic postharvest technologies of temperature management, an array of others have been developed including various physical (heat, irradiation and edible coatings), chemical (antimicrobials, antioxidants and anti-browning) and gaseous treatments. This article examines the current status on postharvest treatments of fresh produce and emerging technologies, such as plasma and ozone, that can be used to maintain quality, reduce losses and waste of fresh produce. It also highlights further research needed to increase our understanding of the dynamic response of fresh produce to various postharvest treatments. PMID:24797137

  7. Postharvest treatments of fresh produce

    PubMed Central

    Mahajan, P. V.; Caleb, O. J.; Singh, Z.; Watkins, C. B.; Geyer, M.

    2014-01-01

    Postharvest technologies have allowed horticultural industries to meet the global demands of local and large-scale production and intercontinental distribution of fresh produce that have high nutritional and sensory quality. Harvested products are metabolically active, undergoing ripening and senescence processes that must be controlled to prolong postharvest quality. Inadequate management of these processes can result in major losses in nutritional and quality attributes, outbreaks of foodborne pathogens and financial loss for all players along the supply chain, from growers to consumers. Optimal postharvest treatments for fresh produce seek to slow down physiological processes of senescence and maturation, reduce/inhibit development of physiological disorders and minimize the risk of microbial growth and contamination. In addition to basic postharvest technologies of temperature management, an array of others have been developed including various physical (heat, irradiation and edible coatings), chemical (antimicrobials, antioxidants and anti-browning) and gaseous treatments. This article examines the current status on postharvest treatments of fresh produce and emerging technologies, such as plasma and ozone, that can be used to maintain quality, reduce losses and waste of fresh produce. It also highlights further research needed to increase our understanding of the dynamic response of fresh produce to various postharvest treatments. PMID:24797137

  8. Community Climate System Model (CCSM) Experiments and Output Data

    DOE Data Explorer

    The National Center for Atmospheric Research (NCAR) created the first version of the Community Climate Model (CCM) in 1983 as a global atmosphere model. It was improved in 1994 when NCAR, with support from the National Science Foundation (NSF), developed and incorporated a Climate System Model (CSM) that included atmosphere, land surface, ocean, and sea ice. As the capabilities of the model grew, so did interest in its applications and changes in how it would be managed. A workshop in 1996 set the future management structure, marked the beginning of the second phase of the model, a phase that included full participation of the scientific community, and also saw additional financial support, including support from the Department of Energy. In recognition of these changes, the model was renamed the Community Climate System Model (CCSM). It began to function as a model with the interactions of land, sea, and air fully coupled, providing computer simulations of Earth's past climate, its present climate, and its possible future climate. The CCSM website at http://www2.cesm.ucar.edu/ describes some of the research that has been done since then: A 300-year run has been performed using the CSM, and results from this experiment have appeared in a special issue of the Journal of Climate, 11, June 1998. A 125-year experiment has been carried out in which carbon dioxide was prescribed to increase at 1% per year from its present concentration to approximately three times its present concentration. More recently, the Climate of the 20th Century experiment was run, with carbon dioxide and other greenhouse gases and sulfate aerosols prescribed to evolve according to our best knowledge from 1870 to the present. Three scenarios for the 21st century were developed: a "business as usual" experiment, in which greenhouse gases are assumed to increase with no economic constraints; an experiment using the Intergovernmental Panel on Climate Change (IPCC) Scenario A1; and a "policy

  9. Second-order model selection in mixture experiments

    SciTech Connect

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
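
    To illustrate how quickly the full model grows, the sketch below enumerates the terms of a Scheffé quadratic mixture model (q linear terms plus q(q-1)/2 cross products) and confirms the q(q+1)/2 count for a few values of q; the helper name is ours, not the report's.

```python
# Illustrative sketch: enumerate the terms of a full Scheffe quadratic mixture
# model (q linear terms plus q(q-1)/2 cross-product terms) and confirm the
# q(q+1)/2 total quoted above.
from itertools import combinations


def scheffe_quadratic_terms(q):
    linear = [f"x{i}" for i in range(1, q + 1)]
    cross = [f"x{i}*x{j}" for i, j in combinations(range(1, q + 1), 2)]
    return linear + cross


for q in (3, 5, 10):
    terms = scheffe_quadratic_terms(q)
    assert len(terms) == q * (q + 1) // 2
    print(q, len(terms))
```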

  10. Optimization of Time-Course Experiments for Kinetic Model Discrimination

    PubMed Central

    Lages, Nuno F.; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E. N.

    2012-01-01

    Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveil the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well, but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination, the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I. PMID:22403703
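
    The sketch below caricatures the general idea of scoring a candidate design by a divergence between predicted time courses: two hypothetical candidate models are simulated for a given initial substrate concentration and compared through a symmetrized Kullback-Leibler distance under Gaussian predictive noise. The model forms, parameter values, and noise level are made-up placeholders; the paper itself uses an extended Kullback-Leibler measure and a generalized differential evolution optimizer, which are not reproduced here.

```python
# Illustrative sketch only: scoring a candidate design (an initial substrate
# concentration s0) by a symmetrized Kullback-Leibler distance between the
# time courses predicted by two competing kinetic models, assuming Gaussian
# predictive noise with known standard deviation.  All forms and values are
# hypothetical placeholders, not the enzymes or models studied in the paper.
import numpy as np


def model_one(t, s0, k=0.8):
    """Hypothetical single-exponential substrate decay."""
    return s0 * np.exp(-k * t)


def model_two(t, s0, vmax=1.0, km=0.7):
    """Hypothetical saturating (Michaelis-Menten-like) decay, Euler-integrated."""
    s = np.empty_like(t)
    s[0] = s0
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        s[i] = max(s[i - 1] - dt * vmax * s[i - 1] / (km + s[i - 1]), 0.0)
    return s


def design_score(s0, sigma=0.05, t_end=5.0, n_points=50):
    """Symmetrized KL between Gaussian predictive densities, summed over time."""
    t = np.linspace(0.0, t_end, n_points)
    mu1, mu2 = model_one(t, s0), model_two(t, s0)
    # KL(N(mu1, s^2) || N(mu2, s^2)) = (mu1 - mu2)^2 / (2 s^2); symmetrizing doubles it.
    return np.sum((mu1 - mu2) ** 2 / sigma ** 2)


for s0 in (0.1, 0.5, 1.0, 2.0):
    print(s0, design_score(s0))
```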

  11. The bioconcentration of ¹³¹I in fresh water fish

    SciTech Connect

    Yu, K.N.; Cheung, T.; Young, E.C.M.; Luo, D.L.

    1996-11-01

    The dynamic characteristics of the radionuclide concentration process in fresh water fish have been studied. The experimental data for the tilapias were fitted using a simple compartment model to get characteristic parameters such as concentration factors, elimination rate constants, and initial concentration rates, which are 3.08 Bq kg⁻¹/Bq L⁻¹, 0.00573 h⁻¹, and 12.42 Bq kg⁻¹ h⁻¹, respectively. The relative concentrations of ¹³¹I in different parts, i.e., head, gills, flesh, bone and internal organs, of the tilapias are also determined, which are found to be 10.8, 15.4, 26.1, 11.0, and 37.0%, respectively. The effects of different factors on the transfer of radionuclides in fresh water fishes are also discussed. Experiments on the tilapias and the common carp show that the variation of concentration factors for different species may be significant even for the same radionuclide and the same ecological system. On the other hand, the variation in the concentration factors for the flesh of the tilapias is not significant for a certain range of ¹³¹I concentrations in the water. 12 refs., 1 fig., 1 tab.
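
    The one-compartment uptake kinetics referred to above are commonly written as C_fish(t) = (R0/ke)(1 - exp(-ke t)), with the concentration factor given by the steady-state ratio to the water activity concentration. The sketch below plugs the reported initial concentration rate and elimination rate constant into that standard form; treating the reported numbers this way is our assumption, not a restatement of the paper's equations.

```python
# Illustrative sketch only: the standard single-compartment uptake model,
# C_fish(t) = (R0 / ke) * (1 - exp(-ke * t)), evaluated with the initial
# concentration rate R0 and elimination rate constant ke reported above.
import math

R0 = 12.42      # Bq/kg per hour, reported initial concentration rate
KE = 0.00573    # 1/hour, reported elimination rate constant


def fish_concentration(t_hours):
    """Predicted 131I activity concentration in fish (Bq/kg) after t hours."""
    return (R0 / KE) * (1.0 - math.exp(-KE * t_hours))


for t in (24, 96, 240, 1000):
    print(t, round(fish_concentration(t), 1))
```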

  12. Integrated Experiment and Modeling of Insensitive High Explosives

    NASA Astrophysics Data System (ADS)

    Stewart, D. Scott; Lambert, David E.; Yoo, Sunhee; Lieber, M.; Holman, Steven

    2009-06-01

    New design paradigms for insensitive high explosives are being sought for use in munitions applications that require enhanced safety, reliability and performance. We describe recent work of our group that uses an integrated approach to develop predictive models, guided by experiments. Insensitive explosives can have relatively longer detonation reaction zones and slower reaction rates than their sensitive counterparts. We employ reactive flow models that are constrained by detonation shock dynamics to pose candidate predictive models. We discuss the effect of varying the pressure-dependent reaction rate exponent and reaction order on the length of the supporting reaction zone, the detonation velocity-curvature relation, the computed critical energy required for initiation, and the relation between the diameter effect curve and the corresponding normal detonation velocity-curvature relation. We discuss representative characterization experiments carried out at Eglin AFB and the constraints imposed on models by a standardized experimental characterization sequence.

  13. Biogeochemical Reactive Transport Model of the Redox Zone Experiment of the Aespoe Hard Rock Laboratory in Sweden

    SciTech Connect

    Molinero-Huguet, Jorge; Samper-Calvete, F. Javier; Zhang Guoxiang; Yang Changbing

    2004-11-15

    Underground facilities are being operated by several countries around the world for performing research and demonstration of the safety of deep radioactive waste repositories. The Aespoe Hard Rock Laboratory is one such facility launched and operated by the Swedish Nuclear Fuel and Waste Management Company where various in situ experiments have been performed in fractured granites. One such experiment is the redox zone experiment, which aimed at evaluating the effects of the construction of an access tunnel on the hydrochemical conditions of a fracture zone. Dilution of the initially saline groundwater by fresh recharge water is the dominant process controlling the hydrochemical evolution of most chemical species, except for bicarbonate and sulfate, which unexpectedly increase with time. We present a numerical model of water flow, reactive transport, and microbial processes for the redox zone experiment. This model provides a plausible quantitatively based explanation for the unexpected evolution of bicarbonate and sulfate, reproduces the breakthrough curves of other reactive species, and is consistent with previous hydrogeological and solute transport models.

  14. A Mitigation Scheme for Underwater Blast: Experiments and Modeling

    NASA Astrophysics Data System (ADS)

    Glascoe, Lee; McMichael, Larry; Vandersall, Kevin; Margraf, Jon

    2011-06-01

    A novel but relatively easy-to-implement mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially-submerged structure is evaluated experimentally using scaled aquarium experiments and numerically using a high-fidelity finite element code. Scaled, water-tamped explosive experiments were performed using aquariums of two different sizes. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept were found to effectively dampen the explosive response of an aluminum plate and reduce the final displacement at plate center by approximately half; an experiment using an air-gap alone resulted in a focused water jet. The finite element model used for the initial experimental design compares very well to the experimental DIC results both spatially and temporally. Details of the experiment and the finite element models of the aquarium, as well as a larger hypothetical structure, are described including the boundary conditions, numerical techniques, detonation models, experimental design and test diagnostics. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344. We would like to thank DHS S & T Directorate for support and assistance.

  15. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effect of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  16. Enriching the Student Experience Through a Collaborative Cultural Learning Model.

    PubMed

    McInally, Wendy; Metcalfe, Sharon; Garner, Bonnie

    2015-01-01

    This article provides a knowledge and understanding of an international, collaborative, cultural learning model for students from the United States and Scotland. Internationalizing the student experience has been instrumental for student learning for the past eight years. Both countries have developed programs that have enriched and enhanced the overall student learning experience, mainly through the sharing of evidence-based care in both hospital and community settings. Student learning is at the heart of this international model, and through practice learning, leadership, and reflective practice, student immersion in global health care and practice is immense. Moving forward, we are seeking new opportunities to explore learning partnerships to provide this collaborative cultural learning experience. PMID:26376575

  17. EBCE: The Far West Model. Experience-Based Career Education.

    ERIC Educational Resources Information Center

    Far West Lab. for Educational Research and Development, San Francisco, CA.

    This package of handbooks on the Far West Laboratory version of Experience-Based Career Education provides information on the distinctive features of the Far West model. Part 1 on management has four handbooks: program overview; administration (attendance, budget, insurance and liability, schedules, staff, troubleshooting and problem solving);…

  18. [Model experiments on sorption properties of beet-root fiber].

    PubMed

    Glagoleva, L E; Rodionova, N S; Gisak, S N; Zatsepilina, N P

    2010-01-01

    Model experiments provided results on the sorption properties of beet-root fiber with respect to copper, lead and zinc in dairy foods. It was determined that this fiber is able to absorb the above-mentioned heavy metals, which increases the hygienic safety of the studied dairy foods.

  19. Online Education as Interactive Experience: Some Guiding Models.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    1999-01-01

    Presents conceptual models and proven practices that are emerging at the convergence of economics, entertainment, and virtual community-building and applies them to the design of online courses. Discusses the experience economy; digital storytelling; social presence; personal space and computer interaction; and online technologies and types of…

  20. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
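
    As a small illustration of the space-filling designs mentioned above, the sketch below draws a basic Latin Hypercube sample on the unit hypercube; the implementation and names are generic, not those used in the presentation, and the orthogonal-array and symmetry extensions are not shown.

```python
# Illustrative sketch only: a basic Latin Hypercube design on the unit
# hypercube, one of the space-filling strategies mentioned above.
import numpy as np


def latin_hypercube(n_runs, n_inputs, seed=None):
    """Return an (n_runs x n_inputs) Latin Hypercube sample in [0, 1)."""
    rng = np.random.default_rng(seed)
    design = np.empty((n_runs, n_inputs))
    for j in range(n_inputs):
        # one point in each of n_runs equal-width strata, in random order
        strata = rng.permutation(n_runs)
        design[:, j] = (strata + rng.uniform(size=n_runs)) / n_runs
    return design


print(latin_hypercube(8, 3, seed=42))
```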

  1. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  2. Modeling of laser knife-edge and pinhole experiments

    SciTech Connect

    Auerbach, J M; Boley, C D; Estabrook, K G; Feit, M D; Rubenchik, A M

    1998-06-29

    We describe simulations of experiments involving laser illumination of a metallic knife edge in the Optical Sciences Laboratory (OSL) at LLNL, and pinhole closure in the Beamlet experiment at LLNL. The plasma evolution is modeled via LASNEX. In OSL, the calculated phases of a probe beam are found to exhibit the same behavior as in experiment but to be consistently larger. The motion of a given phase contour tends to decelerate at high intensities. At fixed intensity, the speed decreases with atomic mass. We then calculate the plasmas associated with 4-leaf pinholes on the Beamlet transport spatial filter. We employ a new propagation code to follow a realistic input beam through the entire spatial filter, including the plasmas. The detailed behavior of the output wavefronts is obtained. We show how closure depends on the orientation and material of the pinhole blades. As observed in experiment, a diamond orientation is preferable to a square orientation, and tantalum performs better than stainless steel.

  3. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

    A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.

  4. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... freezing will not preclude use of the term “fresh frozen” to describe the food. “Quickly frozen” means... 21 Food and Drugs 2 2010-04-01 2010-04-01 false “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.” 101.95 Section 101.95 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH...

  5. Scattering Models and Basic Experiments in the Microwave Regime

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Blanchard, A. J. (Principal Investigator)

    1985-01-01

    The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which accounts for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) a comparison between theoretical models and measurements or numerical simulation.

  6. Freshly brewed continental crust

    NASA Astrophysics Data System (ADS)

    Gazel, E.; Hayes, J. L.; Caddick, M. J.; Madrigal, P.

    2015-12-01

    Earth's crust is the life-sustaining interface between our planet's deep interior and surface. Basaltic crusts similar to Earth's oceanic crust characterize terrestrial planets in the solar system while the continental masses, areas of buoyant, thick silicic crust, are a unique characteristic of Earth. Therefore, understanding the processes responsible for the formation of continents is fundamental to reconstructing the evolution of our planet. We use geochemical and geophysical data to reconstruct the evolution of the Central American Land Bridge (Costa Rica and Panama) over the last 70 Ma. We also include new preliminary data from a key turning point (~12-6 Ma) in the evolution from an oceanic arc depleted in incompatible elements to a juvenile continental mass in order to evaluate current models of continental crust formation. We also discovered that seismic P-waves (body waves) travel through the crust at velocities closer to the ones observed in continental crust worldwide. Based on global statistical analyses of all magmas produced today in oceanic arcs compared to the global average composition of continental crust we developed a continental index. Our goal was to quantitatively correlate geochemical composition with the average P-wave velocity of arc crust. We suggest that although the formation and evolution of continents may involve many processes, melting enriched oceanic crust within a subduction zone, a process probably more common in the Archaean, when most continental landmasses formed, can produce the starting material necessary for juvenile continental crust formation.

  7. Comparison of a radial fractional transport model with tokamak experiments

    SciTech Connect

    Kullberg, A. Morales, G. J.; Maggs, J. E.

    2014-03-15

    A radial fractional transport model [Kullberg et al., Phys. Rev. E 87, 052115 (2013)], that correctly incorporates the geometric effects of the domain near the origin and removes the singular behavior at the outer boundary, is compared to results of off-axis heating experiments performed in the Rijnhuizen Tokamak Project (RTP), ASDEX Upgrade, JET, and DIII-D tokamak devices. This comparative study provides an initial assessment of the presence of fractional transport phenomena in magnetic confinement experiments. It is found that the nonlocal radial model is robust in describing the steady-state temperature profiles from RTP, but for the propagation of heat waves in ASDEX Upgrade, JET, and DIII-D the model is not clearly superior to predictions based on Fick's law. However, this comparative study does indicate that the order of the fractional derivative, α, is likely a function of radial position in the devices surveyed.

  8. Physical mechanism of the Schwarzschild effect in film dosimetry—theoretical model and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Djouguela, A.; Kollhoff, R.; Rühmann, A.; Willborn, K. C.; Harder, D.; Poppe, B.

    2006-09-01

    In consideration of the importance of film dosimetry for the dosimetric verification of IMRT treatment plans, the Schwarzschild effect or failure of the reciprocity law, i.e. the reduction of the net optical density under 'protraction' or 'fractionation' conditions at constant dose, has been experimentally studied for Kodak XOMAT-V (Martens et al 2002 Phys. Med. Biol. 47 2221-34) and EDR 2 dosimetry films (Djouguela et al 2005 Phys. Med. Biol. 50 N317-N321). It is known that this effect results from the competition between two solid-state physics reactions involved in the latent-image formation of the AgBr crystals, the aggregation of two Ag atoms freshly formed from Ag+ ions near radiation-induced occupied electron traps and the spontaneous decomposition of the Ag atoms. In this paper, we are developing a mathematical model of this mechanism which shows that the interplay of the mean lifetime τ of the Ag atoms with the time pattern of the irradiation determines the magnitude of the observed effects of the temporal dose distribution on the net optical density. By comparing this theory with our previous protraction experiments and recent fractionation experiments in which the duration of the pause between fractions was varied, a value of the time constant τ of roughly 10 s at room temperature has been determined for EDR 2. The numerical magnitude of the Schwarzschild effect in dosimetry films under the conditions generally met in radiotherapy amounts to only a few per cent of the net optical density (net OD), so that it can frequently be neglected from the viewpoint of clinical applications. But knowledge of the solid-state physical mechanism and a description in terms of a mathematical model involving a typical time constant of about 10 s are now available to estimate the magnitude of the effect should the necessity arise, i.e. in cases of large fluctuations of the temporal pattern of film exposure.
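
    The competition described above, freshly formed single Ag atoms either pairing into stable latent-image centers or decaying spontaneously with mean lifetime τ, can be caricatured by a two-reaction rate sketch. The code below is a schematic with made-up rate constants, not the authors' mathematical model; it only illustrates qualitatively why, at equal dose, a protracted exposure keeps the single-atom population low and therefore produces fewer stable pairs (lower net optical density) than an acute exposure.

```python
# Schematic sketch only (illustrative rate constants, not the paper's model):
# single Ag atoms are produced at a dose-dependent rate, decay spontaneously
# with mean lifetime TAU, and pair up (rate ~ C_PAIR * n^2) into stable
# latent-image centers.  At equal total dose, a protracted exposure keeps the
# single-atom population low and forms fewer pairs, i.e. reciprocity failure.
import numpy as np

TAU = 10.0      # s, assumed mean lifetime of a single Ag atom (order quoted above)
C_PAIR = 5.0    # arbitrary pairing rate constant, purely illustrative


def exposure(dose, duration, dt=0.01, t_total=200.0):
    """Integrate the toy kinetics for a constant dose rate over 'duration' seconds."""
    n_single, n_pairs = 0.0, 0.0
    rate = dose / duration
    for t in np.arange(0.0, t_total, dt):
        production = rate if t < duration else 0.0
        pairing = C_PAIR * n_single ** 2
        n_single += dt * (production - n_single / TAU - 2.0 * pairing)
        n_pairs += dt * pairing
    return n_pairs


acute = exposure(dose=1.0, duration=1.0)          # same dose delivered quickly...
protracted = exposure(dose=1.0, duration=100.0)   # ...or spread over 100 s
print(acute, protracted, protracted / acute)
```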

  9. High pressure experiments with a Mars general circulation model

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Pollack, J. B.; Murphy, J. R.; Schaeffer, J.; Lee, H.

    1992-01-01

    The interaction of three physical processes will determine the stability of the Martian polar caps as the surface pressure increases: the greenhouse effect, atmospheric heat transport, and the change in the CO2 frost point temperature. The contribution of each is readily determined in the Mars general circulation model (GCM). Therefore, we have initiated experiments with the GCM to determine how these processes interact, and how the atmosphere-polar cap system responds to increasing surface pressure. The experiments are carried out for northern winter solstice and generally assume the atmosphere to be free of dust. Each experiment starts from resting isothermal conditions and runs for 50 Mars days. Mars' current orbital parameters are used. The experiments are for surface pressures of 120, 480, and 960 mb, which represent 16, 64, and 128 times the current value. To date we have analyzed the 120 mb experiment, and the results indicate that, contrary to the simpler models, the polar caps actually advance rather than retreat. Other aspects of this investigation are presented.

  10. Experiences & Tools from Modeling Instruction Applied to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Cervenec, J.; Landis, C. E.

    2012-12-01

    The Framework for K-12 Science Education calls for stronger curricular connections within the sciences, greater depth in understanding, and tasks higher on Bloom's Taxonomy. Understanding atmospheric sciences draws on core knowledge traditionally taught in physics, chemistry, and in some cases, biology. If this core knowledge is not conceptually sound, well retained, and transferable to new settings, understanding the causes and consequences of climate change becomes, for a student, a task of memorizing seemingly disparate facts. Fortunately, experiences and conceptual tools have been developed and refined in the nationwide network of Physics Modeling and Chemistry Modeling teachers to build the necessary understanding of conservation of mass, conservation of energy, the particulate nature of matter, kinetic molecular theory, and the particle model of light. Context-rich experiences are first introduced for students to construct an understanding of these principles, and conceptual tools are then deployed for students to resolve misconceptions and deepen their understanding. Using these experiences and conceptual tools takes an investment of instructional time, teacher training, and in some cases, re-envisioning the format of a science classroom. There are few financial barriers to implementation, and students gain a greater understanding of the nature of science by going through successive cycles of investigation and refinement of their thinking. This presentation shows how these experiences and tools could be used in an Earth Science course to support students in developing a conceptually rich understanding of the atmosphere and the connections within it.

  11. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).
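
    As a hedged illustration of the kind of consistency test used to evaluate such rate forecasts, the sketch below applies a simple Poisson "number test": given the total number of earthquakes a forecast expects over the test period, how surprising is the observed count? The forecast rate and observed count are made-up numbers, and this is only one of several tests used in the RELM/CSEP evaluations.

```python
# Minimal sketch of a Poisson "number test" for a gridded earthquake rate forecast.
# Rates and counts below are hypothetical, for illustration only.
from scipy.stats import poisson

forecast_rate = 12.4   # hypothetical total expected number of target earthquakes
observed = 8           # hypothetical observed number in the testing region

delta1 = 1.0 - poisson.cdf(observed - 1, forecast_rate)  # P(N >= observed)
delta2 = poisson.cdf(observed, forecast_rate)            # P(N <= observed)
print(f"P(N >= {observed}) = {delta1:.3f}, P(N <= {observed}) = {delta2:.3f}")
# A forecast is rejected at significance alpha if either tail probability falls below alpha/2.
```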

  12. Data Reduction Modeling of a Graphite Nitridation Experiment

    NASA Astrophysics Data System (ADS)

    Bauman, Paul; Moser, Robert

    2012-11-01

    In this work, we present a computational study of a flow tube experiment in order to infer reaction parameters for graphite nitridation. This study builds off previous work where it was determined that existing, simplified models were inadequate to meaningfully inform parameters of the nitridation reaction of interest. We construct a two-dimensional representation of the experimental setup and model the flow using a reacting low-Mach number approximation to the Navier-Stokes equations. We use a stabilized finite element method to approximate the mathematical model. To solve the inverse problem for the reaction parameters, we employ a Bayesian approach in order to produce probability distributions that can be used to quantify uncertainty in models where surface nitridation is important. Such models include the surface ablation of thermal protection systems of reentry vehicles. This material is based in part upon work supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  13. E-health stakeholders experiences with clinical modelling and standardizations.

    PubMed

    Gøeg, Kirstine Rosenbeck; Elberg, Pia Britt; Højen, Anne Randorff

    2015-01-01

    Stakeholders in e-health such as governance officials, health IT-implementers and vendors have to co-operate to achieve the goal of a future-proof interoperable e-health infrastructure. Co-operation requires knowledge of the responsibility and competences of stakeholder groups. To increase awareness of clinical modeling and standardization, we conducted a workshop for Danish and a few Norwegian e-health stakeholders and had them discuss their views on different aspects of clinical modeling, using a theoretical model as a point of departure. Based on the model, we traced stakeholders' experiences. Our results showed that stakeholders tended to be more familiar with e-health requirements than with design methods, clinical information models and clinical terminology as they are described in the scientific literature. The workshop made it possible for stakeholders to discuss their roles and expectations of each other.

  14. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    Glimm, James

    2008-06-24

    The three year plan for this project is to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (both Direct Numerical Simulation and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We will model 2D and 3D perturbations of planar interfaces. We will compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we will develop physics based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. We will conduct analytic studies of mix, in support of these objectives. Advanced issues, including multiple layers and reshock, will be considered.

  15. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    James Glimm

    2009-06-04

    The three year plan for this project was to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (Direct Numerical Simulation (DNS), Large Eddy Simulations (LES), full two fluid simulations and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We model 2D and 3D perturbations of planar or circular interfaces. We compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we develop physics based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. Multiple layers and reshock are considered here.

  16. Nanodosimetry of electrons: analysis by experiment and modelling.

    PubMed

    Bantsar, A; Pszona, S

    2015-09-01

    Nanodosimetry experiments for high-energy electrons from a (131)I radioactive source interacting with gaseous nitrogen, with sizes on a scale equivalent to the mass per area of a segment of DNA and of a nucleosome, are described. The discrete ionisation cluster-size distributions were measured in experiments carried out with the Jet Counter. The experimental results were compared with those obtained by Monte Carlo modelling. Descriptors of radiation damage were derived from the measured ionisation cluster-size distributions.

  17. Future high precision experiments and new physics beyond Standard Model

    SciTech Connect

    Luo, Mingxing

    1993-04-01

    High precision (< 1%) electroweak experiments that have been done or are likely to be done in this decade are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties to the one-loop level; the implications of the corresponding experimental measurements for various types of possible new physics that enter at the tree or loop level were investigated. Certain experiments appear to have special promise as probes of the new physics considered here.

  18. Future high precision experiments and new physics beyond Standard Model

    SciTech Connect

    Luo, Mingxing.

    1993-01-01

    High precision (< 1%) electroweak experiments that have been done or are likely to be done in this decade are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties to the one-loop level; the implications of the corresponding experimental measurements for various types of possible new physics that enter at the tree or loop level were investigated. Certain experiments appear to have special promise as probes of the new physics considered here.

  19. Analysis of NIF experiments with the minimal energy implosion model

    NASA Astrophysics Data System (ADS)

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Cerjan, C. J.; Batha, S. H.

    2015-08-01

    We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules' performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.

  20. Analysis of NIF experiments with the minimal energy implosion model

    SciTech Connect

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Batha, S. H.; Cerjan, C. J.

    2015-08-15

    We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules' performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.

  1. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides

  2. Electrical Resistivity Monitoring of Voids: Results of Dynamic Modeling Experiments

    NASA Astrophysics Data System (ADS)

    Lane, J. W.; Day-Lewis, F. D.; Singha, K.

    2006-05-01

    Remote, non-invasive detection of voids is a challenging problem for environmental and engineering investigations in karst terrain. Many geophysical methods including gravity, electrical, electromagnetic, magnetic, and seismic have potential to detect voids in the subsurface; lithologic heterogeneity and method- specific sources of noise, however, can mask the geophysical signatures of voids. New developments in automated, autonomous geophysical monitoring technology now allow for void detection using differential geophysics. We propose automated collection of electrical resistivity measurements over time. This dynamic approach exploits changes in subsurface electrical properties related to void growth or water-table fluctuation in order to detect voids that would be difficult or impossible to detect using static imaging approaches. We use a series of synthetic modeling experiments to demonstrate the potential of difference electrical resistivity tomography for finding (1) voids that develop vertically upward under a survey line (e.g., an incipient sinkhole); (2) voids that develop horizontally toward a survey line (e.g., a tunnel); and (3) voids that are influenced by changing hydrologic conditions (e.g., void saturation and draining). Synthetic datasets are simulated with a 3D finite-element model, but the inversion assumes a 2D forward model to mimic conventional practice. The results of the synthetic modeling experiments provide insights useful for planning and implementing field-scale monitoring experiments using electrical methods.

  3. Simulation model for the closed plant experiment facility of CEEF

    NASA Astrophysics Data System (ADS)

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help to confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using data from CEEF (condition of equipment, quantity of materials in tanks, etc.) and an operation schedule prepared by the operation team every day, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As the first step of development, a flexible algorithm for the simulation program was investigated. The next step was the development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were then combined into a simulation model for the whole system.

  4. Modeling HEDLA magnetic field generation experiments on laser facilities

    NASA Astrophysics Data System (ADS)

    Fatenejad, M.; Bell, A. R.; Benuzzi-Mounaix, A.; Crowston, R.; Drake, R. P.; Flocke, N.; Gregori, G.; Koenig, M.; Krauland, C.; Lamb, D.; Lee, D.; Marques, J. R.; Meinecke, J.; Miniati, F.; Murphy, C. D.; Park, H.-S.; Pelka, A.; Ravasio, A.; Remington, B.; Reville, B.; Scopatz, A.; Tzeferacos, P.; Weide, K.; Woolsey, N.; Young, R.; Yurchak, R.

    2013-03-01

    The Flash Center is engaged in a collaboration to simulate laser driven experiments aimed at understanding the generation and amplification of cosmological magnetic fields using the FLASH code. In these experiments a laser illuminates a solid plastic or graphite target launching an asymmetric blast wave into a chamber which contains either Helium or Argon at millibar pressures. Induction coils placed several centimeters away from the target detect large scale magnetic fields on the order of tens to hundreds of Gauss. The time dependence of the magnetic field is consistent with generation via the Biermann battery mechanism near the blast wave. Attempts to perform simulations of these experiments using the FLASH code have uncovered previously unreported numerical difficulties in modeling the Biermann battery mechanism near shock waves which can lead to the production of large non-physical magnetic fields. We report on these difficulties and offer a potential solution.

  5. Modelling of binary alloy solidification in the MEPHISTO experiment

    NASA Astrophysics Data System (ADS)

    Leonardi, Eddie; de Vahl Davis, Graham; Timchenko, Victoria; Chen, Peter; Abbaschian, Reza

    2004-05-01

    A modified enthalpy method was used to numerically model experiments on solidification of a bismuth-tin alloy which were performed during the 1997 flight of the MEPHISTO-4 experiment on the US Space Shuttle Columbia. This modified enthalpy method was incorporated into an in-house code SOLCON and a commercial CFD code CFX; Soret effect was taken into account by including an additional thermo-diffusion term into the solute transport equation and the effects of thermal and solutal convection in the microgravity environment and of concentration-dependent melting temperature on the phase change processes were also included. In this paper an overview of the results obtained as part of MEPHISTO project is presented. The numerical solutions are compared with actual microprobe results obtained from the MEPHISTO experiment. To cite this article: E. Leonardi et al., C. R. Mecanique 332 (2004).

  6. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
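
    As a rough illustration of the IPM idea (an interval-valued predictor of minimal spread that contains all observations), the sketch below fits lower and upper linear bounds to synthetic data with a linear program. This is a simplified stand-in, not the formulation of the paper; the model structure, data and objective are illustrative assumptions.

```python
# Minimal sketch of an Interval Predictor Model: lower and upper linear bounds that
# enclose every observation while minimizing the average spread. Data are synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.2, x.size)

# decision variables: [a_lo, b_lo, a_hi, b_hi] for lines lo(x)=a_lo+b_lo*x, hi(x)=a_hi+b_hi*x
X = np.column_stack([np.ones_like(x), x])
c = np.concatenate([-X.mean(axis=0), X.mean(axis=0)])    # minimize mean(hi - lo) at the data
A_ub = np.block([[X, np.zeros_like(X)],                  #  lo(x_i) <= y_i
                 [np.zeros_like(X), -X]])                # -hi(x_i) <= -y_i
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
a_lo, b_lo, a_hi, b_hi = res.x
print(f"lower bound: {a_lo:.2f} + {b_lo:.2f} x, upper bound: {a_hi:.2f} + {b_hi:.2f} x")
```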

  7. Fracture Mechanics Modelling of an In Situ Concrete Spalling Experiment

    NASA Astrophysics Data System (ADS)

    Siren, Topias; Uotinen, Lauri; Rinne, Mikael; Shen, Baotang

    2015-07-01

    During the operation of nuclear waste disposal facilities, some sprayed concrete reinforced underground spaces will be in use for approximately 100 years. During this time of use, the local stress regime will be altered by the radioactive decay heat. The change in the stress state will impose high demands on sprayed concrete, as it may suffer stress damage or lose its adhesion to the rock surface. It is also unclear what kind of support pressure the sprayed concrete layer will apply to the rock. To investigate this, an in situ experiment is planned in the ONKALO underground rock characterization facility at Olkiluoto, Finland. A vertical experimental hole will be concreted, and the surrounding rock mass will be instrumented with heat sources, in order to simulate an increase in the surrounding stress field. The experiment is instrumented with an acoustic emission system for the observation of rock failure and temperature, as well as strain gauges to observe the thermo-mechanical interactive behaviour of the concrete and rock at several levels, in both rock and concrete. A thermo-mechanical fracture mechanics study is necessary for the prediction of the damage before the experiment, in order to plan the experiment and instrumentation, and for generating a proper prediction/outcome study due to the special nature of the in situ experiment. The prediction of acoustic emission patterns is made by Fracod 2D and the model later compared to the actual observed acoustic emissions. The fracture mechanics model will be compared to a COMSOL Multiphysics 3D model to study the geometrical effects along the hole axis.

  8. Stopping Power in Dense Plasmas: Models, Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Grabowski, Paul; Fichtl, Chris; Graziani, Frank; Hazi, Andrew; Murillo, Michael; Sheperd, Ronnie; Surh, Mike; Cimarron Collaboration

    2011-10-01

    Our goal is to conclusively determine the minimal model for stopping power in dense plasmas via a three-pronged theoretical, simulation, and experimental program. Stopping power in dense plasma is important for ion beam heating of targets (e.g., fast ignition) and alpha particle energy deposition in inertial confinement fusion targets. We wish to minimize our uncertainties in the stopping power by comparing a wide range of theoretical approaches to both detailed molecular dynamics (MD) simulations and experiments. The largest uncertainties occur for slow-to-moderate velocity projectiles, dense plasmas, and highly charged projectiles. We have performed MD simulations of a classical, one component plasma to reveal where there are weaknesses in our kinetic theories of stopping power, over a wide range of plasma conditions. We have also performed stopping experiments of protons in heated warm dense carbon for validation of such models, including MD calculations, of realistic plasmas for which bound contributions are important.

  9. Detecting Domain Walls of Axionlike Models Using Terrestrial Experiments

    NASA Astrophysics Data System (ADS)

    Pospelov, M.; Pustelny, S.; Ledbetter, M. P.; Kimball, D. F. Jackson; Gawlik, W.; Budker, D.

    2013-01-01

    Stable topological defects of light (pseudo)scalar fields can contribute to the Universe’s dark energy and dark matter. Currently, the combination of gravitational and cosmological constraints provides the best limits on such a possibility. We take an example of domain walls generated by an axionlike field with a coupling to the spins of standard-model particles and show that, if the galactic environment contains a network of such walls, terrestrial experiments aimed at the detection of wall-crossing events are realistic. In particular, a geographically separated but time-synchronized network of sensitive atomic magnetometers can detect a wall crossing and probe a range of model parameters currently unconstrained by astrophysical observations and gravitational experiments.

  10. Integrated modeling of LHCD experiment on Alcator C-Mod

    SciTech Connect

    Shiraiwa, S.; Bonoli, P.; Parker, R.; Wallace, G.

    2014-02-12

    Recent progress in integrating the latest LHCD model based on ray-tracing into the Integrated Plasma Simulator (IPS) is reported. IPS, a python based framework for time dependent tokamak simulation, was expanded recently to incorporate LHCD simulation using GENRAY/CQL3D (ray-tracing/3D Fokker-Planck package). Using GENRAY/CQL3D in the IPS framework, it becomes possible to include parasitic LHCD power loss near the plasma edge, which was found to be important in experiments, particularly at high density as expected in reactors. Moreover, it allows for evolving the velocity distribution function in 4D (ν∥, ν⊥, r/a, t) space self-consistently. In order to validate the code, IPS is applied to LHCD experiments on Alcator C-Mod. In this paper, a LHCD experiment performed at a density of n_e ∼ 0.5×10^20 m^-3, where good LHCD efficiency and the development of an internal transport barrier (ITB) were reported, is modelled in a predictive mode and the result is compared with experiment.

  11. Integrated modeling of LHCD experiment on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.; Bonoli, P.; Parker, R.; Wallace, G.

    2014-02-01

    Recent progress in integrating the latest LHCD model based on ray-tracing into the Integrated Plasma Simulator (IPS) is reported. IPS, a python based framework for time dependent tokamak simulation, was expanded recently to incorporate LHCD simulation using GENRAY/CQL3D (ray-tracing/3D Fokker-Planck package). Using GENRAY/CQL3D in the IPS framework, it becomes possible to include parasitic LHCD power loss near the plasma edge, which was found to be important in experiments, particularly at high density as expected in reactors. Moreover, it allows for evolving the velocity distribution function in 4D (ν∥, ν⊥, r/a, t) space self-consistently. In order to validate the code, IPS is applied to LHCD experiments on Alcator C-Mod. In this paper, a LHCD experiment performed at a density of n_e ∼ 0.5×10^20 m^-3, where good LHCD efficiency and the development of an internal transport barrier (ITB) were reported, is modelled in a predictive mode and the result is compared with experiment.

  12. [Freshwater fish freshness on-line detection method based on near-infrared spectroscopy].

    PubMed

    Huang, Tao; Li, Xiao-Yu; Peng, Yi; Tao, Hai-Long; Li, Peng; Xiong, Shan-Bai

    2014-10-01

    In the present study, the near infrared spectrum of freshwater fish was used to detect freshness on line, and a near infrared spectrum on-line acquisition device was built to acquire the fish spectrum. In the process of spectrum acquisition, experiment samples moved at a speed of 0.5 m · s(-1), and the near-infrared diffuse reflection spectrum (900-2,500 nm) was acquired for subsequent analysis; SVM was used to build the on-line detection model. The sample set partitioning based on joint X-Y distances algorithm (SPXY) was used to divide the sample set: 111 samples in the calibration set (57 fresh samples and 54 bad samples) and 37 samples in the test set (19 fresh samples and 18 bad samples). Seven spectral preprocessing methods were utilized to preprocess the spectra, and the influences of the different methods were compared. Model results indicated that first derivative (FD) with autoscaling was the best preprocessing method; the model recognition rate for the calibration set was 97.96%, and the recognition rate for the test set was 95.92%. In order to improve the modeling speed, it is necessary to optimize the spectral variables. Therefore, genetic algorithm (GA), successive projection algorithm (SPA) and competitive adaptive reweighted sampling (CARS) were each adopted to select characteristic variables. Finally, CARS proved to be the optimal variable selection method: 10 characteristic wavelengths were selected to develop the SVM model, the recognition rate for the calibration set reached 100%, and the recognition rate for the test set was 93.88%. The research provides a technical reference for on-line detection of freshwater fish freshness.
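
    A sketch of the kind of pipeline described above (first-derivative preprocessing, autoscaling, selection of a small number of wavelengths, and an SVM classifier) is given below. The spectra are synthetic stand-ins rather than the paper's data, and scikit-learn's univariate SelectKBest is used as a simple substitute for CARS, which is not available in that library.

```python
# Sketch of a freshness-classification pipeline: first-derivative preprocessing,
# autoscaling, wavelength selection, and an SVM. Spectra and labels are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 148, 200          # 900-2500 nm band at an arbitrary resolution
X = rng.normal(size=(n_samples, n_wavelengths))
y = rng.integers(0, 2, n_samples)            # 0 = fresh, 1 = not fresh (illustrative labels)
X[y == 1, 50:60] += 0.8                      # inject a fake spoilage signature

X_fd = np.gradient(X, axis=1)                # first-derivative preprocessing
X_train, X_test, y_train, y_test = train_test_split(X_fd, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),                      # autoscale
                      SelectKBest(f_classif, k=10),          # keep 10 informative wavelengths
                      SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print(f"test recognition rate: {model.score(X_test, y_test):.2%}")
```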

  13. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is based mainly on predicting the lava flow's advance and its velocity. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict, in near real time, lava flow paths and rates of advance. This type of model, crucial for mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments appears easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity, around 5 Pa.s, varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed in Garel et al., JGR, 2012 a theoretical model confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide

  14. Reference analysis of the signal + background model in counting experiments

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2012-01-01

    The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. Under the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
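
    A minimal numerical sketch of this signal-plus-background counting model is shown below: n observed events, a background expectation b assumed known from prior studies, and the posterior for the signal rate s evaluated on a grid. A flat prior on s is used purely for illustration; the paper's point is precisely to replace such an ad hoc choice with a reference prior. The counts and background value are hypothetical.

```python
# Sketch: posterior for the signal rate s in a Poisson counting experiment with a
# known background b, using a flat prior on s for illustration only.
import numpy as np
from scipy.stats import poisson

n_obs = 7          # hypothetical observed count
b = 3.2            # hypothetical known background expectation

s = np.linspace(0.0, 20.0, 2001)                 # grid over the signal rate
ds = s[1] - s[0]
likelihood = poisson.pmf(n_obs, s + b)           # P(n | s, b)
posterior = likelihood / (likelihood.sum() * ds) # flat-prior posterior, normalized on the grid

mean_s = (s * posterior).sum() * ds
cdf = np.cumsum(posterior) * ds
upper95 = s[np.searchsorted(cdf, 0.95)]
print(f"posterior mean signal: {mean_s:.2f}, 95% upper limit: {upper95:.2f}")
```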

  15. An energetic model for macromolecules unfolding in stretching experiments.

    PubMed

    De Tommasi, D; Millardi, N; Puglisi, G; Saccomandi, G

    2013-11-01

    We propose a simple approach, based on the minimization of the total (entropic plus unfolding) energy of a two-state system, to describe the unfolding of multi-domain macromolecules (proteins, silks, polysaccharides, nanopolymers). The model is fully analytical and highlights the role of the different energetic components regulating the unfolding evolution. As an explicit example, we compare the analytical results with a titin atomic force microscopy stretch-induced unfolding experiment, showing the ability of the model to quantitatively reproduce the experimental behaviour. In the thermodynamic limit, the sawtooth force-elongation unfolding curve degenerates to a constant-force unfolding plateau.
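
    The energy-minimization idea can be sketched numerically: for a chain of N two-state domains, choose at each imposed extension the number of unfolded domains that minimizes an entropic term plus the unfolding energy, which produces the sawtooth force pattern mentioned above. The Gaussian-chain entropic term and all parameter values below are illustrative assumptions, not the paper's expressions.

```python
# Sketch of "minimize total (entropic + unfolding) energy" for N two-state domains,
# using a Gaussian-chain entropic term. All parameter values are illustrative.
import numpy as np

kT = 4.1               # pN*nm at room temperature
N = 8                  # number of domains
l_f, l_u = 5.0, 30.0   # folded / unfolded contour length per domain (nm)
b = 0.4                # Kuhn length (nm)
Q = 150.0              # unfolding energy per domain (pN*nm)

def total_energy(x, n):
    Lc = (N - n) * l_f + n * l_u                 # contour length with n unfolded domains
    return 3.0 * kT * x**2 / (2.0 * Lc * b) + n * Q

x = np.linspace(1.0, 200.0, 400)
n_opt = np.array([min(range(N + 1), key=lambda n: total_energy(xi, n)) for xi in x])
force = np.gradient([total_energy(xi, ni) for xi, ni in zip(x, n_opt)], x)
print("unfolded domains along the pull:", np.unique(n_opt))   # steps give a sawtooth force
print(f"peak force ~ {force.max():.1f} pN")
```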

  16. Explanatory Models and Illness Experience of People Living with HIV.

    PubMed

    Laws, M Barton

    2016-09-01

    Research into explanatory models of disease and illness typically explores people's conceptual understanding, and emphasizes differences between patient and provider models. However, the explanatory models framework of etiology, time and mode of onset of symptoms, pathophysiology, course of sickness, and treatment is built on categories characteristic of biomedical understanding. It is unclear how well these map onto people's lived experience of illness, and to the extent they do, how they translate. Scholars have previously studied the experience of people living with HIV through the lenses of stigma and identity theory. Here, through in-depth qualitative interviews with 32 people living with HIV in the northeast United States, we explored the experience and meanings of living with HIV more broadly using the explanatory models framework. We found that identity reformation is a major challenge for most people following the HIV diagnosis, and can be understood as a central component of the concept of course of illness. Salient etiological explanations are not biological, but rather social, such as betrayal, or living in a specific cultural milieu, and often self-evaluative. Given that symptoms can now largely be avoided through adherence to treatment, they are most frequently described in terms of observation of others who have not been adherent, or the resolution of symptoms following treatment. The category of pathophysiology is not ordinarily very relevant to the illness experience, as few respondents have any understanding of the mechanism of pathogenesis in HIV, nor much interest in it. Treatment has various personal meanings, both positive and negative, often profound. For people to engage successfully in treatment and live successfully with HIV, mechanistic explanation is of little significance. Rather, positive psychological integration of health promoting behaviors is of central importance.

  17. Explanatory Models and Illness Experience of People Living with HIV

    PubMed Central

    2016-01-01

    Research into explanatory models of disease and illness typically explores people’s conceptual understanding, and emphasizes differences between patient and provider models. However, the explanatory models framework of etiology, time and mode of onset of symptoms, pathophysiology, course of sickness, and treatment is built on categories characteristic of biomedical understanding. It is unclear how well these map onto people’s lived experience of illness, and to the extent they do, how they translate. Scholars have previously studied the experience of people living with HIV through the lenses of stigma and identity theory. Here, through in-depth qualitative interviews with 32 people living with HIV in the northeast United States, we explored the experience and meanings of living with HIV more broadly using the explanatory models framework. We found that identity reformation is a major challenge for most people following the HIV diagnosis, and can be understood as a central component of the concept of course of illness. Salient etiological explanations are not biological, but rather social, such as betrayal, or living in a specific cultural milieu, and often self-evaluative. Given that symptoms can now largely be avoided through adherence to treatment, they are most frequently described in terms of observation of others who have not been adherent, or the resolution of symptoms following treatment. The category of pathophysiology is not ordinarily very relevant to the illness experience, as few respondents have any understanding of the mechanism of pathogenesis in HIV, nor much interest in it. Treatment has various personal meanings, both positive and negative, often profound. For people to engage successfully in treatment and live successfully with HIV, mechanistic explanation is of little significance. Rather, positive psychological integration of health promoting behaviors is of central importance. PMID:26971285

  18. Numerical modeling of injection experiments at The Geysers

    SciTech Connect

    Pruess, K.; Enedy, S.

    1993-01-01

    Data from injection experiments in the southeast Geysers are presented that show strong interference (both negative and positive) with a neighboring production well. Conceptual and numerical models are developed that explain the negative interference (decline of production rate) in terms of heat transfer limitations and water-vapor relative permeability effects. Recovery and over-recovery following injection shut-in are attributed to boiling of injected fluid, with heat of vaporization provided by the reservoir rocks.

  19. Numerical modeling of injection experiments at The Geysers

    SciTech Connect

    Pruess, Karsten; Enedy, Steve

    1993-01-28

    Data from injection experiments in the southeast Geysers are presented that show strong interference (both negative and positive) with a neighboring production well. Conceptual and numerical models are developed that explain the negative interference (decline of production rate) in terms of heat transfer limitations and water-vapor relative permeability effects. Recovery and overrecovery following injection shut-in are attributed to boiling of injected fluid, with heat of vaporization provided by the reservoir rocks.

  20. Numerical Modeling of a Magnetic Flux Compression Experiment

    NASA Astrophysics Data System (ADS)

    Makhin, Volodymyr; Bauer, Bruno S.; Awe, Thomas J.; Fuelling, Stephan; Goodrich, Tasha; Lindemuth, Irvin R.; Siemon, Richard E.; Garanin, Sergei F.

    2007-06-01

    A possible plasma target for Magnetized Target Fusion (MTF) is a stable diffuse z-pinch in a toroidal cavity, like that in MAGO experiments. To examine key phenomena of such MTF systems, a magnetic flux compression experiment with this geometry is under design. The experiment is modeled with 3 codes: a slug model, the 1D Lagrangian RAVEN code, and the 1D or 2D Eulerian Magneto-Hydro-Radiative-Dynamics-Research (MHRDR) MHD simulation. Even without injection of plasma, high-Z wall plasma is generated by eddy-current Ohmic heating from MG fields. A significant fraction of the available liner kinetic energy goes into Ohmic heating and compression of liner and central-core material. Despite these losses, the efficiency of liner compression, expressed as compressed magnetic energy relative to liner kinetic energy, can be close to 50%. With initial fluctuations (1%) imposed on the liner and central conductor density, 2D modeling manifests liner intrusions, caused by the m = 0 Rayleigh-Taylor instability during liner deceleration, and central conductor distortions, caused by the m = 0 curvature-driven MHD instability. At many locations, these modes reduce the gap between the liner and the central core by about a factor of two, to of order 1 mm, at the time of peak magnetic field.

  1. Modeling of high power ICRF heating experiments on TFTR

    SciTech Connect

    Phillips, C.K.; Wilson, J.R.; Bell, M.; Fredrickson, E.; Hosea, J.C.; Majeski, R.; Ramsey, A.; Rogers, J.H.; Schilling, G.; Skinner, C.; Stevens, J.E.; Taylor, G.; Wong, K.L.; Khudaleev, A.; Petrov, M.P.; Murakami, M.

    1993-04-01

    Over the past two years, ICRF heating experiments have been performed on TFTR in the hydrogen minority heating regime with power levels reaching 11.2 MW in helium-4 majority plasmas and 8.4 MW in deuterium majority plasmas. For these power levels, the minority hydrogen ions, which comprise typically less than 10% of the total electron density, evolve into a very energetic, anisotropic non-Maxwellian distribution. Indeed, the excess perpendicular stored energy in these plasmas associated with the energetic minority tail ions is often as high as 25% of the total stored energy, as inferred from magnetic measurements. Enhanced losses of 0.5 MeV protons consistent with the presence of an energetic hydrogen component have also been observed. In ICRF heating experiments on JET at comparable and higher power levels and with similar parameters, it has been suggested that finite banana width effects have a noticeable effect on the ICRF power deposition. In particular, models indicate that finite orbit width effects lead to a reduction in the total stored energy and of the tail energy in the center of the plasma, relative to that predicted by the zero banana width models. In this paper, detailed comparisons between the calculated ICRF power deposition profiles and experimentally measured quantities will be presented which indicate that significant deviations from the zero banana width models occur even for modest power levels (P_rf ≈ 6 MW) in the TFTR experiments.

  2. Modeling of high power ICRF heating experiments on TFTR

    SciTech Connect

    Phillips, C.K.; Wilson, J.R.; Bell, M.; Fredrickson, E.; Hosea, J.C.; Majeski, R.; Ramsey, A.; Rogers, J.H.; Schilling, G.; Skinner, C.; Stevens, J.E.; Taylor, G.; Wong, K.L.; Khudaleev, A.; Petrov, M.P.; Murakami, M.

    1993-01-01

    Over the past two years, ICRF heating experiments have been performed on TFTR in the hydrogen minority heating regime with power levels reaching 11.2 MW in helium-4 majority plasmas and 8.4 MW in deuterium majority plasmas. For these power levels, the minority hydrogen ions, which comprise typically less than 10% of the total electron density, evolve into a very energetic, anisotropic non-Maxwellian distribution. Indeed, the excess perpendicular stored energy in these plasmas associated with the energetic minority tail ions is often as high as 25% of the total stored energy, as inferred from magnetic measurements. Enhanced losses of 0.5 MeV protons consistent with the presence of an energetic hydrogen component have also been observed. In ICRF heating experiments on JET at comparable and higher power levels and with similar parameters, it has been suggested that finite banana width effects have a noticeable effect on the ICRF power deposition. In particular, models indicate that finite orbit width effects lead to a reduction in the total stored energy and of the tail energy in the center of the plasma, relative to that predicted by the zero banana width models. In this paper, detailed comparisons between the calculated ICRF power deposition profiles and experimentally measured quantities will be presented which indicate that significant deviations from the zero banana width models occur even for modest power levels (P_rf ≈ 6 MW) in the TFTR experiments.

  3. Experiments and Computational Modeling of Pulverized-Coal Ignition.

    SciTech Connect

    Chen, J.C.

    1997-08-01

    Under typical conditions of pulverized-coal combustion, which is characterized by fine particles heated at very high rates, there is currently a lack of certainty regarding the ignition mechanism of bituminous and lower rank coals. It is unclear whether ignition occurs first at the particle-oxygen interface (heterogeneous ignition) or if it occurs in the gas phase due to ignition of the devolatilization products (homogeneous ignition). Furthermore, there have been no previous studies aimed at determining the dependence of the ignition mechanism on variations in experimental conditions, such as particle size, oxygen concentration, and heating rate. Finally, there is a need to improve current mathematical models of ignition to realistically and accurately depict the particle-to-particle variations that exist within a coal sample. Such a model is needed to extract useful reaction parameters from ignition studies, and to interpret ignition data in a more meaningful way. We propose to examine fundamental aspects of coal ignition through (1) experiments to determine the ignition mechanism of various coals by direct observation, and (2) modeling of the ignition process to derive rate constants and to provide a more insightful interpretation of data from ignition experiments. We propose to use a novel laser-based ignition experiment to achieve our objectives.

  4. Final Report: "Collaborative Project. Understanding the Chemical Processes That Affect Growth Rates of Freshly Nucleated Particles"

    SciTech Connect

    Smith, James N.; McMurry, Peter H.

    2015-11-12

    This final technical report describes our research activities that have, as the ultimate goal, the development of a model that explains growth rates of freshly nucleated particles. The research activities, which combine field observations with laboratory experiments, explore the relationship between concentrations of gas-phase species that contribute to growth and the rates at which those species are taken up. We also describe measurements of the chemical composition of freshly nucleated particles in a variety of locales, as well as properties (especially hygroscopicity) that influence their effects on climate. Our measurements include a self-organized, DOE-ARM funded project at the Southern Great Plains site, the New Particle Formation Study (NPFS), which took place during spring 2013. NPFS data are available to the research community in the ARM data archive, providing a unique suite of trace gas and aerosol observations associated with the formation and growth of atmospheric aerosol particles.

  5. Theoretical and experimental studies on low-temperature adsorption drying of fresh ginger

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxi; Xu, Wei; Ding, Jing; Zhao, Yi

    2006-03-01

    The working principle of low-temperature adsorption drying and the advantages of its application to the drying of biological materials are introduced in this paper. Using fresh ginger as the drying material, the effects of temperature and relative humidity on its drying characteristics were examined. The results show that the drying rate increases with increasing temperature or decreasing humidity. The drying time to equilibrium is almost the same under different humidity conditions, but a lower equilibrium moisture content can be reached under low humidity. The shrinkage characteristics of fresh ginger were also studied. The change of its surface appearance during the drying process was characterized with Charge-Coupled Device (CCD) and Environmental Scanning Electron Microscopy (ESEM) techniques. A mathematical model of the drying dynamics was set up based on the experiments.
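
    The abstract's drying-dynamics model is not reproduced here, but a common thin-layer (Lewis-type) model, MR(t) = exp(-k t), illustrates how such kinetics are fitted to drying-curve data; a higher drying temperature or lower humidity shows up as a larger rate constant k. The moisture-ratio data below are made up for illustration.

```python
# Sketch: fit a thin-layer (Lewis-type) drying model MR(t) = exp(-k*t) to
# hypothetical moisture-ratio data. k and the data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 120, 180, 240, 300], dtype=float)     # drying time, min
mr = np.array([1.00, 0.74, 0.55, 0.31, 0.18, 0.11, 0.07])       # moisture ratio (made up)

lewis = lambda t, k: np.exp(-k * t)
(k_fit,), _ = curve_fit(lewis, t, mr, p0=[0.01])
print(f"fitted drying constant k = {k_fit:.4f} 1/min")
# A higher drying temperature or lower humidity would appear as a larger fitted k.
```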

  6. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  7. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg, 2009; Morgon 2002, 2003, 2005; Gula 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between

  8. Opinion Formation by Social Influence: From Experiments to Modeling

    PubMed Central

    Chacoma, Andrés; Zanette, Damián H.

    2015-01-01

    Predicting different forms of collective behavior in human populations, as the outcome of individual attitudes and their mutual influence, is a question of major interest in social sciences. In particular, processes of opinion formation have been theoretically modeled on the basis of a formal similarity with the dynamics of certain physical systems, giving rise to an extensive collection of mathematical models amenable to numerical simulation or even to exact solution. Empirical ground for these models is however largely missing, which confines them to the level of mere metaphors of the real phenomena they aim at explaining. In this paper we present results of an experiment which quantifies the change in the opinions given by a subject on a set of specific matters under the influence of others. The setup is a variant of a recently proposed experiment, where the subject’s confidence in his or her opinion was evaluated as well. In our realization, which records the quantitative answers of 85 subjects to 20 questions before and after an influence event, the focus is put on characterizing the change in answers and confidence induced by such influence. Similarities and differences with the previous version of the experiment are highlighted. We find that confidence changes are to a large extent independent of any other recorded quantity, while opinion changes are strongly modulated by the original confidence. On the other hand, opinion changes are not influenced by the initial difference with the reference opinion. The typical time scales on which opinion varies are moreover substantially longer than those of confidence change. Experimental results are then used to estimate parameters for a dynamical agent-based model of opinion formation in a large population. In the context of the model, we study the convergence to full consensus and the effect of opinion leaders on the collective distribution of opinions. PMID:26517825

  9. Opinion Formation by Social Influence: From Experiments to Modeling.

    PubMed

    Chacoma, Andrés; Zanette, Damián H

    2015-01-01

    Predicting different forms of collective behavior in human populations, as the outcome of individual attitudes and their mutual influence, is a question of major interest in social sciences. In particular, processes of opinion formation have been theoretically modeled on the basis of a formal similarity with the dynamics of certain physical systems, giving rise to an extensive collection of mathematical models amenable to numerical simulation or even to exact solution. Empirical ground for these models is however largely missing, which confines them to the level of mere metaphors of the real phenomena they aim at explaining. In this paper we present results of an experiment which quantifies the change in the opinions given by a subject on a set of specific matters under the influence of others. The setup is a variant of a recently proposed experiment, where the subject's confidence in his or her opinion was evaluated as well. In our realization, which records the quantitative answers of 85 subjects to 20 questions before and after an influence event, the focus is put on characterizing the change in answers and confidence induced by such influence. Similarities and differences with the previous version of the experiment are highlighted. We find that confidence changes are to a large extent independent of any other recorded quantity, while opinion changes are strongly modulated by the original confidence. On the other hand, opinion changes are not influenced by the initial difference with the reference opinion. The typical time scales on which opinion varies are moreover substantially longer than those of confidence change. Experimental results are then used to estimate parameters for a dynamical agent-based model of opinion formation in a large population. In the context of the model, we study the convergence to full consensus and the effect of opinion leaders on the collective distribution of opinions.
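
    A generic sketch of a confidence-weighted opinion-update rule, in the spirit of (but not identical to) the agent-based model described above, is given below: agents shift toward a reference opinion by an amount that shrinks with their own confidence, and influence events firm up confidence. All parameters are illustrative.

```python
# Generic confidence-weighted opinion-update sketch (not the authors' exact equations).
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_steps = 500, 2000
opinion = rng.uniform(-1.0, 1.0, n_agents)
confidence = rng.uniform(0.1, 1.0, n_agents)

for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, 2)                 # pick an agent and a reference peer
    shift = (opinion[j] - opinion[i]) * (1.0 - confidence[i]) * 0.5
    opinion[i] += shift                                 # low-confidence agents move further
    confidence[i] = min(1.0, confidence[i] + 0.01)      # influence events firm up confidence

print(f"opinion spread after {n_steps} pairwise interactions: {opinion.std():.3f}")
```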

  10. Multi-injector modeling of transverse combustion instability experiments

    NASA Astrophysics Data System (ADS)

    Shipley, Kevin J.

    Concurrent simulations and experiments are used to study combustion instabilities in a multiple injector element combustion chamber. The experiments employ a linear array of seven coaxial injector elements positioned atop a rectangular chamber. Different levels of instability are driven in the combustor by varying the operating and geometry parameters of the outer driving injector elements located near the chamber end-walls. The objectives of the study are to apply a reduced three-injector model to generate a computational test bed for the evaluation of injector response to transverse instability, to apply a full seven-injector model to investigate the inter-element coupling between injectors in response to transverse instability, and to further develop this integrated approach as a key element in a predictive methodology that relies heavily on subscale test and simulation. To measure the effects of the transverse wave on a central study injector element, two opposing windows are placed in the chamber to allow optical access. The chamber is extensively instrumented with high-frequency pressure transducers. High-fidelity computational fluid dynamics simulations are used to model the experiment. Specifically, three-dimensional detached eddy simulations (DES) are used. Two computational approaches are investigated. The first approach models the combustor with three center injectors and forces transverse waves in the chamber with a wall velocity function at the chamber side walls. Different levels of pressure oscillation amplitudes are possible by varying the amplitude of the forcing function. The purpose of this method is to focus on the combustion response of the study element. In the second approach, all seven injectors are modeled and self-excited combustion instability is achieved. This realistic model of the chamber allows the study of inter-element flow dynamics, e.g., how the resonant motions in the injector tubes are coupled through the transverse pressure

  11. Rates of heat exchange in largemouth bass: experiment and model

    SciTech Connect

    Weller, D.E.; Anderson, D.J.; DeAngelis, D.L.; Coutant, C.C.

    1984-01-01

    A mathematical model of body-core temperature change in fish was derived by modifying Newton's law of cooling to include an initial time lag in temperature adjustment. This model was tested with data from largemouth bass (Micropterus salmoides) subjected to step changes in ambient temperature and to more complex ambient regimes. Nonlinear least squares was used to fit model parameters k (min⁻¹) and L (initial lag time in minutes) to time series temperature data from step-change experiments. Temperature change half-times (t₁/₂, in minutes) were calculated from k and L. Significant differences (P < 0.05) were found in these parameters between warming and cooling conditions and between live and dead fish. Statistically significant regressions were developed relating k and t₁/₂ to weight and L to length. Estimates of k and L from the step-change experiments were used with a computer solution of the model to simulate body temperature response to continuously varying ambient regimes. These simulations explained between 52% and 99% of the variation in core temperature, with absolute errors in prediction ranging between 0 and 0.61 °C when ambient temperature was varied over 4 °C. 14 references, 6 figures, 4 tables.
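    A minimal sketch of the lagged Newton's-law response described above, assuming the common reading that the body temperature stays at its initial value for t < L and then relaxes exponentially toward ambient with rate k, so that the half-time is t₁/₂ = L + ln(2)/k. The parameter values are illustrative, not the fitted values from the paper.

      import numpy as np

      def body_temp(t, T0, Ta, k, L):
          """Newton's law of cooling with an initial lag L (min): no response for
          t < L, exponential approach to the ambient temperature Ta afterwards."""
          t = np.asarray(t, dtype=float)
          return np.where(t < L, T0, Ta + (T0 - Ta) * np.exp(-k * (t - L)))

      k, L = 0.05, 2.0               # min⁻¹ and min, illustrative only
      T0, Ta = 20.0, 24.0            # step change in ambient temperature, °C
      t = np.linspace(0.0, 120.0, 241)
      T = body_temp(t, T0, Ta, k, L)
      t_half = L + np.log(2) / k     # temperature-change half-time implied by k and L
      print(round(t_half, 1), "min")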

  12. Reverse draw solute permeation in forward osmosis: modeling and experiments.

    PubMed

    Phillip, William A; Yong, Jui Shan; Elimelech, Menachem

    2010-07-01

    Osmotically driven membrane processes are an emerging set of technologies that show promise in water and wastewater treatment, desalination, and power generation. The effective operation of these systems requires that the reverse flux of draw solute from the draw solution into the feed solution be minimized. A model was developed that describes the reverse permeation of draw solution across an asymmetric membrane in forward osmosis operation. Experiments were carried out to validate the model predictions with a highly soluble salt (NaCl) as a draw solution and a cellulose acetate membrane designed for forward osmosis. Using independently determined membrane transport coefficients, strong agreement between the model predictions and experimental results was observed. Further analysis shows that the reverse flux selectivity, the ratio of the forward water flux to the reverse solute flux, is a key parameter in the design of osmotically driven membrane processes. The model predictions and experiments demonstrate that this parameter is independent of the draw solution concentration and the structure of the membrane support layer. The value of the reverse flux selectivity is determined solely by the selectivity of the membrane active layer.
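    A sketch of the reverse flux selectivity under the standard solution-diffusion picture with a van't Hoff osmotic pressure, in which Jw/Js reduces to A·n·Rg·T/B and is therefore set only by the active-layer permeabilities, consistent with the abstract. The specific coefficient values below are order-of-magnitude placeholders, not the membrane coefficients measured in the paper.

      R_GAS = 8.314          # J mol⁻¹ K⁻¹

      def reverse_flux_selectivity(A, B, n=2, T=298.15):
          """Ratio of forward water flux to reverse solute flux, Jw/Js = A*n*R*T/B.
          A: water permeability (m s⁻¹ Pa⁻¹); B: solute permeability (m s⁻¹);
          n: van't Hoff factor (2 for NaCl).  Result is in m³ of water per mol of solute."""
          return A * n * R_GAS * T / B

      A = 1.0e-12            # m s⁻¹ Pa⁻¹, illustrative
      B = 1.0e-7             # m s⁻¹, illustrative
      print(reverse_flux_selectivity(A, B))   # ~0.05 m³/mol, i.e. ~50 L of water per mol of NaCl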

  13. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection indicating minimal effects.

  14. Integrated Experiment and Modeling of Insensitive High Explosives

    NASA Astrophysics Data System (ADS)

    Stewart, D. Scott; Lambert, David E.; Yoo, Sunhee; Lieber, Mark; Holman, Steven

    2009-12-01

    New design paradigms for insensitive high explosives are being sought for use in munitions applications that require enhanced safety, reliability and performance. We describe recent work of our group that uses an integrated approach to develop predictive models, guided by experiments. Insensitive explosives can have relatively longer detonation reaction zones and slower reaction rates than their sensitive counterparts. We employ reactive flow models that are constrained by detonation shock dynamics (DSD) to pose candidate predictive models. We discuss how varying the pressure-dependent reaction rate exponent and reaction order affects the length of the supporting reaction zone, the detonation velocity-curvature relation, the computed critical energy required for initiation, and the relation between the diameter effect curve and the corresponding normal detonation velocity-curvature relation.

  15. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE: to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  16. Beyond Performance: A Motivational Experiences Model of Stereotype Threat

    PubMed Central

    Thoman, Dustin B.; Smith, Jessi L.; Brown, Elizabeth R.; Chase, Justin; Lee, Joo Young K.

    2013-01-01

    The contributing role of stereotype threat (ST) to learning and performance decrements for stigmatized students in highly evaluative situations has been vastly documented and is now widely known by educators and policy makers. However, recent research illustrates that underrepresented and stigmatized students’ academic and career motivations are influenced by ST more broadly, particularly through influences on achievement orientations, sense of belonging, and intrinsic motivation. Such a focus moves conceptualizations of ST effects in education beyond the influence on a student’s performance, skill level, and feelings of self-efficacy per se to experiencing greater belonging uncertainty and lower interest in stereotyped tasks and domains. These negative experiences are associated with important outcomes such as decreased persistence and domain identification, even among students who are high in achievement motivation. In this vein, we present and review support for the Motivational Experience Model of ST, a self-regulatory model framework for integrating research on ST, achievement goals, sense of belonging, and intrinsic motivation to make predictions for how stigmatized students’ motivational experiences are maintained or disrupted, particularly over long periods of time. PMID:23894223

  17. Medical students' emotional development in early clinical experience: a model.

    PubMed

    Helmich, Esther; Bolhuis, Sanneke; Laan, Roland; Dornan, Tim; Koopmans, Raymond

    2014-08-01

    Dealing with emotions is a critical feature of professional behaviour. There are no comprehensive theoretical models, however, explaining how medical students learn about emotions. We aimed to explore factors affecting their emotions and how they learn to deal with emotions in themselves and others. During a first-year nursing attachment in hospitals and nursing homes, students wrote daily about their most impressive experiences, explicitly reporting what they felt, thought, and did. In a subsequent interview, they discussed those experiences in greater detail. Following a grounded theory approach, we conducted a constant comparative analysis, collecting and then interpreting data, and allowing the interpretation to inform subsequent data collection. Impressive experiences set up tensions, which gave rise to strong emotions. We identified four 'axes' along which tensions were experienced: 'idealism versus reality', 'critical distance versus adaptation', 'involvement versus detachment' and 'feeling versus displaying'. We found many factors, which influenced how respondents relieved those tensions. Their personal attributes and social relationships both inside and outside the medical community were important ones. Respondents' positions along the different dimensions, as determined by the balance between attributes and tensions, shaped their learning outcomes. Medical students' emotional development occurs through active participation in medical practice and having impressive experiences within relationships with patients and others on wards. Tensions along four dimensions give rise to strong emotions. Gaining insight into the many conditions that influence students' learning about emotions might support educators and supervisors in fostering medical students' emotional and professional development. PMID:23949724

  18. Dynamic crack initiation toughness : experiments and peridynamic modeling.

    SciTech Connect

    Foster, John T.

    2009-10-01

    This is a dissertation on research conducted studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integral-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model

  19. Insight Into Sustainability of Fresh Water

    NASA Astrophysics Data System (ADS)

    Simonovic, S. P.

    2002-05-01

    Global modeling often assumes that water is not an issue on the macro scale. The WorldWater system dynamics model has been developed to model the global water balance and to capture the dynamic character of the main variables affecting water availability and use in the future. In spite of not being a novel approach, system dynamics offers (i) a new way of identifying factors that affect the future availability of fresh water and (ii) insight into the impacts of different development strategies on the future availability of fresh water. WorldWater simulations clearly demonstrate the strong feedback relation between water availability and different aspects of world development. Results of numerous simulations contradict the assumption made by many global modelers and confirm that water is an issue on the global scale. Two major observations are made from early model simulations: (a) the use of clean water for dilution and transport of wastewater, if not dealt with in other ways, imposes a major stress on the global water balance; and (b) water use by different sectors demonstrates quite different dynamics than predicted by classical forecasting tools and other water models. Inherent linkages of the water quantity and quality sectors with the food, industry, persistent pollution, technology, and nonrenewable resources sectors of the model create overshoot-and-collapse behavior in water use dynamics. This presentation discusses a number of different water-related scenarios and their implications for the future water balance. In particular, two extreme scenarios (business as usual, named 'Chaos', and unlimited desalination, named 'Ocean') will be discussed. Based on the conclusions derived from these two extreme cases, a set of more moderate and realistic scenarios (named 'Conservation') is proposed.

  20. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  1. Flexible Simulation Tools for Modeling Ion-Driven HEDP Experiments

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth; Sides, Scott; Stoltz, Peter; Barnard, John

    2006-10-01

    We are developing new software libraries to assist in the simulation of planned ion-driven high energy density physics (HEDP) experiments. These libraries are designed to be cross-platform and multi-language so that they may easily be incorporated into multiple simulation packages running on various architectures and written in different languages. Relevant to the production of HEDP states, recently we have implemented models of electronic and nuclear stopping of ions in cold targets. We show how these new stopping algorithms allow us to predict that a beam of 2.82 MeV lithium ions could heat an aluminum foil to 2-3 eV. Such a beam is under consideration for the NDCX II experiment at Lawrence Berkeley National Laboratory. We also discuss modification to these stopping powers for warm targets.

  2. Bounds on collapse models from cold-atom experiments

    NASA Astrophysics Data System (ADS)

    Bilardello, Marco; Donadi, Sandro; Vinante, Andrea; Bassi, Angelo

    2016-11-01

    The spontaneous localization mechanism of collapse models induces a Brownian motion in all physical systems. This effect is very weak, but experimental progress in creating ultracold atomic systems can be used to detect it. In this paper, we considered a recent experiment (Kovachy et al., 2015), where an atomic ensemble was cooled down to picokelvins. Any Brownian motion induces an extra increase of the position variance of the gas. We study this effect by solving the dynamical equations for the Continuous Spontaneous Localization (CSL) model, as well as for its non-Markovian and dissipative extensions. The resulting bounds, at a 95% confidence level, are beaten only by measurements of spontaneous X-ray emission and by experiments with cantilevers (in the latter case, only for r_C ≥ 10⁻⁷ m, where r_C is one of the two collapse parameters of the CSL model). We show that, contrary to the bounds given by X-ray measurements, non-Markovian effects do not change the bounds, for any reasonable choice of a frequency cutoff in the spectrum of the collapse noise. Therefore the bounds considered here are more robust. We also show that dissipative effects are unimportant for a large range of noise temperatures, while at low temperatures the excluded region in parameter space shrinks as the temperature is lowered.

  3. Models from experiments: combinatorial drug perturbations of cancer cells

    PubMed Central

    Nelander, Sven; Wang, Weiqing; Nilsson, Björn; She, Qing-Bai; Pratilas, Christine; Rosen, Neal; Gennemark, Peter; Sander, Chris

    2008-01-01

    We present a novel method for deriving network models from molecular profiles of perturbed cellular systems. The network models aim to predict quantitative outcomes of combinatorial perturbations, such as drug pair treatments or multiple genetic alterations. Mathematically, we represent the system by a set of nodes, representing molecular concentrations or cellular processes, a perturbation vector and an interaction matrix. After perturbation, the system evolves in time according to differential equations with built-in nonlinearity, similar to Hopfield networks, capable of representing epistasis and saturation effects. For a particular set of experiments, we derive the interaction matrix by minimizing a composite error function, aiming at accuracy of prediction and simplicity of network structure. To evaluate the predictive potential of the method, we performed 21 drug pair treatment experiments in a human breast cancer cell line (MCF7) with observation of phospho-proteins and cell cycle markers. The best derived network model rediscovered known interactions and contained interesting predictions. Possible applications include the discovery of regulatory interactions, the design of targeted combination therapies and the engineering of molecular biological networks. PMID:18766176
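    The abstract specifies Hopfield-like differential equations driven by a perturbation vector and an interaction matrix; one plausible concrete form is sketched below (forward-Euler integration of dx/dt = tanh(Wx + u) − αx). The toy matrix, perturbation and decay rate are illustrative, not the network inferred from the MCF7 data.

      import numpy as np

      def simulate(W, u, x0, alpha=1.0, dt=0.01, steps=2000):
          """Integrate dx/dt = tanh(W @ x + u) - alpha * x with forward Euler.
          x: node activities (e.g. phospho-protein readouts), u: perturbation vector
          (e.g. a drug pair), W: interaction matrix to be inferred from the data."""
          x = np.array(x0, dtype=float)
          for _ in range(steps):
              x += dt * (np.tanh(W @ x + u) - alpha * x)
          return x

      W = np.array([[0.0, -1.5, 0.0],     # toy 3-node network with one inhibitory edge
                    [0.0,  0.0, 1.0],
                    [0.5,  0.0, 0.0]])
      u = np.array([1.0, 0.0, -0.5])      # combined perturbation
      print(simulate(W, u, x0=np.zeros(3)))   # predicted steady-state response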

  4. Selection Experiments in the Penna Model for Biological Aging

    NASA Astrophysics Data System (ADS)

    Medeiros, G.; Idiart, M. A.; de Almeida, R. M. C.

    We consider the Penna model for biological aging to investigate correlations between early fertility and late life survival rates in populations at equilibrium. We consider inherited initial reproduction ages together with a reproduction cost translated into a probability that mother and offspring die at birth, depending on the mother's age. For convenient sets of parameters, the equilibrated populations present genetic variability with regard to both genetically programmed death age and initial reproduction age. In the asexual Penna model, a negative correlation between early life fertility and late life survival rates naturally emerges in the stationary solutions. In the sexual Penna model, selection experiments are performed where individuals are sorted by initial reproduction age from the equilibrated populations and the separated populations are evolved independently. After a transient, a negative correlation between early fertility and late age survival rates also emerges in the sense that populations that start reproducing earlier present smaller average genetically programmed death age. These effects appear due to the age structure of populations in the steady state solution of the evolution equations. We claim that the same demographic effects may be playing an important role in selection experiments in the laboratory.
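    For readers unfamiliar with the model, the following is a compact sketch of the standard asexual Penna bit-string model: deleterious mutations activate at the age given by their bit position, death occurs at a mutation threshold T or by a Verhulst crowding factor, and reproduction starts at age R. The inherited reproduction age and birth-cost features added in the paper are omitted, and all parameter values are illustrative.

      import random

      GENOME_BITS, T, R, M, N_MAX = 32, 3, 8, 1, 5000   # illustrative parameters

      def new_genome(parent_genome):
          g = parent_genome
          for _ in range(M):                      # M fresh deleterious mutations
              g |= 1 << random.randrange(GENOME_BITS)
          return g

      population = [{"age": 0, "genome": 0} for _ in range(1000)]
      for year in range(200):
          survivors = []
          for ind in population:
              ind["age"] += 1
              active = bin(ind["genome"] & ((1 << ind["age"]) - 1)).count("1")
              crowded = random.random() < len(population) / N_MAX   # Verhulst factor
              if ind["age"] >= GENOME_BITS or active >= T or crowded:
                  continue                        # death by mutations, old age or crowding
              survivors.append(ind)
              if ind["age"] >= R:                 # reproduction above the minimum age
                  survivors.append({"age": 0, "genome": new_genome(ind["genome"])})
          population = survivors
      print(len(population))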

  5. Rolling friction—models and experiment. An undergraduate student project

    NASA Astrophysics Data System (ADS)

    Vozdecký, L.; Bartoš, J.; Musilová, J.

    2014-09-01

    In this paper a model of rolling friction (rolling resistance) is studied theoretically and experimentally at the level of undergraduate fundamental general physics courses. Rolling motions of a cylinder along horizontal or inclined planes are studied by simple experiments, measuring deformations of the underlay or of the rolling body. The rolling of a hard cylinder on a soft underlay as well as of a soft cylinder on a hard underlay is studied. The experimental data are analysed with the open-source software Tracker, appropriate for use at the undergraduate level of physics. Interpretation of results is based on elementary considerations comprehensible to beginning university students. It appears that the commonly accepted model of rolling resistance based on the idea of a warp (little bulge) on the underlay in front of the rolling body does not agree with the experimental results, even for a soft underlay and a hard rolling body. An alternative model of rolling resistance is suggested, in agreement with experiment, and the corresponding concept of the rolling resistance coefficient is presented. In addition to the results obtained, we conclude that the project can be used as a task for students in practical exercises of fundamental undergraduate general physics courses. Projects of this type effectively contribute to the development of students' physical thinking.
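    Whatever its microscopic origin, the quantity extracted from such experiments is a rolling resistance coefficient c_rr = b/r (offset of the normal force over the radius). As a minimal sketch of how the coefficient is used, taking torques about the contact point for a uniform solid cylinder rolling on a horizontal plane gives a deceleration a = (2/3)·c_rr·g; the launch speed and coefficient below are illustrative, not measured values.

      def rolling_deceleration(c_rr, g=9.81):
          """Deceleration of a uniform solid cylinder rolling without slipping on a
          horizontal plane, with the resistance modelled as the normal force acting
          a distance b ahead of the contact point (c_rr = b/r).  Torques about the
          contact point (I_contact = 1.5*m*r**2) give a = (2/3)*c_rr*g."""
          return (2.0 / 3.0) * c_rr * g

      v0, c_rr = 1.2, 0.004          # illustrative launch speed (m/s) and coefficient
      a = rolling_deceleration(c_rr)
      print(v0 / a, "s to stop,", v0 ** 2 / (2 * a), "m travelled")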

  6. Antimicrobial packaging for fresh-cut fruits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fresh-cut fruits are minimally processed produce which are consumed directly at their fresh stage without any further kill step. Microbiological quality and safety are major challenges to fresh-cut fruits. Antimicrobial packaging is one of the innovative food packaging systems that is able to kill o...

  7. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw
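    As a much-reduced illustration of the thermal-chemical part of such an analysis, the sketch below integrates a zero-dimensional, single-step Arrhenius energy balance to estimate an adiabatic induction (time-to-runaway). It is not the Tarver-McGuire RDX mechanism used in the paper, and all material constants are illustrative rather than PBXN-109 values.

      import math

      Q  = 2.0e6      # heat of reaction, J/kg (illustrative)
      A  = 1.0e13     # pre-exponential factor, 1/s (illustrative)
      E  = 1.8e5      # activation energy, J/mol (illustrative)
      c  = 1.2e3      # specific heat, J/(kg K) (illustrative)
      Rg = 8.314

      def induction_time(T0, dt=0.1, t_max=1.0e5):
          """Integrate dT/dt = (Q*A/c)*exp(-E/(Rg*T)) until thermal runaway."""
          T, t = T0, 0.0
          while t < t_max:
              dTdt = (Q * A / c) * math.exp(-E / (Rg * T))
              if dTdt * dt > 50.0:            # temperature rise diverging -> runaway
                  return t
              T += dTdt * dt
              t += dt
          return None

      print(induction_time(550.0), "s to runaway at 550 K (illustrative)")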

  8. Experiments of reconstructing discrete atmospheric dynamic models from data (I)

    NASA Astrophysics Data System (ADS)

    Lin, Zhenshan; Zhu, Yanyu; Deng, Ziwang

    1995-03-01

    In this paper, we give some experimental results of our study in reconstructing discrete atmospheric dynamic models from data. After a great number of numerical experiments, we found that the logistic map, xₙ₊₁ = 1 − μxₙ², could be used for monthly mean temperature prediction when it was approaching the chaotic region, and that its predictions were in states opposite to the observed data. This means that the nonlinear developing behavior of the monthly mean temperature system is bifurcating back into the critical chaotic states from the chaotic ones.
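    The quoted map is easy to reproduce; a minimal sketch iterating xₙ₊₁ = 1 − μxₙ² is given below. The μ values are illustrative; for this parameterization the period-doubling accumulation point (onset of chaos) lies near μ ≈ 1.401.

      def logistic_orbit(mu, x0=0.1, n=60):
          """Iterate the map x_{n+1} = 1 - mu * x_n**2 quoted in the abstract."""
          xs = [x0]
          for _ in range(n):
              xs.append(1.0 - mu * xs[-1] ** 2)
          return xs

      for mu in (0.8, 1.25, 1.401):       # periodic, period-doubled, near-chaotic
          print(mu, [round(x, 4) for x in logistic_orbit(mu)[-4:]])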

  9. Bending lipid membranes: experiments after W. Helfrich's model.

    PubMed

    Bassereau, Patricia; Sorre, Benoit; Lévy, Aurore

    2014-06-01

    The current description of biomembrane mechanics originates in large part from W. Helfrich's model. Based on his continuum theory, many experiments have been performed in the past four decades on simplified membranes in order to characterize the mechanical properties of lipid membranes and the contribution of polymers or proteins. The long-term goal was to develop a better understanding of the mechanical properties of cell membranes. In this paper, we will review representative experimental approaches that were developed during this period and the main results that were obtained.

  10. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we will discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also, planned work to include dielectrophoretic, Van der Waals forces, and advanced mechanical forces into the software will be discussed.
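    The screened Coulombic pair force mentioned above has a standard Yukawa-type form; a minimal sketch suitable for a DEM force callback is given below. The screening length, charges and separation are illustrative values, not the experimentally derived charge-generation constants.

      import numpy as np

      EPS0 = 8.854e-12                      # vacuum permittivity, F/m

      def screened_coulomb_force(q1, q2, r_vec, screening_length):
          """Force on particle 1 due to particle 2:
          F = q1*q2/(4*pi*eps0*r**2) * (1 + r/lambda) * exp(-r/lambda), along the line
          of centres (repulsive for like charges, attractive for opposite charges)."""
          r_vec = np.asarray(r_vec, dtype=float)
          r = np.linalg.norm(r_vec)
          mag = q1 * q2 / (4 * np.pi * EPS0 * r ** 2) * (1 + r / screening_length) * np.exp(-r / screening_length)
          return mag * r_vec / r

      # illustrative: two glass spheres carrying +10 pC and -10 pC, 5 mm apart
      print(screened_coulomb_force(1e-11, -1e-11, [5e-3, 0.0, 0.0], screening_length=1e-2))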

  11. Nursing preceptors' experiences of two clinical education models.

    PubMed

    Mamhidir, Anna-Greta; Kristofferzon, Marja-Leena; Hellström-Hyson, Eva; Persson, Elisabeth; Mårtensson, Gunilla

    2014-08-01

    Preceptors play an important role in the process of developing students' knowledge and skills. There is an ongoing search for the best learning and teaching models in clinical education. Little is known about preceptors' perspectives on different models. The aim of the study was to describe nursing preceptors' experiences of two models of clinical education: peer learning and traditional supervision. A descriptive design and qualitative approach was used. Eighteen preceptors from surgical and medical departments at two hospitals were interviewed, ten representing peer learning (students work in pairs) and eight traditional supervision (one student follows a nurse during a shift). The findings showed that preceptors using peer learning created room for students to assume responsibility for their own learning, challenged students' knowledge by refraining from stepping in and encouraged critical thinking. Using traditional supervision, the preceptors' individual ambitions influenced the preceptorship and their own knowledge was emphasized as being important to impart. They demonstrated, observed and gradually relinquished responsibility to the students. The choice of clinical education model is important. Peer learning seemed to create learning environments that integrate clinical and academic skills. Investigation of pedagogical models in clinical education should be of major concern to managers and preceptors. PMID:24512652

  12. Blast Loading Experiments of Surrogate Models for Tbi Scenarios

    NASA Astrophysics Data System (ADS)

    Alley, M. D.; Son, S. F.

    2009-12-01

    This study aims to characterize the interaction of explosive blast waves through simulated anatomical models. We have developed physical models and a systematic approach for testing traumatic brain injury (TBI) mechanisms and occurrences. A simplified series of models consisting of spherical PMMA shells housing synthetic gelatins as brain simulants have been utilized. A series of experiments was conducted to compare the sensitivity of the system response to mechanical properties of the simulants under high strain-rate explosive blasts. Small explosive charges were directed at the models to produce a realistic blast wave in a scaled laboratory test cell setting. Blast profiles were measured and analyzed to compare system response severity. High-speed shadowgraph imaging captured blast wave interaction with the head model while particle tracking captured internal response for displacement and strain correlation. The results suggest amplification of shock waves inside the head near material interfaces due to impedance mismatches. In addition, significant relative displacement was observed between the interacting materials suggesting large strain values of nearly 5%. Further quantitative results were obtained through shadowgraph imaging of the blasts confirming a separation of time scales between blast interaction and bulk movement. These results lead to the conclusion that primary blast effects could cause TBI occurrences.

  13. Dissolution-precipitation processes in tank experiments for testing numerical models for reactive transport calculations: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Poonoosamy, Jenna; Kosakowski, Georg; Van Loon, Luc R.; Mäder, Urs

    2015-06-01

    In the context of testing reactive transport codes and their underlying conceptual models, a simple 2D reactive transport experiment was developed. The aim was to use simple chemistry and design a reproducible and fast-to-conduct experiment, which is flexible enough to include several process couplings: advective-diffusive transport of solutes, effect of liquid phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. A small tank was filled with a reactive layer of strontium sulfate (SrSO4) of two different grain sizes, sandwiched between two layers of essentially non-reacting quartz sand (SiO2). A highly concentrated solution of barium chloride was injected to create an asymmetric flow field. Once the barium chloride reached the reactive layer, it forced the transformation of strontium sulfate into barium sulfate (BaSO4). Due to the higher molar volume of barium sulfate, its precipitation caused a decrease of porosity and lowered the permeability. Changes in the flow field were observed with the help of dye tracer tests. The experiments were modelled using the reactive transport code OpenGeosys-GEM. Tests with non-reactive tracers performed prior to barium chloride injection, as well as the density-driven flow (due to the high concentration of barium chloride solution), could be well reproduced by the numerical model. To reproduce the mineral bulk transformation with time, two populations of strontium sulfate grains with different kinetic rates of dissolution were applied. However, a default porosity-permeability relationship was unable to account for measured pressure changes. Post mortem analysis of the strontium sulfate reactive medium provided useful information on the chemical and structural changes occurring at the pore scale at the interface that were considered in our model to reproduce the pressure evolution with time.
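    As a sketch of the porosity-permeability coupling at the heart of the experiment, the snippet below converts an assumed extent of SrSO4 → BaSO4 replacement into a porosity drop via the molar-volume difference and feeds it through a Kozeny-Carman type law (the "default" kind of relationship the paper found insufficient). The molar volumes are approximate handbook values and the conversion extent is illustrative.

      V_SRSO4 = 46.3e-6      # m³/mol, celestite (approximate)
      V_BASO4 = 52.1e-6      # m³/mol, barite (approximate)

      def porosity_after_replacement(phi0, mol_converted_per_m3):
          """Porosity drop when SrSO4 is replaced mole-for-mole by the bulkier BaSO4."""
          return phi0 - mol_converted_per_m3 * (V_BASO4 - V_SRSO4)

      def kozeny_carman_ratio(phi, phi0):
          """Permeability relative to its initial value for the same grain framework."""
          return (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2

      phi0 = 0.33                                                          # illustrative initial porosity
      phi = porosity_after_replacement(phi0, mol_converted_per_m3=5.0e3)   # illustrative extent of reaction
      print(phi, kozeny_carman_ratio(phi, phi0))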

  14. Dissolution-precipitation processes in tank experiments for testing numerical models for reactive transport calculations: Experiments and modelling.

    PubMed

    Poonoosamy, Jenna; Kosakowski, Georg; Van Loon, Luc R; Mäder, Urs

    2015-01-01

    In the context of testing reactive transport codes and their underlying conceptual models, a simple 2D reactive transport experiment was developed. The aim was to use simple chemistry and design a reproducible and fast-to-conduct experiment, which is flexible enough to include several process couplings: advective-diffusive transport of solutes, effect of liquid phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. A small tank was filled with a reactive layer of strontium sulfate (SrSO4) of two different grain sizes, sandwiched between two layers of essentially non-reacting quartz sand (SiO2). A highly concentrated solution of barium chloride was injected to create an asymmetric flow field. Once the barium chloride reached the reactive layer, it forced the transformation of strontium sulfate into barium sulfate (BaSO4). Due to the higher molar volume of barium sulfate, its precipitation caused a decrease of porosity and lowered the permeability. Changes in the flow field were observed with the help of dye tracer tests. The experiments were modelled using the reactive transport code OpenGeosys-GEM. Tests with non-reactive tracers performed prior to barium chloride injection, as well as the density-driven flow (due to the high concentration of barium chloride solution), could be well reproduced by the numerical model. To reproduce the mineral bulk transformation with time, two populations of strontium sulfate grains with different kinetic rates of dissolution were applied. However, a default porosity-permeability relationship was unable to account for measured pressure changes. Post mortem analysis of the strontium sulfate reactive medium provided useful information on the chemical and structural changes occurring at the pore scale at the interface that were considered in our model to reproduce the pressure evolution with time.

  15. First experience of vectorizing electromagnetic physics models for detector simulation

    SciTech Connect

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-23

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  16. Modelling hot electron generation in short pulse target heating experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.

    2013-11-01

    Target heating experiments planned for the Orion laser facility, and electron beam driven fast ignition schemes, rely on the interaction of a short pulse high intensity laser with dense material to generate a flux of energetic electrons. It is essential that the characteristics of this electron source are well known in order to inform transport models in radiation hydrodynamics codes and allow effective evaluation of experimental results and forward modelling of future campaigns. We present results obtained with the particle in cell (PIC) code EPOCH for realistic target and laser parameters, including first and second harmonic light. The hot electron distributions are characterised and their implications for onward transport and target heating are considered with the aid of the Monte-Carlo transport code THOR.

  17. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  18. Recent electric oxygen-iodine laser experiments and modeling

    NASA Astrophysics Data System (ADS)

    Carroll, David L.; Benavides, Gabriel F.; Zimmerman, Joseph W.; Woodard, Brian S.; Palla, Andrew D.; Day, Michael T.; Verdeyen, Joseph T.; Solomon, Wayne C.

    2011-03-01

    Experiments and modeling have led to a continuing evolution of the Electric Oxygen-Iodine Laser (ElectricOIL) system. A new concentric discharge geometry has led to improvements in O2(a) production and efficiency and permits higher pressure operation of the discharge at high flow rate. A new heat exchanger design reduces the O2(a) loss and thereby increases the O2(a) delivered into the gain region for a negligible change in flow temperature. These changes have led to an increase in laser cavity gain from 0.26% cm⁻¹ to 0.30% cm⁻¹. New modeling with BLAZE-V shows that an iodine pre-dissociator can have a dramatic impact upon gain and laser performance. As understanding of the ElectricOIL system continues to improve, the design of the laser systematically evolves.

  19. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
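    A minimal sketch of the two-state (stopped/moving) correlated random walk described above, with transition probabilities, speed and turning concentration that vary with nearest-neighbour distance. The functional forms and all constants are illustrative placeholders, not the values estimated from the tracking data.

      import numpy as np

      rng = np.random.default_rng(1)

      def step(pos, heading, moving, neighbour_dist):
          """One time step: isolated aphids move faster, turn less and stop less;
          crowded aphids slow down, turn more and stop more (illustrative forms)."""
          s = np.tanh(neighbour_dist / 5.0)        # 0 = crowded, 1 = isolated
          p_start, p_stop = 0.05 + 0.25 * s, 0.30 - 0.25 * s
          speed, kappa = 0.5 + 1.0 * s, 0.5 + 1.5 * s
          moving = rng.random() > p_stop if moving else rng.random() < p_start
          if moving:
              heading += rng.vonmises(0.0, kappa)  # correlated (persistent) turning
              pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
          return pos, heading, moving

      pos, heading, moving = np.zeros(2), 0.0, False
      for _ in range(500):
          pos, heading, moving = step(pos, heading, moving, neighbour_dist=8.0)
      print(pos)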

  20. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control. PMID:24376691

  1. Social Aggregation in Pea Aphids: Experiment and Random Walk Modeling

    PubMed Central

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J.; Topaz, Chad M.

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control. PMID:24376691

  2. Effects of Instructional Experience in Clay Modeling Skills on Modeled Human Figure Representation in Preschool Children.

    ERIC Educational Resources Information Center

    Grossman, Ellin

    1980-01-01

    After 15 lessons on clay modeling skills, modeled and drawn human figures by 41 nursery school children were compared to those by control children on formal elements, structure, and detail. The instructional experience significantly improved subjects' clay figure work. No significant sex effect or skill transfer to drawing was found. (SJL)

  3. Polysaccharide production by plant cells in suspension: experiments and mathematical modeling.

    PubMed

    Glicklis, R; Mills, D; Sitton, D; Stortelder, W; Merchuk, J C

    1998-03-20

    Symphytum officinale L cells were grown in Erlenmeyer flasks at four different temperatures: 15, 20, 25, and 30 degrees C. A mathematical model of the culture growth is presented. The intracellular and extracellular products are considered in separate equations. An interrelation between fresh weight, dry weight, and viability is considered in the balances. The model includes a description of the changes in time of wet and dry biomass, cell viability, substrate concentration and polysaccharide concentration, both intra- and extracellular. The model was tested by fitting the numerical results to the data obtained. PMID:10099252
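    A much-reduced sketch of this kind of suspension-culture balance is shown below: viable biomass, substrate and extracellular polysaccharide with Monod growth and Luedeking-Piret production. The paper's full model additionally tracks fresh weight, dry weight, viability and the intracellular product pool, and all rate constants here are illustrative.

      from scipy.integrate import solve_ivp

      mu_max, Ks, Yxs, alpha, beta, kd = 0.4, 5.0, 0.5, 0.1, 0.01, 0.02   # illustrative constants

      def rhs(t, y):
          X, S, P = y                                     # biomass, substrate, polysaccharide (g/L)
          mu = mu_max * max(S, 0.0) / (Ks + max(S, 0.0))  # Monod growth rate
          return [(mu - kd) * X,                          # growth minus death
                  -mu * X / Yxs,                          # substrate consumption
                  alpha * mu * X + beta * X]              # growth- and non-growth-associated production

      sol = solve_ivp(rhs, (0.0, 30.0), [0.5, 30.0, 0.0])
      print(sol.y[:, -1])    # biomass, residual substrate, polysaccharide after 30 days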

  4. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    DOE PAGES Beta

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; Schmidt, Hauke; Robock, Alan; Kravitz, Benjamin S.; Lamarque, J. F.; Pitari, G.; English, J. M.

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.

  5. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    SciTech Connect

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; Schmidt, Hauke; Robock, Alan; Kravitz, Benjamin S.; Lamarque, J. F.; Pitari, G.; English, J. M.

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.
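    The prescribed-burden time profile described above (2-year ramp-up from 2020, plateau near 2 Tg S, 2-year ramp-down from 2070, background afterwards) can be sketched as a simple piecewise-linear function; the background value used below is an assumption, not a number taken from the published data set.

      import numpy as np

      def g4ssa_burden(year, peak=2.0, background=0.1):
          """Piecewise-linear sketch of the stratospheric sulfur burden (Tg S)."""
          year = np.asarray(year, dtype=float)
          up = np.clip((year - 2020.0) / 2.0, 0.0, 1.0)     # ramp-up over 2020-2022
          down = np.clip((year - 2070.0) / 2.0, 0.0, 1.0)   # ramp-down over 2070-2072
          return background + (peak - background) * (up - down)

      print(g4ssa_burden([2019, 2021, 2045, 2071, 2080]))   # before, ramping up, plateau, ramping down, after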

  6. Update on PHELIX Pulsed-Power Hydrodynamics Experiments and Modeling

    NASA Astrophysics Data System (ADS)

    Rousculp, Christopher; Reass, William; Oro, David; Griego, Jeffery; Turchi, Peter; Reinovsky, Robert; Devolder, Barbara

    2013-10-01

    The PHELIX pulsed-power driver is a 300 kJ, portable, transformer-coupled, capacitor bank capable of delivering a 3-5 MA, 10 μs pulse into a low inductance load. Here we describe further testing and hydrodynamics experiments. First, a 4 nH static inductive load has been constructed. This allows for repetitive high-voltage, high-current testing of the system. Results are used in the calibration of simple circuit models and numerical simulations across a range of bank charges (+/-20 < V0 < +/-40 kV). Furthermore, a dynamic liner-on-target load experiment has been conducted to explore the shock-launched transport of particulates (diam. ~ 1 μm) from a surface. The trajectories of the particulates are diagnosed with radiography. Results are compared to 2D hydro-code simulations. Finally, initial studies are underway to assess the feasibility of using the PHELIX driver as an electromagnetic launcher for planar shock-physics experiments. Work supported by United States-DOE under contract DE-AC52-06NA25396.
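    A minimal series-RLC sketch of the kind of "simple circuit model" calibrated against the static-load shots is given below; the element values are illustrative only (chosen to give a few-MA, roughly 10 μs pulse comparable to the quoted numbers), not measured PHELIX parameters.

      import numpy as np
      from scipy.integrate import solve_ivp

      C, L, R, V0 = 375e-6, 100e-9, 1.0e-3, 40e3    # F, H, Ohm, V (illustrative)

      def rhs(t, y):
          q, i = y                                  # capacitor charge and circuit current
          return [-i, (q / C - R * i) / L]          # dq/dt = -i ; L di/dt = q/C - R i

      sol = solve_ivp(rhs, (0.0, 40e-6), [C * V0, 0.0], max_step=1e-8)
      k = np.argmax(sol.y[1])
      print(f"peak current {sol.y[1][k] / 1e6:.2f} MA at {sol.t[k] * 1e6:.1f} us")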

  7. Solute and heat transport model of the Henry and hilleke laboratory experiment.

    PubMed

    Langevin, Christian D; Dausman, Alyssa M; Sukop, Michael C

    2010-01-01

    SEAWAT is a coupled version of MODFLOW and MT3DMS designed to simulate variable-density ground water flow and solute transport. The most recent version of SEAWAT, called SEAWAT Version 4, includes new capabilities to represent simultaneous multispecies solute and heat transport. To test the new features in SEAWAT, the laboratory experiment of Henry and Hilleke (1972) was simulated. Henry and Hilleke used warm fresh water to recharge a large sand-filled glass tank. A cold salt water boundary was represented on one side. Adjustable heating pads were used to heat the bottom and left sides of the tank. In the laboratory experiment, Henry and Hilleke observed both salt water and fresh water flow systems separated by a narrow transition zone. After minor tuning of several input parameters with a parameter estimation program, results from the SEAWAT simulation show good agreement with the experiment. SEAWAT results suggest that heat loss to the room was more than expected by Henry and Hilleke, and that multiple thermal convection cells are the likely cause of the widened transition zone near the hot end of the tank. Other computer programs with similar capabilities may benefit from benchmark testing with the Henry and Hilleke laboratory experiment. PMID:19563419
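    The density coupling that drives such variable-density simulations is commonly written as a linear equation of state in salt concentration and temperature; a sketch with commonly quoted approximate slopes (not the values calibrated in the paper) is given below.

      RHO_F   = 1000.0    # freshwater reference density, kg/m³
      DRHO_DC = 0.7       # density change per unit salt concentration, (kg/m³)/(kg/m³) (approximate)
      DRHO_DT = -0.375    # density change per °C near room temperature, kg/m³ (approximate)

      def fluid_density(conc, temp, temp_ref=25.0):
          """Linear equation of state: rho = rho_f + drho/dC * C + drho/dT * (T - T_ref)."""
          return RHO_F + DRHO_DC * conc + DRHO_DT * (temp - temp_ref)

      print(fluid_density(conc=35.0, temp=40.0))   # warm salt water: about 1018.9 kg/m³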

  8. Solute and heat transport model of the Henry and Hilleke laboratory experiment

    USGS Publications Warehouse

    Langevin, C.D.; Dausman, A.M.; Sukop, M.C.

    2010-01-01

    SEAWAT is a coupled version of MODFLOW and MT3DMS designed to simulate variable-density ground water flow and solute transport. The most recent version of SEAWAT, called SEAWAT Version 4, includes new capabilities to represent simultaneous multispecies solute and heat transport. To test the new features in SEAWAT, the laboratory experiment of Henry and Hilleke (1972) was simulated. Henry and Hilleke used warm fresh water to recharge a large sand-filled glass tank. A cold salt water boundary was represented on one side. Adjustable heating pads were used to heat the bottom and left sides of the tank. In the laboratory experiment, Henry and Hilleke observed both salt water and fresh water flow systems separated by a narrow transition zone. After minor tuning of several input parameters with a parameter estimation program, results from the SEAWAT simulation show good agreement with the experiment. SEAWAT results suggest that heat loss to the room was more than expected by Henry and Hilleke, and that multiple thermal convection cells are the likely cause of the widened transition zone near the hot end of the tank. Other computer programs with similar capabilities may benefit from benchmark testing with the Henry and Hilleke laboratory experiment.

  9. Development of the Play Experience Model to Enhance Desirable Qualifications of Early Childhood

    ERIC Educational Resources Information Center

    Panpum, Watchara; Soonthornrojana, Wimonrat; Nakunsong, Thatsanee

    2015-01-01

    The objectives of this research were to develop the play experience model and to study the effect of using the play experience model to enhance early childhood desirable qualifications. There were three phases of research: 1) documents and context in experience management were studied, 2) the play experience model was developed, and 3) the…

  10. Modeling and Analysis of AGS (1998) Thermal Shock Experiments

    SciTech Connect

    Haines, J.R.; Kim, S.H.; Taleyarkhan, R.P.

    1999-11-14

    An overview is provided on modeling and analysis of thermal shock experiments conducted during 1998 with high-energy, short-pulse energy deposition in a mercury-filled container in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). The simulation framework utilized, along with the results of simulations for pressure and strain profiles, is presented. While the magnitudes of peak strain predictions versus data are in reasonable agreement, the temporal variations were found to differ significantly in selected cases, indicating either a lack of modeling of certain physical phenomena or uncertainties in the experimental data gathering techniques. Key thermal-shock related issues and uncertainties are highlighted. Specific experiments conducted at BNL's AGS facility during 1998 (the subject of this paper) involved high-energy (24 GeV) proton energy deposition in the mercury target over a time frame of ~0.1 s. The target consisted of an ~1 m long cylindrical stainless steel shell with a hemispherical dome at the leading edge. It was filled with mercury at room temperature and pressure. Several optical strain gages were attached to the surface of the steel target. Figure 1 shows a schematic representation of the test vessel along with the main dimensions and positions of three optical strain gages at which meaningful data were obtained. As

  11. Lattice Boltzmann modeling of directional wetting: comparing simulations to experiments.

    PubMed

    Jansen, H Patrick; Sotthewes, Kai; van Swigchem, Jeroen; Zandvliet, Harold J W; Kooij, E Stefan

    2013-07-01

    Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results, which have shown that water droplets on such surfaces adopt an elongated shape due to anisotropic preferential spreading. Details of the contact line motion, such as advancing of the contact line in the direction perpendicular to the stripes, exhibit pronounced similarities in experiments and simulations. The opposite of spreading, i.e., evaporation of water droplets, leads to a characteristic receding motion, first in the direction parallel to the stripes, while the contact line remains pinned perpendicular to the stripes. Only when the aspect ratio is close to unity does the contact line also start to recede in the perpendicular direction. Very similar behavior was observed in the LBM simulations. Finally, droplet movement can be induced by a gradient in surface wettability. LBM simulations show good semiquantitative agreement with experimental results of decanol droplets on a well-defined striped gradient, which move from high- to low-contact angle surfaces. Similarities and differences for all systems are described and discussed in terms of the predictive capabilities of LBM simulations to model directional wetting. PMID:23944550

  12. Modeling of Carbon Migration During JET Injection Experiments

    SciTech Connect

    Strachan, J. D.; Likonen, J.; Coad, P.; Rubel, M.; Widdowson, A.; Airila, M.; Andrew, P.; Brezinsek, S.; Corrigan, G.; Esser, H. G.; Jachmich, S.; Kallenbach, A.; Kirschner, A.; Kreter, A.; Matthews, G. F.; Philipps, V.; Pitts, R. A.; Spence, J.; Stamp, M.; Wiesen, S.

    2008-10-15

    JET has performed two dedicated carbon migration experiments on the final run day of separate campaigns (2001 and 2004) using ¹³CH₄ methane injected into repeated discharges. The EDGE2D/NIMBUS code modelled the carbon migration in both experiments. This paper describes this modelling and identifies a number of important migration pathways: (1) deposition and erosion near the injection location, (2) migration through the main chamber SOL, (3) migration through the private flux region aided by E x B drifts, and (4) neutral migration originating near the strike points. In H-Mode, type I ELMs are calculated to influence the migration by enhancing erosion during the ELM peak and increasing the long-range migration immediately following the ELM. The erosion/re-deposition cycle along the outer target leads to a multistep migration of ¹³C towards the separatrix which is called 'walking'. This walking created carbon neutrals at the outer strike point and led to ¹³C deposition in the private flux region. Although several migration pathways have been identified, quantitative analyses are hindered by experimental uncertainty in divertor leakage, and the lack of measurements at locations such as gaps and shadowed regions.

  13. Computer modeling of a three-dimensional steam injection experiment

    SciTech Connect

    Joshi, S.; Castanier, L.M.

    1993-08-01

    The experimental results and CT scans obtained during a steam-flooding experiment with the SUPRI 3-D steam injection laboratory model are compared with the results obtained from a numerical simulator for the same experiment. Simulation studies were carried out using the STARS (Steam and Additives Reservoir Simulator) compositional simulator. The saturation and temperature distributions obtained and heat-loss rates measured in the experimental model at different stages of steam-flooding were compared with those calculated from the numerical simulator. There is a fairly good agreement between the experimental results and the simulator output. However, the experimental scans show a greater degree of gravity override than that obtained with the simulator for the same heat-loss rates. Symmetric sides of the experimental 5-spot show asymmetric heat-loss rates contrary to theory and simulator results. Some utility programs have been written for extracting, processing and outputting the required grid data from the STARS simulator. These are general in nature and can be useful for other STARS users.

  14. Space Weathering of Olivine: Samples, Experiments and Modeling

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Berger, E. L.; Christoffersen, R.

    2016-01-01

    Olivine is a major constituent of chondritic bodies and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g. S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10^7 y). The olivine damaged rim width rapidly approaches approximately 120 nm in approximately 10^6 y and then reaches steady state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models, which indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10^3 y). These results suggest that it is not the ion fluence alone, but another variable, the ion flux, that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high-flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.

  15. Blast Loading Experiments of Developed Surrogate Models for TBI Scenarios

    NASA Astrophysics Data System (ADS)

    Alley, Matthew; Son, Steven

    2009-06-01

    This study aims to characterize the interaction of explosive blast waves through simulated anatomical systems. We have developed physical models and a systematic approach for testing traumatic brain injury (TBI) mechanisms and occurrences. A simplified series of models consisting of spherical PMMA shells followed by SLA prototyped skulls housing synthetic gelatins as brain simulants have been utilized. A series of experiments was conducted with the simple geometries to compare the sensitivity of the system response to mechanical properties of the simulants under high strain-rate explosive blasts. Small explosive charges were directed at the models to produce a realistic blast wave in a scaled laboratory setting. Blast profiles were measured and analyzed to compare system response severity. High-speed shadowgraph imaging captured blast wave interaction with the head model while particle tracking captured internal response for displacement and strain correlation. The results suggest amplification of shock waves inside the head due to impedance mismatches. Results from the strain correlations added to the theory of internal shearing between tissues.

  16. Finite Element Modelling of the Apollo Heat Flow Experiments

    NASA Astrophysics Data System (ADS)

    Platt, J.; Siegler, M. A.; Williams, J.

    2013-12-01

    The heat flow experiments sent on Apollo missions 15 and 17 were designed to measure the temperature gradient of the lunar regolith in order to determine the heat flux of the Moon. Major problems in these experiments arose from the fact that the astronauts were not able to insert the probes below the thermal skin depth. Compounding the problem, anomalies in the data have prevented scientists from conclusively determining the temperature-dependent conductivity of the soil, which enters as a linear function into the heat flow calculation, thus stymieing them in their primary goal of constraining the global heat production of the Moon. Different methods of determining the thermal conductivity have yielded vastly different results, leading to downward corrections of up to 50% in some cases from the original calculations. Along with problems determining the conductivity, the data were inconsistent with theoretical predictions of the temperature variation over time, leading some to suspect that the Apollo experiment itself changed the thermal properties of the localised area surrounding the probe. The average temperature of the regolith, according to the data, increased over time, a phenomenon that makes calculating the thermal conductivity of the soil and heat flux impossible without knowing the source of error and accounting for it. The changes, possibly resulting from sources as varied as the imprint of the astronauts' boots on the lunar surface, compacted soil around the bore stem of the probe, or even heat radiating down the inside of the tube, have convinced many people that the recorded data are unusable. In order to shed some light on the possible causes of this temperature rise, we implemented a finite element model of the probe using the program COMSOL Multiphysics as well as MATLAB. Once the cause of the temperature rise is known, steps can be taken to account for the failings of the experiment and increase the data's utility.
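
    A minimal sketch of the heat-flux calculation these probes were intended to support, q = -k(T) dT/dz, with an assumed regolith-style conductivity that includes a radiative T^3 term; the sensor depths, temperatures, and coefficients are hypothetical placeholders, not Apollo values.

        import numpy as np

        def conductivity(T, k_c=1.0e-3, chi=1.5):
            """Regolith-style conductivity with a radiative T^3 contribution [W/m/K] (assumed form)."""
            return k_c * (1.0 + chi * (T / 350.0) ** 3)

        def heat_flux(z, T):
            """Finite-difference conductive heat flux q = -k(T) dT/dz between probe sensors."""
            dTdz = np.gradient(T, z)          # z in metres, measured downward
            return -conductivity(T) * dTdz

        z = np.array([1.0, 1.3, 1.6, 2.3])         # sensor depths [m] (hypothetical)
        T = np.array([252.0, 252.4, 252.9, 254.0]) # temperatures [K] (hypothetical)
        print(heat_flux(z, T))   # negative values: heat conducted toward the surface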

  17. Modeling experiments that simulate fragment attacks on cased munitions

    SciTech Connect

    Kerrisk, J.F.

    1996-01-01

    Roberts and Field (1993) have conducted experiments to observe the behavior of a cased high explosive (HE) charge subject to fragment attack at impact velocities below those needed for shock initiation. Two and three-dimensional hydrodynamic calculations have been done to model these experiments. Questions about the degree of confinement of the HE and about the condition of the HE during the impact were addressed. The calculations indicate that the HE was not strongly confined in this experiment, primarily due to the lateral expansion of polycarbonate blocks on the sides of the target during the impact. HE was not ejected from the hole in the casing made by the projectile up to 30 μs after the impact. There are hints from these calculations of how initiation of a homogeneous sample of HE might occur in the experiment. The first involves the reshock of a small amount of HE at ~20 μs as a result of the impact of the sabot on the target. The second involves the heating of the HE from plastic work during the impact. The maximum temperature rise of the HE (exclusive of the small region that was reshocked) was ~80 K. However, this is the average temperature of a region the size of a computational cell, and phenomena such as shear bands or cracks could result in higher temperatures on a smaller scale than the cell size. The third involves heating of the HE from contact with the casing material. The maximum temperature rise of the casing material from plastic work is ~870 K. This temperature occurs at the edge of a plug of casing material sheared off by the projectile. Other parts of the casing are shock heated to higher energies but may not contact the HE.

  18. Real-data Calibration Experiments On A Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Brath, A.; Montanari, A.; Toth, E.

    The increasing availability of extended information on the study watersheds does not generally overcome the need for the determination through calibration of at least a part of the parameters of distributed hydrologic models. The complexity of such models, making the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied to the real data of a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by the calibration of the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream-gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using the data from several single flood events and from longer, continuous periods. Another analysis regards the influence of rainfall input and is carried out varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison of the hydrographs obtained for the flood events with the model parameterisations resulting when modifying the objective function to be minimised in the automatic calibration procedure is presented.
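
    A minimal sketch of two objective functions of the kind minimised in such automatic calibration runs (sum of squared errors and 1 - Nash-Sutcliffe efficiency); the hydrologic model and the shuffled complex evolution optimiser themselves are not reproduced here.

        import numpy as np

        def sum_squared_errors(observed, simulated):
            """Classic least-squares objective on discharge series."""
            obs, sim = np.asarray(observed), np.asarray(simulated)
            return np.sum((obs - sim) ** 2)

        def one_minus_nash_sutcliffe(observed, simulated):
            """1 - NSE, so a perfect fit gives 0 and the value can be minimised directly."""
            obs, sim = np.asarray(observed), np.asarray(simulated)
            return np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # A calibration loop would repeatedly evaluate one of these functions on the
        # discharge observed at the closure section while the optimiser proposes new
        # parameter sets for the distributed model.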

  19. Innovative Fresh Water Production Process for Fossil Fuel Plants

    SciTech Connect

    James F. Klausner; Renwei Mei; Yi Li; Jessica Knight

    2006-09-29

    This project concerns a diffusion driven desalination (DDD) process where warm water is evaporated into a low humidity air stream, and the vapor is condensed out to produce distilled water. Although the process has a low fresh water to feed water conversion efficiency, it has been demonstrated that this process can potentially produce low cost distilled water when driven by low grade waste heat. This report summarizes the progress made in the development and analysis of a Diffusion Driven Desalination (DDD) system. Detailed heat and mass transfer analyses required to size and analyze the diffusion tower using a heated water input are described. The analyses agree quite well with the current data and the information available in the literature. The direct contact condenser has also been thoroughly analyzed and the system performance at optimal operating conditions has been considered using a heated water/ambient air input to the diffusion tower. The diffusion tower has also been analyzed using a heated air input. The DDD laboratory facility has successfully been modified to include an air heating section. Experiments have been conducted over a range of parameters for two different cases: heated air/heated water and heated air/ambient water. A theoretical heat and mass transfer model has been examined for both of these cases and agreement between the experimental and theoretical data is good. A parametric study reveals that for every liquid mass flux there is an air mass flux value where the diffusion tower energy consumption is minimal and an air mass flux where the fresh water production flux is maximized. A study was also performed to compare the DDD process with different inlet operating conditions as well as different packing. It is shown that the heated air/heated water case is more capable of greater fresh water production with the same energy consumption than the ambient air/heated water process at high liquid mass flux. It is also shown that there can be

  20. Toward an Improved Understanding of the Global Fresh Water Budget

    NASA Technical Reports Server (NTRS)

    Hildebrand, Peter H.

    2005-01-01

    The major components of the global fresh water cycle include the evaporation from the land and ocean surfaces, precipitation onto the ocean and land surfaces, the net atmospheric transport of water from oceanic areas over land, and the return flow of water from the land back into the ocean. The additional components of oceanic water transport are few: principally, the mixing of fresh water through the oceanic boundary layer, transport by ocean currents, and sea ice processes. On land the situation is considerably more complex, and includes the deposition of rain and snow on land; water flow in runoff; infiltration of water into the soil and groundwater; storage of water in soil, lakes and streams, and groundwater; polar and glacial ice; and use of water in vegetation and human activities. Knowledge of the key terms in the fresh water flux budget is poor. Some components of the budget, e.g. precipitation, runoff, storage, are measured with variable accuracy across the globe. We are just now obtaining precise measurements of the major components of global fresh water storage in global ice and ground water. The easily accessible fresh water sources in rivers, lakes and snow runoff are only adequately measured in the more affluent portions of the world. Present proposals suggest methods of making global measurements of these quantities from space. At the same time, knowledge of the global fresh water resources under the effects of climate change is of increasing importance as the human population grows. This paper provides an overview of the state of knowledge of the global fresh water budget, evaluating the accuracy of various global water budget measuring and modeling techniques. We review the measurement capabilities of satellite instruments as compared with field validation studies and modeling approaches. Based on these analyses, and on the goal of improved knowledge of the global fresh water budget under the effects of climate change, we suggest

  1. Synergy of fresh and accumulated organic matter to bacterial growth.

    PubMed

    Farjalla, Vinicius F; Marinho, Claudio C; Faria, Bias M; Amado, André M; Esteves, Francisco de A; Bozelli, Reinaldo L; Giroldo, Danilo

    2009-05-01

    The main goal of this research was to evaluate whether the mixture of fresh labile dissolved organic matter (DOM) and accumulated refractory DOM influences bacterial production, respiration, and growth efficiency (BGE) in aquatic ecosystems. Bacterial batch cultures were set up using DOM leached from aquatic macrophytes as the fresh DOM pool and DOM accumulated from a tropical humic lagoon. Two sets of experiments were performed and bacterial growth was followed in cultures composed of each carbon substrate (first experiment) and by carbon substrates combined (second experiment), with and without the addition of nitrogen and phosphorus. In both experiments, bacterial production, respiration, and BGE were always higher in cultures with N and P additions, indicating a consistent inorganic nutrient limitation. Bacterial production, respiration, and BGE were higher in cultures set up with leachate DOM than in cultures set up with humic DOM, indicating that the quality of the organic matter pool influenced the bacterial growth. Bacterial production and respiration were higher in the mixture of substrates (second experiment) than expected from bacterial production and respiration in single-substrate cultures (first experiment). We suggest that the differences in the concentration of some compounds between DOM sources, co-metabolism in carbon compound decomposition, and the higher diversity of molecules possibly support a greater bacterial diversity, which might explain the higher bacterial growth observed. Finally, our results indicate that the mixture of fresh labile and accumulated refractory DOM that naturally occurs in aquatic ecosystems could accelerate bacterial growth and bacterial DOM removal. PMID:18985269
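
    A minimal sketch of the quantity at the centre of these comparisons: bacterial growth efficiency computed from bacterial production (BP) and bacterial respiration (BR); the example values are hypothetical, not measurements from the cultures described above.

        def bacterial_growth_efficiency(production, respiration):
            """BGE = BP / (BP + BR); any consistent carbon units (e.g. ugC/L/h) work."""
            return production / (production + respiration)

        # Hypothetical leachate-DOM culture vs. humic-DOM culture
        print(bacterial_growth_efficiency(2.0, 3.0))   # 0.40
        print(bacterial_growth_efficiency(0.5, 2.5))   # ~0.17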

  2. Microcosm Experiments and Modeling of Microbial Movement Under Unsaturated Conditions

    SciTech Connect

    Brockman, F.J.; Kapadia, N.; Williams, G.; Rockhold, M.

    2006-04-05

    Colonization of bacteria in porous media has been studied primarily in saturated systems. In this study we examine how microbial colonization in unsaturated porous media is controlled by water content and particle size. This is important for understanding the feasibility and success of bioremediation via nutrient delivery when contaminant degraders are at low densities and when total microbial populations are sparse and spatially discontinuous. The study design used 4 different sand sizes, each at 4 different water contents; experiments were run with and without acetate as the sole carbon source. All experiments were run in duplicate columns and used the motile organism Pseudomonas stutzeri strain KC, a carbon tetrachloride degrader. At a given sand size, bacteria traveled further with increasing volumetric water content. At a given volumetric water content, bacteria generally traveled further with increasing sand size. Water redistribution, solute transport, gas diffusion, and bacterial colonization dynamics were simulated using a numerical finite-difference model. Solute and bacterial transport were modeled using advection-dispersion equations, with reaction rate source/sink terms to account for bacterial growth and substrate utilization, represented using dual Monod-type kinetics. Oxygen transport and diffusion was modeled accounting for equilibrium partitioning between the aqueous and gas phases. The movement of bacteria in the aqueous phase was modeled using a linear impedance model in which the term D_m is a coefficient, as used by Barton and Ford (1995), representing random motility. The unsaturated random motility coefficients we obtained (1.4 x 10^-6 to 2.8 x 10^-5 cm^2/sec) are in the same range as those found by others for saturated systems (3.5 x 10^-6 to 3.5 x 10^-5 cm^2/sec). The results show that some bacteria can rapidly migrate in well sorted unsaturated sands (and perhaps in relatively high porosity, poorly
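
    A minimal sketch of the reaction and motility terms described above: dual Monod-type kinetics for growth on a substrate and oxygen, and a Fickian random-motility flux with coefficient D_m; the parameter values are placeholders chosen within the quoted range, not the fitted values from the study.

        import numpy as np

        MU_MAX = 0.2      # maximum specific growth rate [1/h] (placeholder)
        K_S    = 1.0      # half-saturation constant for acetate [mg/L] (placeholder)
        K_O    = 0.1      # half-saturation constant for oxygen [mg/L] (placeholder)
        D_M    = 1.4e-6   # random motility coefficient [cm^2/s], low end of the quoted range

        def growth_rate(acetate, oxygen):
            """Dual Monod kinetics: either substrate or electron acceptor can limit growth."""
            return MU_MAX * acetate / (K_S + acetate) * oxygen / (K_O + oxygen)

        def motility_flux(cells, dx):
            """Random-motility (Fickian) flux -D_m * db/dx along a 1-D column."""
            return -D_M * np.gradient(cells, dx)

        print(growth_rate(5.0, 8.0))   # growth rate when neither acetate nor oxygen is scarce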

  3. Evaporation over fresh and saline water surfaces

    NASA Astrophysics Data System (ADS)

    Abdelrady, Ahmed; Timmermans, Joris; Vekerdy, Zoltan

    2013-04-01

    Evaporation over large water bodies has a crucial role in the global hydrological cycle. Evaporation occurs whenever there is a vapor pressure deficit between a water surface and the atmosphere, and the available energy is sufficient. Salinity affects the density and latent heat of vaporization of the water body, which affects the evaporation rate. Different models have been developed to estimate the evaporation process over water surfaces using earth observation data. Most of these models are concerned with the atmospheric parameters. However, these models do not take into account the influence of salinity on the evaporation rate; they do not consider the difference in the energy needed for vaporization. For this purpose an energy balance model is required. Several energy balance models that calculate daily evapotranspiration exist, such as the Surface Energy Balance System (SEBS). They estimate the heat fluxes by integration of satellite data and hydro-meteorological field data. SEBS has the advantage that it can be applied over a large scale because it incorporates the physical state of the surface and the aerodynamic resistances in the daily evapotranspiration estimation. Nevertheless, this model has not been used over water surfaces. The goal of this research is to adapt SEBS to estimate the daily evaporation over fresh and saline water bodies. In particular, 1) water heat flux and roughness of momentum and heat transfer estimation need to be updated, 2) upscaling to daily evaporation needs to be investigated, and finally 3) integration of the salinity factor to estimate the evaporation over saline water needs to be performed. Eddy covariance measurements over the Ijsselmeer Lake (The Netherlands) were used to estimate the roughness lengths for momentum and heat transfer at 0.0002 and 0.0001 m, respectively. Application of these values over Tana Lake (fresh water) in Ethiopia showed latent heat to be in good agreement with the measurements, with an RMSE of 35.5 W m-2 and r

  4. Model Evaluation and Hindcasting: An Experiment with an Integrated Assessment Model

    SciTech Connect

    Chaturvedi, Vaibhav; Kim, Son H.; Smith, Steven J.; Clarke, Leon E.; Zhou, Yuyu; Kyle, G. Page; Patel, Pralit L.

    2013-11-01

    Integrated assessment models have been extensively used for analyzing long term energy and greenhouse emissions trajectories and have influenced key policies on this subject. Though admittedly these models are focused on the long term trajectories, how well these models are able to capture historical dynamics is an open question. In a first experiment of its kind, we present a framework for evaluation of such integrated assessment models. We use the Global Change Assessment Model (GCAM) for this zero-order experiment, and focus on the building sector results for the USA. We calibrate the model for 1990 and run it forward up to 2095 in five-year time steps. This gives us results for 1995, 2000, 2005 and 2010, which we compare to observed historical data at both the fuel level and the service level. We focus on bringing out the key insights for the wider process of model evaluation through our experiment with GCAM. We begin by highlighting that creation of an evaluation dataset and identification of key evaluation metrics are the foremost challenges in the evaluation process. Our analysis highlights that estimation of the functional form of the relationship between energy service demand, which is an unobserved variable, and its drivers is a significant challenge in the absence of adequate historical data for both the dependent and driver variables. Historical data availability for key metrics is a serious limiting factor in the process of evaluation. Interestingly, the service level data against which such models need to be evaluated are themselves a result of models. Thus for energy services, the best we can do is compare our model results with other model results rather than observed and measured data. We show that long term models, by the nature of their construction, will most likely underestimate the rapid growth in some services observed in a short time span. Also, we learn that modeling saturated energy services like space heating is easier than modeling unsaturated services like space cooling

  5. Attributing Sources of Variability in Regional Climate Model Experiments

    NASA Astrophysics Data System (ADS)

    Kaufman, C. G.; Sain, S. R.

    2008-12-01

    Variability in regional climate model (RCM) projections may be due to a number of factors, including the choice of RCM itself, the boundary conditions provided by a driving general circulation model (GCM), and the choice of emission scenario. We describe a new statistical methodology, Gaussian Process ANOVA, which allows us to decompose these sources of variability while also taking account of correlations in the output across space. Our hierarchical Bayesian framework easily allows joint inference about high probability envelopes for the functions, as well as decompositions of total variance that vary over the domain of the functions. These may be used to create maps illustrating the magnitude of each source of variability across the domain of the regional model. We use this method to analyze temperature and precipitation data from the Prudence Project, an RCM intercomparison project in which RCMs were crossed with GCM forcings and scenarios in a designed experiment. This work was funded by the North American Regional Climate Change Assessment Program (NARCCAP).

  6. Theory and Modeling of the Plasma Liner Experiment (PLX)

    NASA Astrophysics Data System (ADS)

    Cassibry, J. T.; Stanic, M. D.; Awe, T. J.; Hanna, D. S.; Davis, J. S.; Hsu, S. C.; Witherspoon, F. D.

    2010-11-01

    High pressures and temperatures may be generated at the center of an imploding plasma liner. These phenomena are being studied on the Plasma Liner Experiment (PLX), in which a spherical liner is formed via the merging of plasma jets. The basic physical processes include pulsed plasma acceleration, plasma jet propagation in a vacuum, plasma jet merging, liner formation, liner implosion, stagnation, and rarefaction. Each of these processes is dominated by different physics, requiring different models. For example, λei at the jet merging radius may be ~1 cm, so that liner formation is partially collisionless, while liner implosion is collision dominated. Further, the liner transitions from optically thin to gray during the implosion. An overview of the theory and modeling plan in support of PLX will be given, which includes 1D rad-hydro, 3D hydro, 3D MHD, 2D PIC, and 2D hybrid codes. We will emphasize our recent 3D hydro modeling, which provides insights into liner formation, implosion, and effects of initial jet parameters on scaling of peak pressure.

  7. Atmospheric Climate Model Experiments Performed at Multiple Horizontal Resolutions

    SciTech Connect

    Phillips, T; Bala, G; Gleckler, P; Lobell, D; Mirin, A; Maxwell, R; Rotman, D

    2007-12-21

    This report documents salient features of version 3.3 of the Community Atmosphere Model (CAM3.3) and of three climate simulations in which the resolution of its latitude-longitude grid was systematically increased. For all these simulations of global atmospheric climate during the period 1980-1999, observed monthly ocean surface temperatures and sea ice extents were prescribed according to standard Atmospheric Model Intercomparison Project (AMIP) values. These CAM3.3 resolution experiments served as control runs for subsequent simulations of the climatic effects of agricultural irrigation, the focus of a Laboratory Directed Research and Development (LDRD) project. The CAM3.3 model was able to replicate basic features of the historical climate, although biases in a number of atmospheric variables were evident. Increasing horizontal resolution also generally failed to ameliorate the large-scale errors in most of the climate variables that could be compared with observations. A notable exception was the simulation of precipitation, which incrementally improved with increasing resolution, especially in regions where orography plays a central role in determining the local hydroclimate.

  8. Influence of fresh date palm co-products on the ripening of a paprika added dry-cured sausage model system.

    PubMed

    Martín-Sánchez, Ana María; Ciro-Gómez, Gelmy; Vilella-Esplá, José; Ben-Abda, Jamel; Pérez-Álvarez, José Ángel; Sayas-Barberá, Estrella

    2014-06-01

    Date palm co-products are a source of bioactive compounds that could be used as a new ingredient for the meat industry. An intermediate food product (IFP) from date palm co-products (5%) was incorporated into a paprika added dry-cured sausage (PADS) model system and was analysed for physicochemical parameters, lipid oxidation and sensory attributes during ripening. Addition of 5% IFP yielded a product with physicochemical properties similar to the traditional one. Instrumental colour differences were found, but were not detected visually by panellists, who also evaluated positively the sensory properties of the PADS with IFP. Therefore, the IFP from date palm co-products could be used as a natural ingredient in the formulation of PADS.

  9. A fresh look at dense hydrogen under pressure. IV. Two structural models on the road from paired to monatomic hydrogen, via a possible non-crystalline phase.

    PubMed

    Labet, Vanessa; Hoffmann, Roald; Ashcroft, N W

    2012-02-21

    In this paper, we examine the transition from a molecular to monatomic solid in hydrogen over a wide pressure range. This is achieved by setting up two models in which a single parameter δ allows the evolution from a molecular structure to a monatomic one of high coordination. Both models are based on a cubic Bravais lattice with eight atoms in the unit cell; one belongs to space group Pa3, the other to space group R3m. In Pa3 one moves from effective 1-coordination, a molecule, to a simple cubic 6-coordinated structure but through a very special point (the golden mean is involved) of 7-coordination. In R3m, the evolution is from 1 to 4 and then to 3 to 6-coordinate. If one studies the enthalpy as a function of pressure as these two structures evolve (δ increases), one sees the expected stabilization of minima with increased coordination (moving from 1 to 6 to 7 in the Pa3 structure, for instance). Interestingly, at some specific pressures, there are in both structures relatively large regions of phase space where the enthalpy remains roughly the same. Although the structures studied are always higher in enthalpy than the computationally best structures for solid hydrogen - those emerging from the Pickard and Needs or McMahon and Ceperley numerical laboratories - this result is suggestive of the possibility of a microscopically non-crystalline or "soft" phase of hydrogen at elevated pressures, one in which there is a substantial range of roughly equi-enthalpic geometries available to the system. A scaling argument for potential dynamic stabilization of such a phase is presented.

  10. Modeling of Alcator C-Mod Divertor Baffling Experiments

    SciTech Connect

    D. P. Stotler; C. S. Pitcher; C. J. Boswell; T. K. Chung; B. LaBombard; B. Lipschultz; J. L. Terry; R. J. Kanzleiter

    2000-11-29

    A specific Alcator C-Mod discharge from the series of divertor baffling experiments is simulated with the DEGAS 2 Monte Carlo neutral transport code. A simple two-point plasma model is used to describe the plasma variation between Langmuir probe locations. A range of conductances for the bypass between the divertor plenum and the main chamber are considered. The experimentally observed insensitivity of the neutral current flowing through the bypass and of the D alpha emissions to the magnitude of the conductance is reproduced. The current of atoms in this regime is being limited by atomic physics processes and not the bypass conductance. The simulated trends in the divertor pressure, bypass current, and D alpha emission agree only qualitatively with the experimental measurements, however. Possible explanations for the quantitative differences are discussed.

  11. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    NASA Technical Reports Server (NTRS)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
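
    A minimal sketch of the superposition idea behind the Multi-Sphere Method as described above: the charged needle is represented by a small set of point charges and the net Coulomb force on a droplet is summed over them; the geometry and charge values are illustrative assumptions, not the parameters used in the paper.

        import numpy as np

        K_E = 8.9875e9   # Coulomb constant [N m^2 / C^2]

        def coulomb_force(r_droplet, q_droplet, sphere_positions, sphere_charges):
            """Net electrostatic force [N] on a droplet from a set of point charges."""
            force = np.zeros(3)
            for r_i, q_i in zip(sphere_positions, sphere_charges):
                d = r_droplet - r_i
                force += K_E * q_droplet * q_i * d / np.linalg.norm(d) ** 3
            return force

        # Needle modeled as five charges along the x-axis (hypothetical values)
        needle_pos = [np.array([x, 0.0, 0.0]) for x in np.linspace(-0.1, 0.1, 5)]
        needle_q   = [2.0e-9] * 5                   # coulombs per sphere
        droplet    = np.array([0.0, 0.02, 0.0])     # droplet 2 cm off the needle axis
        print(coulomb_force(droplet, -1.0e-11, needle_pos, needle_q))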

  12. Model Experiment on the Generation of the Korotkoff Sounds

    NASA Astrophysics Data System (ADS)

    Arakawa, Mieko

    1980-07-01

    The mechanism of the genesis of the Korotkoff sounds is investigated by a model experiment using latex tubes and a segment of isolated blood vessel from dogs. The tube compliance is much reduced when the transmural (internal minus external) pressure is very high or very low, and increases radically for intermediate states. It can be concluded that there are two different mechanisms by which the sounds are emitted. One is durable sounds caused by self-excited vibration in the process of collapse of the tube, and the other is impulsive sounds emitted transiently when the tube compliance is abruptly decreased. It is possible to explain qualitatively the change of sound phases over a wide range from systole to diastole as an appropriate mixture of durable and impulsive sounds.

  13. Three-dimensional antenna models for fusion experiments

    NASA Astrophysics Data System (ADS)

    Carter, M. D.; Wang, C. Y.; Hogan, J. T.; Harris, J. H.; Hoffman, D. J.; Rasmussen, D. A.; Ryan, P. M.; Stallings, D. S.; Batchelor, D. B.; Beaumont, B.; Hutter, T.; Saoutic, B.

    1996-02-01

    The development of the RANT3D code has permitted the systematic study of the effect of three-dimensional structures on the launched power spectrum for antennas in the ion cyclotron range of frequencies. The code allows the septa between current straps to be modeled with arbitrary heights and permits the antenna to interact with other structures in the tokamak. In this paper we present comparisons of calculated loading with the Tokamak Fusion Test Reactor and Tore Supra experiments, demonstrate the effects on loading caused by positioning uncertainties for an antenna in Tore Supra, and show electric field patterns near the Tore Supra antenna. A poloidal component in the static magnetic field for the plasma response is included in the near-field calculations using the warm plasma code, GLOSI. Preliminary estimates for the heat flux on the bumper limiters during typical operation in Tore Supra are also presented.

  14. Development of Supersonic Combustion Experiments for CFD Modeling

    NASA Technical Reports Server (NTRS)

    Baurle, Robert; Bivolaru, Daniel; Tedder, Sarah; Danehy, Paul M.; Cutler, Andrew D.; Magnotti, Gaetano

    2007-01-01

    This paper describes the development of an experiment to acquire data for developing and validating computational fluid dynamics (CFD) models for turbulence in supersonic combusting flows. The intent is that the flow field would be simple yet relevant to flows within hypersonic air-breathing engine combustors undergoing testing in vitiated-air ground-testing facilities. Specifically, it describes development of laboratory-scale hardware to produce a supersonic combusting coaxial jet, discusses design calculations, operability and types of flames observed. These flames are studied using the dual-pump coherent anti-Stokes Raman spectroscopy (CARS) - interferometric Rayleigh scattering (IRS) technique. This technique simultaneously and instantaneously measures temperature, composition, and velocity in the flow, from which many of the important turbulence statistics can be found. Some preliminary CARS data are presented.

  15. Numerically Modeling Pulsed-Current, Kinked Wire Experiments

    NASA Astrophysics Data System (ADS)

    Filbey, Gordon; Kingman, Pat

    1999-06-01

    The U.S. Army Research Laboratory (ARL) has embarked on a program to provide far-term land fighting vehicles with electromagnetic armor protection. Part of this work seeks to establish robust simulations of magneto-solid-mechanics phenomena. Whether describing violent rupture of a fuse link resulting from a large current pulse or the complete disruption of a copper shaped-charge jet subjected to high current densities, the simulations must include effects of intense Lorentz body forces and rapid Ohmic heating. Material models are required that describe plasticity, flow and fracture, conductivity, and equation of state (EOS) parameters for media in solid, liquid, and vapor phases. An extended version of the Eulerian wave code CTH has been used to predict the apex motion of a V-shaped ("kinked") copper wire 3 mm in diameter during a 400 kilo-amp pulse. These predictions, utilizing available material, EOS, and conductivity data for copper and the known characteristics of an existing capacitor-bank pulsed power supply, were then used to configure an experiment. The experiments were in excellent agreement with the prior simulations. Both computational and experimental results (including electrical data and flash X-rays) will be presented.

  16. Numerical Modelling of the Deep Impact Mission Experiment

    NASA Technical Reports Server (NTRS)

    Wuennemann, K.; Collins, G. S.; Melosh, H. J.

    2005-01-01

    NASA's Deep Impact Mission (launched January 2005) will provide, for the first time ever, insights into the interior of a comet (Tempel 1) by shooting an approximately 370 kg projectile onto the surface of the comet's nucleus. Although it is usually assumed that comets consist of a very porous mixture of water ice and rock, little is known about the internal structure and in particular the constitutive material properties of a comet. It is therefore difficult to predict the dimensions of the excavated crater. Estimates of the crater size are based on laboratory experiments of impacts into various target compositions of different densities and porosities using appropriate scaling laws; they range from tens of meters up to 250 m in diameter [1]. The size of the crater depends mainly on the physical process(es) that govern formation: Smaller sizes are expected if (1) strength, rather than gravity, limits crater growth; and, perhaps even more crucially, if (2) internal energy losses by pore-space collapse reduce the coupling efficiency (compaction craters). To investigate the effect of pore space collapse and strength of the target, we conducted a suite of numerical experiments and implemented a novel approach for modeling porosity and the compaction of pores in hydrocode calculations.

  17. State Dependence of Network Output: Modeling and Experiments

    PubMed Central

    Nadim, Farzan; Brezina, Vladimir; Destexhe, Alain; Linster, Christiane

    2008-01-01

    Emerging experimental evidence suggests that both networks and their component neurons respond to similar inputs differently depending on the state of network activity. The network state is determined by the intrinsic dynamical structure of the network and may change as a function of neuromodulation, the balance or stochasticity of synaptic inputs to the network and the history of network activity. Much of the knowledge on state-dependent effects comes from comparisons of awake and sleep states of the mammalian brain. Yet, the mechanisms underlying these states are difficult to unravel. Several vertebrate and invertebrate studies have elucidated cellular and synaptic mechanisms of state-dependence resulting from neuromodulation, sensory input, and experience. Recent studies have combined modeling and experiments to examine the computational principles that emerge when network state is taken into account; these studies are highlighted in this article. We discuss these principles in a variety of systems (mammalian, crustacean, and mollusk) to demonstrate the unifying theme of state-dependence of network output. PMID:19005044

  18. Representing plants as rigid cylinders in experiments and models

    NASA Astrophysics Data System (ADS)

    Vargas-Luna, Andrés; Crosato, Alessandra; Calvani, Giulio; Uijttewaal, Wim S. J.

    2016-07-01

    Simulating the morphological adaptation of water systems often requires including the effects of plants on water and sediment dynamics. Physical and numerical models need to represent vegetation in a schematic, easily quantifiable way despite the variety of sizes, shapes and flexibility of real plants. Common approaches represent plants as rigid cylinders, but the ability of these schematizations to reproduce the effects of vegetation on morphodynamic processes has never been analyzed systematically. This work focuses on the consequences of representing plants as rigid cylinders in laboratory tests and numerical simulations. New experiments show that the flow resistance decreases for increasing element Reynolds numbers for both plants and rigid cylinders. Cylinders on river banks can qualitatively reproduce vegetation effects on channel width and bank-related processes. A comparative review of numerical simulations shows that Baptist's method, which sums the contributions of bed shear stress and vegetation drag, underestimates bed erosion within sparse vegetation in real rivers and overestimates the mean flow velocity in laboratory experiments. This is due to assuming uniform flow among plants and to an overestimation of the role of the submergence ratio.
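
    A minimal sketch of the commonly cited Baptist (2007) expression for the effective Chezy coefficient of flow through and over rigid cylinders, combining a bed-resistance term, a stem-drag term, and a logarithmic surface-layer term; treat the exact form and all parameter values here as illustrative assumptions rather than the formulation evaluated in the paper.

        import math

        G     = 9.81    # gravity [m/s^2]
        KAPPA = 0.41    # von Karman constant

        def baptist_chezy(c_bed, drag_coeff, stem_density, stem_diameter, veg_height, depth):
            """Effective Chezy coefficient [m^0.5/s] for flow over rigid cylindrical stems."""
            h_v = min(veg_height, depth)   # emergent vegetation only occupies the water depth
            vegetated = math.sqrt(1.0 / (1.0 / c_bed ** 2 +
                                         drag_coeff * stem_density * stem_diameter * h_v / (2.0 * G)))
            if depth <= veg_height:
                return vegetated            # emergent: no free surface layer above the canopy
            return vegetated + math.sqrt(G) / KAPPA * math.log(depth / veg_height)

        # Example: sparse stems (50 stems/m^2, 5 mm diameter, 0.3 m tall) in 1 m of water
        print(baptist_chezy(c_bed=60.0, drag_coeff=1.0, stem_density=50.0,
                            stem_diameter=0.005, veg_height=0.3, depth=1.0))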

  19. Ignition and Growth Modeling of LX-17 Hockey Puck Experiments

    SciTech Connect

    Tarver, C M

    2004-04-19

    Detonating solid plastic bonded explosives (PBX) formulated with the insensitive molecule triaminotrinitrobenzene (TATB) exhibit measurable reaction zone lengths, curved shock fronts, and regions of failing chemical reaction at abrupt changes in the charge geometry. A recent set of ''hockey puck'' experiments measured the breakout times of diverging detonation waves in ambient temperature LX-17 (92.5 % TATB plus 7.5% Kel-F binder) and the breakout times at the lower surfaces of 15 mm thick LX-17 discs placed below the detonator-booster plane. The LX-17 detonation waves in these discs grow outward from the initial wave leaving regions of unreacted or partially reacted TATB in the corners of these charges. This new experimental data is accurately simulated for the first time using the Ignition and Growth reactive flow model for LX-17, which is normalized to a great deal of detonation reaction zone, failure diameter and diverging detonation data. A pressure cubed dependence for the main growth of reaction rate yields excellent agreement with experiment, while a pressure squared rate diverges too quickly and a pressure quadrupled rate diverges too slowly in the LX-17 below the booster equatorial plane.
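
    A minimal sketch of the pressure dependence discussed above: in the Ignition and Growth framework the main growth contribution to the burn rate scales roughly as G(1-F)^c F^d P^y, and the paper reports that y = 3 matches the LX-17 breakout data; the coefficients below are placeholders, not the calibrated LX-17 parameter set.

        def growth_rate(burn_fraction, pressure_gpa, G=100.0, c=0.667, d=0.111, y=3.0):
            """Growth contribution to dF/dt as a function of burn fraction F and pressure P (assumed form)."""
            F = burn_fraction
            return G * (1.0 - F) ** c * F ** d * pressure_gpa ** y

        # Relative effect of the pressure exponent at a fixed state (F = 0.5, P = 20 GPa)
        for y in (2.0, 3.0, 4.0):
            print(y, growth_rate(0.5, 20.0, y=y))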

  20. Hyperspectral imaging technique for determination of pork freshness attributes

    NASA Astrophysics Data System (ADS)

    Li, Yongyu; Zhang, Leilei; Peng, Yankun; Tang, Xiuying; Chao, Kuanglin; Dhakal, Sagar

    2011-06-01

    Freshness of pork is an important quality attribute, which can vary greatly in storage and logistics. The specific objectives of this research were to develop a hyperspectral imaging system to predict pork freshness based on quality attributes such as total volatile basic nitrogen (TVB-N), pH value and color parameters (L*, a*, b*). Pork samples were packed in sealed plastic bags and then stored at 4°C. Every 12 hours, hyperspectral scattering images were collected from the pork surface over the range of 400 nm to 1100 nm. Two different methods were used to extract scattering feature spectra from the hyperspectral scattering images. First, the spectral scattering profiles at individual wavelengths were fitted accurately by a three-parameter Lorentzian distribution (LD) function; second, reflectance spectra were extracted from the scattering images. The Partial Least Squares Regression (PLSR) method was used to establish prediction models for pork freshness. The results showed that the PLSR models based on reflectance spectra were better than those based on combinations of LD "parameter spectra" in predicting TVB-N, with a correlation coefficient (r) of 0.90 and a standard error of prediction (SEP) of 7.80 mg/100 g. Moreover, a prediction model for pork freshness was established by using a combination of TVB-N, pH and color parameters. It gave good prediction results, with r = 0.91 for pork freshness. The research demonstrated that the hyperspectral scattering technique is a valid tool for real-time and nondestructive detection of pork freshness.
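
    A minimal sketch of fitting a three-parameter Lorentzian-type profile to a radial scattering profile at one wavelength, of the kind used to build the LD "parameter spectra" fed into PLSR; the functional form and the synthetic data below are assumptions for illustration, not the exact profile used in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(x, a, b, c):
            """Assumed three-parameter scattering profile: center value a, width b, shape c."""
            return a / (1.0 + (x / b) ** c)

        # x: distance from the incident point [mm]; y: reflectance at one wavelength (synthetic)
        x = np.linspace(0.5, 10.0, 40)
        y = lorentzian(x, 1.0, 2.0, 2.5) + np.random.default_rng(0).normal(0, 0.01, x.size)

        params, _ = curve_fit(lorentzian, x, y, p0=[1.0, 1.0, 2.0], bounds=(0, np.inf))
        print(params)   # fitted (a, b, c); repeated per wavelength, these form the parameter spectra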

  1. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model

  2. Recent Experiences in Aftershock Hazard Modelling in New Zealand

    NASA Astrophysics Data System (ADS)

    Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.

    2014-12-01

    The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
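
    A minimal sketch of the short-term clustering ingredient mentioned above (the ETAS family), in which each past event contributes an Omori-Utsu decaying rate scaled by its magnitude; the parameter values are generic textbook-style choices, not those used in the New Zealand forecasts.

        import math

        def etas_rate(t, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=4.0):
            """Expected earthquake rate at time t [events/day] given past (t_i, M_i) pairs."""
            rate = mu                                   # background rate
            for t_i, m_i in events:
                if t_i < t:
                    rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
            return rate

        # Two parent events (hypothetical): a large mainshock at t = 0 days and a later aftershock
        catalogue = [(0.0, 7.1), (171.0, 6.2)]
        print(etas_rate(1.0, catalogue), etas_rate(200.0, catalogue))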

  3. Simple models for the heating curve in magnetic hyperthermia experiments

    NASA Astrophysics Data System (ADS)

    Landi, G. T.

    2013-01-01

    The use of magnetic nanoparticles for magnetic hyperthermia cancer treatment is a rapidly developing field of multidisciplinary research. From the material's standpoint, the main challenge is to optimize the heating properties of the material while maintaining the frequency of the exciting field as low as possible to avoid biological side effects. The figure of merit in this context is the specific absorption rate (SAR), which is usually measured from calorimetric experiments. Such measurements, which we refer to as heating curves, contain a substantial amount of information regarding the energy barrier distribution of the sample. This follows because the SAR itself is a function of temperature, and reflects the underlying magneto-thermal properties of the system. Unfortunately, however, this aspect of the problem is seldom explored and, commonly, only the SAR at ambient temperature is extracted from the heating curve. In this paper we introduce a simple model capable of describing the entire heating curve via a single differential equation. The SAR enters as a forcing term, thus facilitating the use of different models for it. We discuss in detail the heating in the context of Néel relaxation and show that high-anisotropy samples may present an inflection point related to the reduction of the energy barrier caused by the increase in temperature. Monodisperse and polydisperse systems are discussed in detail, and a new alternative to compute the temperature dependence of the SAR from the heating curve is presented.
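
    A minimal sketch of a heating-curve equation of the kind described above, in which the sample temperature obeys a single differential equation with the SAR as the forcing term and Newtonian cooling opposing it; the SAR roll-off and all parameter values are illustrative assumptions, not the Néel-relaxation model developed in the paper.

        import numpy as np

        def sar(T, sar0=50.0, T_block=330.0, width=10.0):
            """Temperature-dependent SAR [W/g of nanoparticles]; assumed smooth roll-off
            as thermal energy overcomes the barrier."""
            return sar0 / (1.0 + np.exp((T - T_block) / width))

        def heating_curve(T0=300.0, T_env=300.0, m_np=0.01, m_s=1.0, c_p=4.18,
                          hA=0.005, dt=0.1, steps=6000):
            """Euler integration of m_s*c_p*dT/dt = m_np*SAR(T) - hA*(T - T_env)."""
            T = np.empty(steps)
            T[0] = T0
            for i in range(1, steps):
                dTdt = (m_np * sar(T[i - 1]) - hA * (T[i - 1] - T_env)) / (m_s * c_p)
                T[i] = T[i - 1] + dt * dTdt
            return T

        print(heating_curve()[::1000])   # sample temperature every 100 s of simulated heating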

  4. Bio-inspired design of dental multilayers: experiments and model.

    PubMed

    Niu, Xinrui; Rahbar, Nima; Farias, Stephen; Soboyejo, Wole

    2009-12-01

    This paper combines experiments, simulations and analytical modeling that are inspired by the stress reductions associated with the functionally graded structures of the dentin-enamel junctions (DEJs) in natural teeth. Unlike conventional crown structures in which ceramic crowns are bonded to the bottom layer with an adhesive layer, real teeth do not have a distinct "adhesive layer" between the enamel and the dentin layers. Instead, there is a graded transition from enamel to dentin within an approximately 10 to 100 μm thick region that is called the Dentin Enamel Junction (DEJ). In this paper, a micro-scale, bio-inspired functionally graded structure is used to bond the top ceramic layer (zirconia) to a dentin-like ceramic-filled polymer substrate. The bio-inspired functionally graded material (FGM) is shown to exhibit higher critical loads over a wide range of loading rates. The measured critical loads are predicted using a rate-dependent slow crack growth (RDEASCG) model. The implications of the results are then discussed for the design of bio-inspired dental multilayers.

  5. Eulerian hydrocode modeling of a dynamic tensile extrusion experiment (u)

    SciTech Connect

    Burkett, Michael W; Clancy, Sean P

    2009-01-01

    Eulerian hydrocode simulations utilizing the Mechanical Threshold Stress flow stress model were performed to provide insight into a dynamic extrusion experiment. The dynamic extrusion response of copper (three different grain sizes) and tantalum spheres was simulated with MESA, an explicit, 2-D Eulerian continuum mechanics hydrocode, and compared with experimental data. The experimental data consisted of high-speed images of the extrusion process, recovered extruded samples, and post-test metallography. The hydrocode was developed to predict large-strain and high-strain-rate loading problems. Features of MESA include a high-order advection algorithm, a material interface tracking scheme, and van Leer monotonic advection limiting. The Mechanical Threshold Stress (MTS) model was utilized to evolve the flow stress as a function of strain, strain rate and temperature for copper and tantalum. Plastic strains exceeding 300% were predicted in the extrusion of copper at 400 m/s, while plastic strains exceeding 800% were predicted for Ta. Quantitative comparisons between the predicted and measured deformation topologies and extrusion rate were made. Additionally, predictions of the texture evolution (based upon the deformation rate history and the rigid body rotations experienced by the copper during the extrusion process) were compared with the orientation imaging microscopy measurements. Finally, comparisons between the calculated and measured influence of the initial texture on the dynamic extrusion response of tantalum were made.
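
    The MTS model referenced above evolves flow stress with strain, strain rate, and temperature. The snippet below evaluates a schematic single-barrier thermal-activation scaling of that general type; the constants are purely illustrative (not calibrated copper or tantalum values), and the full MTS formulation used with MESA contains additional hardening and interaction terms.

    ```python
    import numpy as np

    # Illustrative material constants (placeholders, not calibrated values).
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    b = 2.55e-10         # Burgers vector magnitude, m
    mu = 45.0e9          # shear modulus, Pa
    g0 = 1.6             # normalized activation energy
    eps0_dot = 1.0e7     # reference strain rate, 1/s
    p, q = 2.0 / 3.0, 1.0
    sigma_a = 40.0e6     # athermal stress component, Pa
    sigma_hat = 800.0e6  # mechanical threshold stress, Pa

    def flow_stress(strain_rate, temperature):
        """Schematic thermally activated scaling: sigma = sigma_a + s * sigma_hat,
        with s = [1 - (kT ln(eps0_dot/eps_dot) / (g0 mu b^3))**(1/q)]**(1/p)."""
        arg = (k_b * temperature * np.log(eps0_dot / strain_rate)) / (g0 * mu * b ** 3)
        s = (1.0 - arg ** (1.0 / q)) ** (1.0 / p)
        return sigma_a + s * sigma_hat

    # Flow stress rises with strain rate and falls with temperature.
    print(flow_stress(1.0e4, 300.0) / 1e6, "MPa at 1e4 1/s, 300 K")
    print(flow_stress(1.0e4, 700.0) / 1e6, "MPa at 1e4 1/s, 700 K")
    ```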

  6. Climate predictability experiments with a general circulation model

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Arpe, K.; Roeckner, E.; Schulzweida, U.

    1996-03-01

    The atmospheric response to the evolution of global sea surface temperatures from 1979 to 1992 is studied using the Max-Planck-Institut 19-level atmospheric general circulation model, ECHAM3, at T42 resolution. Five separate 14-year integrations are performed, and results are presented for each individual realization and for the ensemble-averaged response. The results are compared to a 30-year control integration using a climatological monthly mean state of the sea surface temperatures and to analysis data. It is found that the ECHAM3 model, by and large, does reproduce the observed response pattern to El Niño and La Niña. During El Niño events, the subtropical jet streams in both hemispheres are intensified and displaced equatorward, and there is a tendency towards weak upper easterlies over the equator. The Southern Oscillation is a very stable feature of the integrations and is accurately reproduced in all experiments. The interannual variability at middle and high latitudes, on the other hand, is strongly dominated by chaotic dynamics, and the tropical SST forcing only modulates the atmospheric circulation. The potential predictability of the model is investigated for six different regions. The signal-to-noise ratio is large in most parts of the tropical belt, of medium strength in the western hemisphere, and generally small over the European area. The ENSO signal is most pronounced during the boreal spring. A particularly strong signal in the precipitation field in the extratropics during spring can be found over the southern United States. Western Canada is normally warmer during the warm ENSO phase, while northern Europe is warmer than normal during the ENSO cold phase. The reason is advection of warm air due to a more intense Pacific low than normal during the warm ENSO phase and a more intense Icelandic low than normal during the cold ENSO phase, respectively.

  7. A New Model for Climate Science Research Experiences for Teachers

    NASA Astrophysics Data System (ADS)

    Hatheway, B.

    2012-12-01

    After two years of running a climate science teacher professional development program for secondary teachers, science educators from UCAR and UNC-Greeley have learned the benefits of providing teachers with ample time to interact with scientists, informal educators, and their teaching peers. Many programs that expose teachers to scientific research do a great job of energizing those teachers and getting them excited about how research is done. We decided to try out a twist on this model - instead of matching teachers with scientists and having them do science in the lab, we introduced the teachers to scientists who agreed to share their data and answer questions as the teachers developed their own activities, curricula, and classroom materials related to the research. Prior to their summer experience, the teachers took three online courses on climate science, which increased their background knowledge and gave them an opportunity to ask higher-level questions of the scientists. By spending time with a cohort of practicing teachers, each individual had much-needed time to interact with their peers, share ideas, collaborate on curriculum, and learn from each other. And because the goal of the program was to create classroom modules that could be implemented in the coming school year, the teachers were able to both learn about climate science research by interacting with scientists and visiting many different labs, and then create materials using data from the scientists. Without dedicated time for creating these classroom materials, it would have been up to the teachers to carve out time during the school year in order to find ways to apply what they learned in the research experience. We feel this approach worked better for the teachers, had a bigger impact on their students than we originally thought, and gave us a new approach to teacher professional development.

  8. Systematic Study of the Content of Phytochemicals in Fresh and Fresh-Cut Vegetables.

    PubMed

    Alarcón-Flores, María Isabel; Romero-González, Roberto; Vidal, José Luis Martínez; Frenich, Antonia Garrido

    2015-01-01

    Vegetables and fruits have beneficial properties for human health, because of the presence of phytochemicals, but their concentration can fluctuate throughout the year. A systematic study of the phytochemical content in tomato, eggplant, carrot, broccoli and grape (fresh and fresh-cut) has been performed at different seasons, using liquid chromatography coupled to triple quadrupole mass spectrometry. It was observed that phenolic acids (the predominant group in carrot, eggplant and tomato) were found at higher concentrations in fresh carrot than in fresh-cut carrot. However, in the case of eggplant, they were detected at a higher content in fresh-cut than in fresh samples. Regarding tomato, the differences in the content of phenolic acids between fresh and fresh-cut were lower than in other matrices, except in winter sampling, where this family was detected at the highest concentration in fresh tomato. In grape, the flavonols content (predominant group) was higher in fresh grape than in fresh-cut during all samplings. The content of glucosinolates was lower in fresh-cut broccoli than in fresh samples in winter and spring sampling, although this trend changes in summer and autumn. In summary, phytochemical concentration did show significant differences during one-year monitoring, and the families of phytochemicals presented different behaviors depending on the matrix studied. PMID:26783709

  9. Estimating fresh biomass of maize plants from their RGB images in greenhouse phenotyping

    NASA Astrophysics Data System (ADS)

    Ge, Yufeng; Pandey, Piyush; Bai, Geng

    2016-05-01

    High throughput phenotyping (HTP) is an emerging frontier field across many basic and applied plant science disciplines. RGB imaging is most widely used in HTP to extract image-based phenotypes such as pixel volume or projected area. These image-based phenotypes are further used to derive plant physical parameters including plant fresh biomass, plant dry biomass, water use efficiency, etc. In this paper, we investigated the robustness of regression models to predict fresh biomass of maize plants from image-based phenotypes. Data used in this study were from three different experiments. Data were grouped into five datasets, two for model development and three for independent model validation. Three image-derived phenotypes were investigated: BioVolume, Projected.Area.1, and Projected.Area.2. Models were assessed with R2, Bias, and RMSEP (Root Mean Squared Error of Prediction). The results showed that almost all models were validated with high R2 values, indicating that these digital phenotypes can be useful to rank plant biomass on a relative basis. However, on occasions when accurate prediction of plant biomass is needed, it is important for researchers to know that models that relate image-based phenotypes to plant biomass should be carefully constructed. Our results show that the range of plant size and the genotypic diversity of the calibration sets in relation to the validation sets have a large impact on model accuracy. Large maize plants cause systematic bias as they grow toward the top-view camera. Excluding top-view images from modeling can therefore benefit modeling for experiments involving large maize plants.
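
    A minimal sketch of the kind of calibration/validation workflow described above, using synthetic data in place of the real image-derived phenotypes (BioVolume, projected areas) and fresh-biomass measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for an image-derived phenotype (e.g. projected area,
    # in pixels) and measured fresh biomass (g) for calibration and validation sets.
    area_cal = rng.uniform(1e4, 2e5, size=80)
    biomass_cal = 0.004 * area_cal + rng.normal(0.0, 30.0, size=80)
    area_val = rng.uniform(1e4, 2e5, size=40)
    biomass_val = 0.004 * area_val + rng.normal(0.0, 30.0, size=40)

    # Ordinary least-squares fit: biomass = b0 + b1 * area.
    X = np.column_stack([np.ones_like(area_cal), area_cal])
    b0, b1 = np.linalg.lstsq(X, biomass_cal, rcond=None)[0]

    # Independent validation: R^2, bias, and RMSEP as used in the study.
    pred = b0 + b1 * area_val
    resid = biomass_val - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((biomass_val - biomass_val.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    bias = resid.mean()
    rmsep = np.sqrt(np.mean(resid ** 2))
    print(f"R2 = {r2:.3f}, bias = {bias:.2f} g, RMSEP = {rmsep:.2f} g")
    ```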

  10. A Curriculum Model for an Integrated Senior Year Clinical Experience.

    ERIC Educational Resources Information Center

    Wukasch, Ruth N.; Blue, Carolyn L.; Overbay, Jane

    2000-01-01

    A flexible clinical experience for nursing seniors integrates pediatrics, public health, and nursing leadership. Experiences in hospital units, schools, nurse-directed clinics, and home visits expose students to a wide range of settings and issues. (SK)

  11. Experiments in Chemistry: A Model Science Software Tool.

    ERIC Educational Resources Information Center

    Malone, Diana; Tinker, Robert

    1984-01-01

    Describes "Experiments in Chemistry," in which experiments are performed using software and hardware interfaced to the Apple microcomputer's game paddle port. Experiments include temperature, pH electrode, and EMF (cell potential determinations, oxidation-reduction titrations, and precipitation titrations) investigations. (JN)

  12. COUNTERCURRENT FLOW LIMITATION EXPERIMENTS AND MODELING FOR IMPROVED REACTOR SAFETY

    SciTech Connect

    Vierow, Karen

    2008-09-26

    This project is investigating countercurrent flow and “flooding” phenomena in light water reactor systems to improve reactor safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the “surge line” and other techniques which would be difficult to verify in an opaque test section. Based on experiences in the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water test results and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted, but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to the surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008.
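
    The report mentions empirical flooding correlations fitted to the air-water and steam-water data. As one illustration of the general form such correlations take, the sketch below evaluates a classic Wallis-type correlation; the coefficients shown (C = 0.725, m = 1) are typical textbook values, not the constants fitted in this project.

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def dimensionless_flux(j, rho_phase, rho_liquid, rho_gas, diameter):
        """Wallis dimensionless superficial velocity j* for one phase."""
        return j * np.sqrt(rho_phase / (G * diameter * (rho_liquid - rho_gas)))

    def flooding_gas_velocity(j_liquid, rho_liquid, rho_gas, diameter,
                              c=0.725, m=1.0):
        """Gas superficial velocity at the flooding limit from
        sqrt(jg*) + m*sqrt(jf*) = c (Wallis-type correlation)."""
        jf_star = dimensionless_flux(j_liquid, rho_liquid, rho_liquid, rho_gas, diameter)
        jg_star = (c - m * np.sqrt(jf_star)) ** 2
        return jg_star / np.sqrt(rho_gas / (G * diameter * (rho_liquid - rho_gas)))

    # Example: air-water in a 0.25 m diameter tube with a falling film of 0.05 m/s.
    print(flooding_gas_velocity(0.05, rho_liquid=998.0, rho_gas=1.2, diameter=0.25))
    ```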

  13. [A method for assessing the total viable count of fresh meat based on hyperspectral scattering technique].

    PubMed

    Song, Yu-Lin; Peng, Yan-Kun; Guo, Hui; Zhang, Lei-Lei; Zhao, Juan

    2014-03-01

    The objective of this study was to develop a hyperspectral imaging system to predict the bacterial total viable count in fresh pork. The hyperspectral scattering data were curve-fitted by different fitting methods, and differences in the correlation of the resulting models with the bacterial total viable count of fresh pork were compared, thus providing a modeling basis for a future device. A total of 63 fresh pork samples used in the experiment were stored at 4 degrees C in a constant-temperature refrigerator. Measurements were performed every day for 15 days, with 4 or 5 randomly selected samples used each day. Hyperspectral scattering images and spectral scattering data in the wavelength region of 400 to 1100 nm were acquired from the surface of all of the pork samples. The Lorentz and Gompertz functions and their modified forms were applied to fit the scattering profiles of the pork samples. The different parameters obtained from the Lorentz and Gompertz fits and from the modified-function fits represent the optical characteristics of the scattering profiles. The standard values of the total viable count of the pork were obtained by classical microbiological plating methods. Because the standard values of the total viable count were large, the log10 of the total viable count obtained by classical microbiological plating was used to simplify the calculation. Both individual parameters and integrated parameters were explored to develop the models. The multi-linear regression statistical approach was used to establish models for predicting the bacterial total viable count of pork. Both the Lorentz and Gompertz functions and their modified forms include three- and four-parameter formulas. The results showed that the correlation coefficients of the models are higher with the Lorentz three-parameter combination, the Lorentz four-parameter combination, and the Gompertz four-parameter combination than with the individual parameters and other two or
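
    A rough sketch of the two-stage procedure described above, assuming a generic three-parameter Lorentz profile and synthetic data; the actual fitting functions and regression structure of the study differ in detail.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentz(x, a, b, c):
        """Generic three-parameter Lorentz scattering profile (illustrative)."""
        return a + b / (1.0 + (x / c) ** 2)

    rng = np.random.default_rng(2)
    distance_mm = np.linspace(0.0, 10.0, 200)

    # Stage 1: fit the Lorentz profile to each sample's scattering curve.
    n_samples = 30
    params, log_tvc = [], []
    for i in range(n_samples):
        a, b, c = 0.1, 1.0 + 0.02 * i, 2.0 + 0.03 * i        # synthetic "samples"
        profile = lorentz(distance_mm, a, b, c) + rng.normal(0, 0.01, distance_mm.size)
        popt, _ = curve_fit(lorentz, distance_mm, profile, p0=[0.1, 1.0, 2.0])
        params.append(popt)
        log_tvc.append(3.0 + 1.5 * popt[1] - 0.8 * popt[2] + rng.normal(0, 0.1))

    # Stage 2: multi-linear regression of log10(TVC) on the fitted parameters.
    X = np.column_stack([np.ones(n_samples), np.array(params)])
    coef, *_ = np.linalg.lstsq(X, np.array(log_tvc), rcond=None)
    pred = X @ coef
    r = np.corrcoef(pred, log_tvc)[0, 1]
    print(f"correlation coefficient of the parameter-combination model: {r:.3f}")
    ```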

  15. Review of free-surface MHD experiments and modeling.

    SciTech Connect

    Molokov, S.; Reed, C. B.

    2000-06-02

    This review paper was prepared to survey the present status of analytical and experimental work in the area of free-surface MHD and thus provide a well-informed starting point for further work by the Advanced Limiter-diverter Plasma-facing Systems (ALPS) program. The ALPS program was initiated to evaluate the potential for improved performance and lifetime for plasma-facing systems. The main goal of the program is to demonstrate the advantages of advanced limiter/diverter systems over conventional systems in terms of power density capability, component lifetime, and power conversion efficiency, while providing for safe operation and minimizing impurity concerns for the plasma. Most of the work to date has been applied to free surface liquids. A multi-disciplinary team from several institutions has been organized to address the key issues associated with these systems. The main performance goals for advanced limiters and diverters are a peak heat flux of >50 MW/m², elimination of a lifetime limit for erosion, and the ability to extract useful heat at high power conversion efficiency (~40%). The evaluation of various options is being conducted through a combination of laboratory experiments, modeling of key processes, and conceptual design studies.

  16. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, it is necessary to have feedback on users' opinions of the resulting synthetic speech quality. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied for the evaluation of different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that it is possible to combine this GMM-based statistical evaluation with classical listening tests, or that it can replace them. The results obtained in this way were in good correlation with the results of the conventional listening test, confirming the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.
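
    A minimal sketch of a GMM-based classifier of the general kind described above, assuming pre-extracted spectral feature vectors per storytelling voice class; the feature set, dimensions, and number of mixture components are placeholders.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # Synthetic stand-ins for feature vectors (e.g. MFCC-like) of two voice classes.
    train = {
        "neutral": rng.normal(0.0, 1.0, size=(200, 12)),
        "storyteller": rng.normal(0.8, 1.2, size=(200, 12)),
    }

    # One GMM per voice class, trained on that class's features.
    models = {
        name: GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(feats)
        for name, feats in train.items()
    }

    def classify(utterance_features):
        """Assign the class whose GMM gives the highest average log-likelihood."""
        scores = {name: gmm.score(utterance_features) for name, gmm in models.items()}
        return max(scores, key=scores.get)

    test_utterance = rng.normal(0.8, 1.2, size=(50, 12))
    print(classify(test_utterance))
    ```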

  17. Improved LHCD simulation model and implication for future experiments

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.; Wallace, G.; Baek, S.; Bonoli, P.; Faust, I.; Parker, R.; Labombard, B.; White, A.; Wukitch, S.

    2015-11-01

    The simulation model for LHCD using the ray-tracing/Fokker-Planck (GENRAY/CQL3D) code has been improved. Including realistic 2D SOL profiles resolves the discrepancy previously observed at high density (ne > 1 × 10²⁰ m⁻³). The impact of nonlinear interaction in front of the launcher is investigated. It is shown that the distortion of the launched n∥ spectrum is rather small (up to 10% of injected power). These simulation results suggest that the improvement of current drive observed on Alcator C-Mod is indeed caused by realizing preferable SOL plasma profiles. Implications of these results for future experiments will be discussed. In order to minimize edge parasitic losses, realizing high single-pass absorption and reducing prompt losses in front of the launcher are both crucial. The advantages of LH launch from the low-field side (LFS) and the high-field side (HFS) are compared in this regard. A compact LH launcher suitable for testing LH wave launch from the HFS on a small-scale device has been designed, and its plasma coupling characteristics will be presented. This work was performed on the Alcator C-Mod tokamak, a DoE Office of Science user facility, and is supported by USDoE awards DE-FC02-99ER54512 and DE-AC02-09CH11466.

  18. Hydrodynamic Modeling of the Plasma Liner Experiment (PLX)

    NASA Astrophysics Data System (ADS)

    Cassibry, Jason; Hsu, Scott; Witherspoon, Doug; Gilmore, Marc

    2009-11-01

    Implosions of plasma liners in cylindrically or spherically convergent geometries can produce high pressures and temperatures with a confinement or dwell time of the order of the rarefaction timescale of the liner. The Plasma Liner Experiment (PLX), to be built at LANL, will explore and demonstrate the feasibility of forming imploding plasma liners with the spherical convergence of hypersonic plasma jets. Modeling will be performed using SPHC and MACH2. According to preliminary 3D SPHC results, high-Z plasma liners imploding on vacuum with ~1.5 MJ of initial stored energy will reach ~100 kbar, which is a main objective of the experimental program. Among the objectives of the theoretical PLX effort are to assist in the diagnostic analysis of the PLX, identify possible deleterious effects due to instabilities or asymmetries, identify departures from ideal behavior due to thermal and radiative transport, and help determine scaling laws for possible follow-on applications of ~1 Mbar HEDP plasmas and magneto-inertial fusion. An overview of the plan to accomplish these objectives will be presented, and preliminary results will be summarized.

  19. Island Divertor Plate Modeling for the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Hartwell, G. J.; Massidda, S. D.; Ennis, D. A.; Knowlton, S. F.; Maurer, D. A.; Bader, A.

    2015-11-01

    Edge magnetic island divertors can be used as a method of plasma particle and heat exhaust in long pulse stellarator experiments. Detailed power loading on these structures and its relationship to the long connection length scrape off layer physics is a new Compact Toroidal Hybrid (CTH) research thrust. CTH is a five field period, l = 2 torsatron with R0 = 0.75 m, ap ~ 0.2 m, and |B| <= 0.7 T. For these studies CTH is configured as a pure stellarator using a 28 GHz, 200 kW gyrotron operating at 2nd harmonic for ECRH. We report the results of EMC3-EIRENE modeling of divertor plates near magnetic island structures. The edge rotational transform is varied by adjusting the ratio of currents in the helical and toroidal field coils. A poloidal field coil adjusts the shear of the rotational transform profile, and width of the magnetic island, while the phase of the island is rotated with a set of five error coils producing an n = 1 perturbation. For the studies conducted, a magnetic configuration with a large n = 1, m = 3 magnetic island at the edge is generated. Results from multiple potential divertor plate locations will be presented and discussed. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.

  20. Equivalent acoustic impedance model. Part 1: experiments and semi-physical model

    NASA Astrophysics Data System (ADS)

    Faverjon, B.; Soize, C.

    2004-09-01

    This research is devoted to the construction of an equivalent acoustic impedance model for a soundproofing scheme consisting of a three-dimensional porous medium inserted between two thin plates. Part 1 of this paper presents the experiments performed and a probabilistic algebraic model of the wall acoustic impedance constructed using the experimental database for the medium- and high-frequency ranges. The probabilistic algebraic model is constructed by using the general mathematical properties of wall acoustic impedance operators (symmetry, odd and even functions with respect to the frequency, decreasing functions when frequency goes to infinity, behaviour when frequency goes to zero, and so on). The parameters introduced in this probabilistic algebraic model are fitted with the experimental database. Finally, this probabilistic algebraic model summarizes all the experimental databases and consequently can be reused in other research.

  1. Model slope infiltration experiments for shallow landslides early warning

    NASA Astrophysics Data System (ADS)

    Damiano, E.; Greco, R.; Guida, A.; Olivares, L.; Picarelli, L.

    2009-04-01

    simple empirical models [Versace et al., 2003] based on correlation between some features of rainfall records (cumulated height, duration, season, etc.) and the corresponding observed landslides. Laboratory experiments on instrumented small-scale slope models represent an effective way to provide data sets [Eckersley, 1990; Wang and Sassa, 2001] useful for building up more complex models of landslide triggering prediction. At the Geotechnical Laboratory of C.I.R.I.AM. an instrumented flume is available to investigate the mechanics of landslides in unsaturated deposits of granular soils [Olivares et al. 2003; Damiano, 2004; Olivares et al., 2007]. In the flume a model slope is reconstituted by a moist-tamping technique and subjected to an artificial uniform rainfall until failure occurs. The state of stress and strain of the slope is monitored during the entire test, from the infiltration process through the early post-failure stage: the monitoring system consists of several mini-tensiometers placed at different locations and depths to measure suction, mini-transducers to measure positive pore pressures, laser sensors to measure settlements of the ground surface, and high-definition video cameras to obtain, through dedicated particle image velocimetry (PIV) software, the overall horizontal displacement field. In addition, TDR sensors, used with an innovative technique [Greco, 2006], allow reconstruction of the water content profile of the soil along the entire thickness of the investigated deposit and monitoring of its continuous changes during infiltration. In this paper a series of laboratory tests carried out on model slopes in granular pyroclastic soils taken from the mountainous area north-east of Napoli is presented. The experimental results demonstrate the completeness of the information provided by the various sensors installed. In particular, very useful information is given by the coupled measurements of soil water content by TDR and suction by tensiometers. Knowledge of

  2. CFD Simulation of the distribution of ClO2 in fresh produce to improve safety

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The shelf life of fresh-cut produce may be prolonged with the injection of bactericide gases like chlorine dioxide (ClO2). A comparative study has been conducted by modeling the injection of three different gases, CO2, ClO2 and N2, inside PET clamshell containers commonly used to package fresh produ...

  3. Hydrogeochemical tool to identify salinization or freshening of coastal aquifers determined from combined field work, experiments, and modeling.

    PubMed

    Russak, Amos; Sivan, Orit

    2010-06-01

    This study proposes a hydrogeochemical tool to distinguish between salinization and freshening events of a coastal aquifer and quantifies their effect on groundwater characteristics. This is based on the chemical composition of the fresh-saline water interface (FSI) determined from combined field work, column experiments with the same sediments, and modeling. The experimental results were modeled using the PHREEQC code and were compared to field data from the coastal aquifer of Israel. The decrease in the isotopic composition of the dissolved inorganic carbon (delta(13)C(DIC)) of the saline water indicates that, during seawater intrusion and coastal salinization, oxidation of organic carbon occurs. However, the main process operating during salinization or freshening events in coastal aquifers is cation exchange. The relative changes in Ca(2+), Sr(2+), and K(+) concentrations during salinization and freshening events are used as a reliable tool for characterizing the status of a coastal aquifer. The field data suggest that coastal aquifers may switch from freshening to salinization on a seasonal time scale.
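
    As a rough illustration of the reaction-versus-mixing reasoning behind such a tool, the sketch below compares measured cation concentrations at the fresh-saline interface with the concentrations expected from conservative mixing of fresh and seawater end-members (the mixing fraction taken from chloride); a positive Ca²⁺ excess is consistent with the cation exchange expected during salinization, a negative one with freshening. The end-member and sample values are illustrative only, not data from the Israeli coastal aquifer.

    ```python
    # Illustrative end-member compositions (mmol/L).
    fresh = {"Cl": 1.0, "Ca": 2.0, "K": 0.1}
    sea = {"Cl": 560.0, "Ca": 10.5, "K": 10.0}

    def mixing_fraction(cl_sample):
        """Fraction of seawater implied by chloride (assumed conservative)."""
        return (cl_sample - fresh["Cl"]) / (sea["Cl"] - fresh["Cl"])

    def cation_excess(sample):
        """Measured minus conservative-mixing concentration for each cation."""
        f = mixing_fraction(sample["Cl"])
        return {ion: sample[ion] - (f * sea[ion] + (1.0 - f) * fresh[ion])
                for ion in ("Ca", "K")}

    # A hypothetical FSI sample: Ca above and K below the mixing line, a pattern
    # consistent with Na-Ca exchange during seawater intrusion (salinization).
    sample = {"Cl": 120.0, "Ca": 5.5, "K": 1.2}
    print(cation_excess(sample))
    ```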

  4. DISPERSIBILITY OF CRUDE OIL IN FRESH WATER

    EPA Science Inventory

    The effects of surfactant composition on the ability of chemical dispersants to disperse crude oil in fresh water were investigated. The objective of this research was to determine whether effective fresh water dispersants can be designed in case this technology is ever consider...

  5. Fresh fruit: microstructure, texture and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fresh-cut produce has a huge following in today’s supermarkets. The trend follows the need to decrease preparation time as well as the desire to follow the current health guidelines for consumption of more whole “heart-healthy” foods. Additionally, consumers are able to enjoy a variety of fresh prod...

  6. Microbial Safety of Fresh Produce - Preface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fresh produce has been the source of recent outbreaks of foodborne illness which have caused sickness, hospitalizations and deaths of consumers, as well as serious adverse economic impact on growers and processors. The preface for the book entitled “Microbial Safety of Fresh Produce” discusses possi...

  7. Recovering fresh water stored in saline limestone aquifers.

    USGS Publications Warehouse

    Merritt, M.L.

    1986-01-01

    Numerical modeling techniques are used to examine the hydrogeologic, design, and management factors governing the recovery efficiency of subsurface fresh-water storage. The modeling approach permitted many combinations of conditions to be studied. A sensitivity analysis was used that consisted of varying certain parameters while keeping constant as many other parameters or processes as possible. The results show that a loss of recovery efficiency resulted from: 1) processes causing mixing of injected fresh water with native saline water (hydrodynamic dispersion); 2) processes or conditions causing the irreversible displacement of the injected fresh water with respect to the well (buoyancy stratification and background hydraulic gradients); or 3) processes or procedures causing injection and withdrawal flow patterns to be dissimilar (dissimilar injection and withdrawal schedules in multiple-well systems). Other results indicated that recovery efficiency improved considerably during the first several successive cycles, provided that each recovery phase ended when the chloride concentration of withdrawn water exceeded established criteria for potability (usually 250 milligrams per liter). Other findings were that fresh water injected into highly permeable or highly saline aquifers would buoy rapidly with a deleterious effect on recovery efficiency. -Author
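
    A minimal sketch of how recovery efficiency is typically quantified in such cycle simulations, assuming a recorded chloride breakthrough curve during withdrawal and the 250 mg/L potability criterion cited above; the breakthrough data are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical withdrawal record: cumulative volume withdrawn (m^3) and the
    # chloride concentration of the withdrawn water (mg/L) at each step.
    volume = np.linspace(0.0, 100_000.0, 200)
    chloride = 20.0 + 600.0 * (volume / volume[-1]) ** 3   # rising toward saline

    injected_volume = 100_000.0      # m^3 of fresh water injected this cycle
    potability_limit = 250.0         # mg/L chloride

    # Recovery stops when the withdrawn water first exceeds the potability limit.
    usable = chloride <= potability_limit
    recovered_volume = volume[usable][-1] if usable.any() else 0.0
    recovery_efficiency = recovered_volume / injected_volume
    print(f"recovery efficiency: {recovery_efficiency:.1%}")
    ```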

  9. Modeling of coherent ultrafast magneto-optical experiments: Light-induced molecular mean-field model

    SciTech Connect

    Hinschberger, Y.; Hervieux, P.-A.

    2015-12-28

    We present calculations which aim to describe coherent ultrafast magneto-optical effects observed in time-resolved pump-probe experiments. Our approach is based on a nonlinear semi-classical Drude-Voigt model and is used to interpret experiments performed on nickel ferromagnetic thin film. Within this framework, a phenomenological light-induced coherent molecular mean-field depending on the polarizations of the pump and probe pulses is proposed whose microscopic origin is related to a spin-orbit coupling involving the electron spins of the material sample and the electric field of the laser pulses. Theoretical predictions are compared to available experimental data. The model successfully reproduces the observed experimental trends and gives meaningful insight into the understanding of magneto-optical rotation behavior in the ultrafast regime. Theoretical predictions for further experimental studies are also proposed.

  10. Experiments versus modeling of buoyant drying of porous media

    NASA Astrophysics Data System (ADS)

    Salin, D.; Yiotis, A.; Tajer, E.; Yortsos, Y. C.

    2012-12-01

    A series of isothermal drying experiments in packed glass beads saturated with volatile hydrocarbons (hexane or pentane) was conducted. The transparent glass cells containing the packing allow for visual monitoring of the phase distribution patterns below the surface, including the formation of liquid films, as the gaseous phase invades the pore space, and for control of the thickness of the diffusive mass boundary layer over the packing. We demonstrate the existence of an early Constant Rate Period (CRP), which lasts as long as the films saturate the surface of the packing, and of a subsequent Falling Rate Period (FRP), which begins practically after the detachment of the film tips from the external surface. During the CRP, the process is controlled by diffusion within the stagnant gaseous phase in the upper part of the cells, yielding a Stefan tube problem solution. During the FRP, the process is controlled by diffusion within the packing, with a drying rate inversely proportional to the observed position of the film tips in the cell. The critical residual liquid saturation that marks the transition between these two regimes is found to be a function of the average bead size in our packs and of the incline of the cells with respect to the vertical, with larger beads and angles closer to the vertical position leading to earlier film detachment times and higher critical saturations. We developed a model for the drying of porous media in the presence of gravity. It incorporates the effects of corner film flow, internal and external mass transfer, and gravity. Analytical results were derived when gravity opposes
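
    During the constant rate period described above, the evaporation rate is set by vapour diffusion across the stagnant gas column above the packing (a Stefan tube problem). A minimal sketch of that classical solution, with illustrative property values for hexane at room temperature, follows; the numbers are not taken from the experiments.

    ```python
    import numpy as np

    R = 8.314          # gas constant, J/(mol K)
    T = 298.0          # temperature, K
    P = 101_325.0      # total pressure, Pa

    # Illustrative values for hexane vapour diffusing through air.
    D_ab = 8.0e-6      # binary diffusion coefficient, m^2/s
    p_sat = 20_000.0   # vapour pressure at the liquid surface, Pa (approx.)
    p_far = 0.0        # vapour partial pressure at the open top, Pa
    L = 0.05           # length of the stagnant gas column, m

    # Stefan-tube molar evaporation flux (mol m^-2 s^-1):
    # N = (D * P) / (R * T * L) * ln((P - p_far) / (P - p_sat))
    flux = (D_ab * P) / (R * T * L) * np.log((P - p_far) / (P - p_sat))
    print(f"evaporation flux during the CRP: {flux:.3e} mol m^-2 s^-1")
    ```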

  11. Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations

    NASA Astrophysics Data System (ADS)

    Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.

    2010-12-01

    The project has two main goals: (a) improve the understanding of how earthquakes are nucleated, with specific focus on the seismic response to periodic stresses (such as tidal or seasonal variations); (b) use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity, the period and amplitude of cyclic loadings, and whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). A unifying phase diagram for the dynamics of sheared solids

  12. Mesoscale Modeling During Mixed-Phase Arctic Cloud Experiment

    SciTech Connect

    Avramov, A.; Harringston, J.Y.; Verlinde, J.

    2005-03-18

    Mixed-phase arctic stratus clouds are the predominant cloud type in the Arctic (Curry et al. 2000) and through various feedback mechanisms exert a strong influence on the Arctic climate. Perhaps one of the most intriguing of their features is that they tend to have liquid tops that precipitate ice. Despite the fact that this situation is colloidally unstable, these cloud systems are quite long lived - from a few days to over a couple of weeks. It has been hypothesized that mixed-phase clouds are maintained through a balance between liquid water condensation resulting from the cloud-top radiative cooling and ice removal by precipitation (Pinto 1998; Harrington et al. 1999). In their modeling study, Harrington et al. (1999) found that the maintenance of this balance depends strongly on the ambient concentration of ice-forming nuclei (IFN). In a follow-up study, Jiang et al. (2002), using only 30% of the IFN concentration predicted by the Meyers et al. (1992) IFN parameterization, were able to obtain results similar to the observations reported by Pinto (1998). The IFN concentration measurements collected during the Mixed-Phase Arctic Cloud Experiment (M-PACE), conducted in October 2004 over the North Slope of Alaska and the Beaufort Sea (Verlinde et al. 2005), also showed much lower values than those predicted (Prenne, pers. comm.) by currently accepted ice nucleation parameterizations (e.g. Meyers et al. 1992). The goal of this study is to use the extensive IFN data taken during M-PACE to examine what effects low IFN concentrations have on mesoscale cloud structure and coastal dynamics.

  13. User's instructions for the GE cardiovascular model to simulate LBNP and tilt experiments, with graphic capabilities

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The present form of this cardiovascular model simulates both 1-g and zero-g LBNP (lower body negative pressure) experiments and tilt experiments. In addition, the model simulates LBNP experiments at any body angle. The model is currently accessible on the Univac 1110 Time-Shared System in an interactive operational mode. Model output may be in tabular form and/or graphic form. The graphic capabilities are programmed for the Tektronix 4010 graphics terminal and the Univac 1110.

  14. Response Surface Model Building Using Orthogonal Arrays for Computer Experiments

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Braun, Robert D.; Moore, Arlene A.; Lepsch, Roger A.

    1997-01-01

    This study investigates response surface methods for computer experiments and discusses some of the approaches available. Orthogonal arrays constructed for computer experiments are studied and an example application to a technology selection and optimization study for a reusable launch vehicle is presented.
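
    A minimal sketch of the general workflow the study describes - evaluate an expensive analysis at the points of a three-level design and fit a second-order polynomial response surface by least squares - using a nine-run design in two factors (which coincides with the first two columns of an L9 orthogonal array) and a stand-in objective function:

    ```python
    import numpy as np
    from itertools import product

    # Nine-run, three-level design in two factors (coded levels -1, 0, +1).
    levels = [-1.0, 0.0, 1.0]
    design = np.array(list(product(levels, levels)))

    def expensive_analysis(x1, x2):
        """Stand-in for a costly simulation (e.g. a vehicle sizing analysis)."""
        return 10.0 + 3.0 * x1 - 2.0 * x2 + 1.5 * x1 * x2 + 0.8 * x1**2 + 0.3 * x2**2

    y = np.array([expensive_analysis(x1, x2) for x1, x2 in design])

    # Second-order (quadratic) response surface fitted by least squares.
    x1, x2 = design[:, 0], design[:, 1]
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))

    # The cheap surrogate can now be queried anywhere in the design space.
    surrogate = lambda a, b: np.array([1, a, b, a * b, a**2, b**2]) @ coef
    print("surrogate at (0.5, -0.5):", surrogate(0.5, -0.5))
    ```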

  15. Model-experiment interaction to improve representation of phosphorus limitation in land models

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Yang, X.; Cabugao, K. G. M.; Childs, J.; Gu, L.; Haworth, I.; Mayes, M. A.; Porter, W. S.; Walker, A. P.; Weston, D. J.; Wright, S. J.

    2015-12-01

    Carbon-nutrient interactions play important roles in regulating terrestrial carbon cycle responses to atmospheric and climatic change. None of the CMIP5 models has included routines to represent the phosphorus (P) cycle, although P is commonly considered to be the most limiting nutrient in highly productive, lowland tropical forests. Model simulations with the Community Land Model (CLM-CNP) show that inclusion of P coupling leads to a smaller CO2 fertilization effect and warming-induced CO2 release from tropical ecosystems, but there are important uncertainties in the P model, and improvements are limited by a dearth of data. Sensitivity analysis identifies the relative importance of P cycle parameters in determining P availability and P limitation, and thereby helps to define the critical measurements to make in field campaigns and manipulative experiments. To improve estimates of P supply, parameters that describe maximum amount of labile P in soil and sorption-desorption processes are necessary for modeling the amount of P available for plant uptake. Biochemical mineralization is poorly constrained in the model and will be improved through field observations that link root traits to mycorrhizal activity, phosphatase activity, and root depth distribution. Model representation of P demand by vegetation, which currently is set by fixed stoichiometry and allometric constants, requires a different set of data. Accurate carbon cycle modeling requires accurate parameterization of the photosynthetic machinery: Vc,max and Jmax. Relationships between the photosynthesis parameters and foliar nutrient (N and P) content are being developed, and by including analysis of covariation with other plant traits (e.g., specific leaf area, wood density), we can provide a basis for more dynamic, trait-enabled modeling. With this strong guidance from model sensitivity and uncertainty analysis, field studies are underway in Puerto Rico and Panama to collect model-relevant data on P

  16. 9 CFR 319.141 - Fresh pork sausage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Fresh pork sausage. 319.141 Section... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Sausage Generally: Fresh Sausage § 319.141 Fresh pork sausage. “Fresh Pork Sausage” is sausage prepared with fresh pork or...

  17. 9 CFR 319.142 - Fresh beef sausage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Fresh beef sausage. 319.142 Section... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION Sausage Generally: Fresh Sausage § 319.142 Fresh beef sausage. “Fresh Beef Sausage” is sausage prepared with fresh beef or...

  18. Using the Bifocal Modeling Framework to Resolve "Discrepant Events" Between Physical Experiments and Virtual Models in Biology

    NASA Astrophysics Data System (ADS)

    Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima

    2016-08-01

    In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this study, a group of high school students designed computer models of bacterial growth with reference to a simultaneous physical experiment they were conducting, and were able to validate the correctness of their model against the results of their experiment. Our findings suggest that as the students compared their virtual models with physical experiments, they encountered "discrepant events" that contradicted their existing conceptions and elicited a state of cognitive disequilibrium. This experience of conflict encouraged students to further examine their ideas and to seek more accurate explanations of the observed natural phenomena, improving the design of their computer models.
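
    A minimal sketch of the kind of virtual model a student might build alongside the physical experiment - here a logistic growth curve compared against invented optical-density readings - so that discrepancies between model and experiment become visible:

    ```python
    import numpy as np

    # Invented "physical experiment" readings: optical density every hour.
    hours = np.arange(0, 13)
    measured_od = np.array([0.05, 0.07, 0.10, 0.15, 0.23, 0.34, 0.48,
                            0.63, 0.76, 0.85, 0.90, 0.93, 0.94])

    # Student's virtual model: logistic growth dN/dt = r N (1 - N/K),
    # integrated with a simple Euler step.
    r, K, dt = 0.45, 1.0, 0.1
    t_model = np.arange(0.0, 12.0 + dt, dt)
    N = np.empty_like(t_model)
    N[0] = measured_od[0]
    for i in range(1, t_model.size):
        N[i] = N[i - 1] + dt * r * N[i - 1] * (1.0 - N[i - 1] / K)

    # Compare model and experiment at the measurement times.
    model_at_hours = np.interp(hours, t_model, N)
    discrepancy = measured_od - model_at_hours
    print(np.round(discrepancy, 3))
    ```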

  19. A mitigation scheme for underwater blast: Experiments and modeling

    NASA Astrophysics Data System (ADS)

    Glascoe, Lee G.; Margraf, Jon; McMichael, Larry; Vandersall, Kevin S.

    2012-03-01

    A novel but relatively easy-to-implement mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially submerged structure is evaluated experimentally using scaled aquarium experiments and numerically using a high-fidelity finite element code. Scaled, water-tamped explosive experiments were performed using aquariums of different sizes. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept effectively dampen the explosive response of an aluminum plate and reduce the final displacement at plate center by approximately half. Experiments using an air gap resulted in a focused water slug hitting the plate, an effect we hypothesize to be due to water encasement of the charge. Finite element simulations used for the initial experimental design compare very well to experiments both spatially and temporally for the unmitigated case and for the air-filled media mitigation; simulations accounting for water encasement bound the air-gap experiments. Details of the numerical and experimental approaches are provided, as well as a discussion of results.

  20. Experiments in concept modeling for radiographic image reports.

    PubMed Central

    Bell, D S; Pattison-Gordon, E; Greenes, R A

    1994-01-01

    OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807

  1. HCCI experiments with toluene reference fuels modeled by a semidetailed chemical kinetic model

    SciTech Connect

    Andrae, J.C.G.; Brinck, T.; Kalghatgi, G.T.

    2008-12-15

    A semidetailed mechanism (137 species and 633 reactions) and new experiments in a homogeneous charge compression ignition (HCCI) engine on the autoignition of toluene reference fuels are presented. Skeletal mechanisms for isooctane and n-heptane were added to a detailed toluene submechanism. The model shows generally good agreement with ignition delay times measured in a shock tube and a rapid compression machine and is sensitive to changes in temperature, pressure, and mixture strength. The addition of reactions involving the formation and destruction of the benzylperoxide radical was crucial to modeling toluene shock tube data. Laminar burning velocities for benzene and toluene were well predicted by the model after some revision of the high-temperature chemistry. Moreover, laminar burning velocities of a real gasoline at 353 and 500 K could be predicted by the model using a toluene reference fuel as a surrogate. The model also captures the experimentally observed differences in combustion phasing of toluene/n-heptane mixtures, compared to a primary reference fuel of the same research octane number, in HCCI engines as the intake pressure and temperature are changed. For high intake pressures and low intake temperatures, a sensitivity analysis at the moment of maximum heat release rate shows that the consumption of phenoxy radicals is rate-limiting when a toluene/n-heptane fuel is used, which makes this fuel more resistant to autoignition than the primary reference fuel. Typical CPU times were on the order of seconds for zero-dimensional calculations and minutes for laminar flame speed calculations. Cross reactions between benzylperoxy radicals and n-heptane improved the model predictions of shock tube experiments for φ = 1.0 and temperatures lower than 800 K for an n-heptane/toluene fuel mixture, but cross reactions had no influence on HCCI simulations. (author)
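
    For readers who want to reproduce constant-volume ignition-delay calculations of the kind used to validate such mechanisms, a minimal sketch with Cantera is shown below; the mechanism file name ("trf_mech.yaml"), the species names, and the surrogate composition are placeholders, not the 137-species mechanism of the paper.

    ```python
    import cantera as ct
    import numpy as np

    # Hypothetical toluene-reference-fuel mechanism file; substitute an actual
    # mechanism converted to Cantera format (species names follow that file).
    gas = ct.Solution("trf_mech.yaml")

    # Stoichiometric toluene/n-heptane blend in air at shock-tube-like conditions.
    gas.set_equivalence_ratio(1.0, "C7H16:0.5, C6H5CH3:0.5",
                              "O2:1.0, N2:3.76")
    gas.TP = 800.0, 40.0 * ct.one_atm

    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])

    # Ignition delay taken as the time of maximum temperature rise rate.
    times, temps = [], []
    while sim.time < 0.05:            # integrate up to 50 ms
        sim.step()
        times.append(sim.time)
        temps.append(reactor.T)

    times, temps = np.array(times), np.array(temps)
    tau = times[np.argmax(np.gradient(temps, times))]
    print(f"ignition delay: {tau * 1e3:.2f} ms")
    ```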

  2. Caesium sorption by hydrated cement as a function of degradation state: experiments and modelling.

    PubMed

    Ochs, M; Pointeau, I; Giffaut, E

    2006-01-01

    To provide reliable K(d) data for Cs required for the performance assessment of cement-based radioactive waste repositories, two complementary approaches were followed. First, Cs sorption was determined on a range of hydrated cement paste (HCP) and mortar samples of CEM I and CEM V for different degradation states and solution compositions, as well as on some single mineral phases. Second, a surface complexation-diffuse layer model previously developed by Pointeau et al. [Pointeau, I., Marmier, N., Fromage, F., Fedoroff, M., Giffaut, E., 2001. Cs and Pb uptake by CSH phases of hydrated cement. Material Research Society Symposium Proceedings, 663, 105-113] for Cs sorption on synthetic CSH phases was simplified to facilitate its application to whole HCP and mortars or concrete, following re-assessment of the model parameters. All measurements were compared with model predictions. The sorption data obtained on the different solid phases as a function of conditions corroborate that CSH minerals are the main sorbing phase for Cs in HCP. The data also clearly show the important influence of pH and the dissolved concentration of Na, K and Ca on K(d). It is further suggested that a decrease of pH is concomitant with a decrease of the Ca/Si ratio and a corresponding increase in surface sites with high affinity for Cs and, thus, K(d). Elevated concentrations of cations able to compete with Cs for these sites lead to a decrease of K(d), on the other hand. The simplified model was applied to the sorption measurements performed within this study as well as to a variety of literature data, mainly K(d) values for a variety of fresh HCP and mortar or concrete samples based on different samples of Ordinary Portland Cement as well as blended cements. The results show that the model can be applied reasonably well to a very large variety of conditions in terms of solid and solution compositions that cover a range of K(d) values from 10(-4) to ca. 3.2m(3)/kg. The large scatter
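
    For reference, the distribution coefficient reported in such batch sorption studies is typically computed from the initial and equilibrium solution concentrations; a minimal sketch with invented numbers follows.

    ```python
    def distribution_coefficient(c_init, c_eq, volume_m3, mass_kg):
        """K_d = (C_init - C_eq) * V / (C_eq * m), in m^3/kg."""
        return (c_init - c_eq) * volume_m3 / (c_eq * mass_kg)

    # Invented batch-test values: Cs concentrations in Bq/L (the unit cancels),
    # 50 mL of solution contacted with 0.5 g of crushed hydrated cement paste.
    kd = distribution_coefficient(c_init=1000.0, c_eq=250.0,
                                  volume_m3=50e-6, mass_kg=0.5e-3)
    print(f"K_d = {kd:.3e} m^3/kg")
    ```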

  3. Mixed discrete-continuum models: A summary of experiences in test interpretation and model prediction

    NASA Astrophysics Data System (ADS)

    Carrera, Jesus; Martinez-Landa, Lurdes

    A number of conceptual models have been proposed for simulating groundwater flow and solute transport in fractured systems. They span the range from continuum porous equivalents to discrete channel networks. The objective of this paper is to show the application of an intermediate approach (mixed discrete-continuum models) to three cases. The approach consists of identifying the dominant fractures (i.e., those carrying most of the flow) and modeling them explicitly as two-dimensional features embedded in a three-dimensional continuum representing the remaining fracture network. The method is based on the observation that most of the water flows through a few fractures, so that explicitly modeling them should help in properly accounting for a large portion of the total water flow. The applicability of the concept is tested in three cases. The first one refers to the Chalk River Block (Canada) in which a model calibrated against a long crosshole test successfully predicted the response to other tests performed in different fractures. The second case refers to hydraulic characterization of a large-scale (about 2 km) site at El Cabril (Spain). A model calibrated against long records (five years) of natural head fluctuations could be used to predict a one-month-long hydraulic test and heads variations after construction of a waste disposal site. The last case refers to hydraulic characterization performed at the Grimsel Test Site in the context of the Full-scale Engineered Barrier EXperiment (FEBEX). Extensive borehole and geologic mapping data were used to build a model that was calibrated against five crosshole tests. The resulting large-scale model predicted steady-state heads and inflows into the test tunnel. The conclusion is that, in all cases, the difficulties associated with the mixed discrete-continuum approach could be overcome and that the resulting models displayed some predictive capabilities.

  4. Uncertainty Analysis with Site Specific Groundwater Models: Experiences and Observations

    SciTech Connect

    Brewer, K.

    2003-07-15

    Groundwater flow and transport predictions are a major component of remedial action evaluations for contaminated groundwater at the Savannah River Site. Because all groundwater modeling results are subject to uncertainty from various causes; quantification of the level of uncertainty in the modeling predictions is beneficial to project decision makers. Complex site-specific models present formidable challenges for implementing an uncertainty analysis.

  5. Ultrasound-guided procedures in medical education: a fresh look at cadavers.

    PubMed

    Hoyer, Riley; Means, Russel; Robertson, Jeffrey; Rappaport, Douglas; Schmier, Charles; Jones, Travis; Stolz, Lori Ann; Kaplan, Stephen Jerome; Adamas-Rappaport, William Joaquin; Amini, Richard

    2016-04-01

    Demand for bedside ultrasound in medicine has created a need for earlier exposure to ultrasound education during the clinical years of undergraduate medical education. Although bedside ultrasound is often used for invasive medical procedures, there is no standardized educational model for procedural skills that can provide the learner a real-life simulated experience. The objective of our study was to describe a unique fresh cadaver preparation model and to determine the impact of a procedure-focused ultrasound training session. This study was a cross-sectional study at an urban academic medical center. A sixteen-item questionnaire was administered at the beginning and end of the session. Fifty-five third-year medical students participated in this 1-day event during their surgical clerkship. Students were trained to perform the following ultrasound-guided procedures: internal jugular vein cannulation, femoral vein cannulation, femoral artery cannulation, and pericardiocentesis. Preparation of the fresh cadaver is easily replicated and requires minor manipulation of cadaver vessels and the pericardial space. All of the medical students agreed that ultrasound could help increase their confidence in performing procedures in the future. Eighty percent (95% CI 70-91%) of students felt that there was a benefit of learning ultrasound-based anatomy in addition to traditional methods. Student confidence was self-rated on a five-point Likert scale and increased with statistical significance in all of the skills taught. The most dramatic increase was noted in central venous line placement, which improved from 1.95 (SD = 0.11) to 4.2 (SD = 0.09) (p < 0.001). The use of fresh cadavers for procedure-focused ultrasound education is a realistic method that improves the confidence of third-year medical students in performing complex but critical procedures.

  6. Dispersibility of crude oil in fresh water.

    PubMed

    Wrenn, B A; Virkus, A; Mukherjee, B; Venosa, A D

    2009-06-01

    The effects of surfactant composition on the ability of chemical dispersants to disperse crude oil in fresh water were investigated. The objective of this research was to determine whether effective fresh water dispersants can be designed in case this technology is ever considered for use in fresh water environments. Previous studies on the chemical dispersion of crude oil in fresh water neither identified the dispersants that were investigated nor described the chemistry of the surfactants used. This information is necessary for developing a more fundamental understanding of chemical dispersion of crude oil at low salinity. Therefore, we evaluated the relationship between surfactant chemistry and dispersion effectiveness. We found that dispersants can be designed to drive an oil slick into the freshwater column with the same efficiency as in salt water as long as the hydrophilic-lipophilic balance is optimum.

  7. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... freezing will not preclude use of the term “fresh frozen” to describe the food. “Quickly frozen” means frozen by a freezing system such as blast-freezing (sub-zero Fahrenheit temperature with fast moving...

  8. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... freezing will not preclude use of the term “fresh frozen” to describe the food. “Quickly frozen” means frozen by a freezing system such as blast-freezing (sub-zero Fahrenheit temperature with fast moving...

  9. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... freezing will not preclude use of the term “fresh frozen” to describe the food. “Quickly frozen” means frozen by a freezing system such as blast-freezing (sub-zero Fahrenheit temperature with fast moving...

  10. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... freezing will not preclude use of the term “fresh frozen” to describe the food. “Quickly frozen” means frozen by a freezing system such as blast-freezing (sub-zero Fahrenheit temperature with fast moving...

  11. Longitudinal Mode Aeroengine Combustion Instability: Model and Experiment

    NASA Technical Reports Server (NTRS)

    Cohen, J. M.; Hibshman, J. R.; Proscia, W.; Rosfjord, T. J.; Wake, B. E.; McVey, J. B.; Lovett, J.; Ondas, M.; DeLaat, J.; Breisacher, K.

    2001-01-01

    Combustion instabilities in gas turbine engines are most frequently encountered during the late phases of engine development, at which point they are difficult and expensive to fix. The ability to replicate an engine-traceable combustion instability in a laboratory-scale experiment offers the opportunity to economically diagnose the problem more completely (to determine the root cause), and to investigate solutions to the problem, such as active control. The development and validation of active combustion instability control requires that the causal dynamic processes be reproduced in experimental test facilities which can be used as a test bed for control system evaluation. This paper discusses the process through which a laboratory-scale experiment can be designed to replicate an instability observed in a developmental engine. The scaling process used physically-based analyses to preserve the relevant geometric, acoustic, and thermo-fluid features, ensuring that results achieved in the single-nozzle experiment will be scalable to the engine.

  12. Modeling and design for a new ionospheric modification experiment

    NASA Astrophysics Data System (ADS)

    Sales, Gary S.; Platt, Ian G.; Haines, D. Mark; Huang, Yuming; Heckscher, John L.

    1990-10-01

    Plans are now underway to carry out new high frequency oblique ionospheric modification experiments with increased radiated power using a new high gain antenna system and a 1 MW transmitter. The output of this large transmitting system will approach 90 dBW. An important part of this program is to determine the existence of a threshold for nonlinear effects by varying the transmitter output. For these experiments, a high frequency probe system, a low power oblique sounder, is introduced to be used along the same propagation path as the high power disturbing transmitter. The concept was first used by Soviet researchers to ensure that this diagnostic signal always passes through the modified region of the ionosphere. The HF probe system will use a low power (150 W) CW signal shifted by approximately 40 kHz from the frequency used by the high power system. The transmitter for the probe system will be at the same location as the high power transmitter, and multiple antennas will measure the vertical and azimuthal angle of arrival as well as the Doppler frequency shift of the arriving probe signal. The three-antenna array will be in an 'L' configuration to measure the phase differences between the antennas. At the midpath point a vertical sounder will provide the ionospheric information necessary for the frequency management of the experiment. Real-time processing will permit the site operators to evaluate the performance of the system and make adjustments during the experiment. A special ray tracing computer will be used to provide real-time frequencies and elevation beam steering during the experiment. A description of the system and the analysis used in the design of the experiment are presented.

  13. General circulation model sensitivity experiments with pole-centered supercontinents

    SciTech Connect

    Crowley, T.J.; Baum, S.K.; Kim, Kwang-Yul )

    1993-05-20

    The authors present model studies related to the general question of whether there could have been nearly ice-free climates in the past history of the Earth. Energy balance models and general circulation model calculations have addressed this question. In general this appears impossible, even when moving continents around, without postulating enhanced levels of CO2. Early work indicated that pole-centered continents could have snow-free summers, but later work with models that had better physics but poorer resolution seemed to contradict this conclusion. The authors apply the GENESIS (version 1.02) general circulation model to this problem. Their conclusion is that, with certain modifications to the application of this model, they could find pole-centered supercontinents which would be snow free in the summer.

  14. Experimental determination of circumferential properties of fresh carotid artery plaques.

    PubMed

    Lawlor, Michael G; O'Donnell, Michael R; O'Connell, Barry M; Walsh, Michael T

    2011-06-01

    Carotid endarterectomy (CEA) is currently accepted as the gold standard for interventional revascularisation of diseased arteries belonging to the carotid bifurcation. Despite the proven efficacy of CEA, great interest has been generated in carotid angioplasty and stenting (CAS) as an alternative to open surgical therapy. CAS is less invasive compared with CEA, and has the potential to successfully treat lesions close to the aortic arch or distal internal carotid artery (ICA). Following promising results from two recent trials (CREST; Carotid revascularisation endarterectomy versus stenting trial, and ICSS; International carotid stenting study) it is envisaged that there will be a greater uptake in carotid stenting, especially amongst the group who do not qualify for open surgical repair, thus creating pressure to develop computational models that describe a multitude of plaque models in the carotid arteries and their reaction to the deployment of such interventional devices. Pertinent analyses will require fresh human atherosclerotic plaque material characteristics for different disease types. This study analysed atherosclerotic plaque characteristics from 18 patients tested on site, post-surgical revascularisation through endarterectomy, with 4 tissue samples being excluded from tensile testing based on large width-length ratios. According to their mechanical behaviour, atherosclerotic plaques were separated into 3 grades of stiffness. Individual and group material coefficients were then generated analytically using the Yeoh strain energy function. The ultimate tensile strength (UTS) of each sample was also recorded, showing large variation across the 14 atherosclerotic samples tested. Experimental Green strains at rupture varied from 0.299 to 0.588 and the Cauchy stress observed in the experiments was between 0.131 and 0.779 MPa. It is expected that this data may be used in future design optimisation of next generation interventional medical devices for the
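
    Because the plaque coefficients above are reported through the Yeoh strain energy function, the sketch below shows how that function maps a uniaxial stretch to Cauchy stress for an incompressible material. The coefficient values are placeholders, not the coefficients fitted in the study.

      import numpy as np

      # Yeoh strain energy: W = C1*(I1-3) + C2*(I1-3)**2 + C3*(I1-3)**3 (incompressible).
      # For uniaxial tension with stretch lambda: I1 = lambda**2 + 2/lambda and
      # Cauchy stress = 2*(lambda**2 - 1/lambda) * dW/dI1.
      def yeoh_uniaxial_cauchy_stress(stretch, c1, c2, c3):
          i1 = stretch**2 + 2.0 / stretch
          dw_di1 = c1 + 2.0 * c2 * (i1 - 3.0) + 3.0 * c3 * (i1 - 3.0) ** 2
          return 2.0 * (stretch**2 - 1.0 / stretch) * dw_di1   # same units as the Ci (e.g. MPa)

      # Green strain E relates to stretch by lambda = sqrt(2E + 1); the study reports
      # rupture strains between 0.299 and 0.588.
      green_strain = np.linspace(0.0, 0.588, 5)
      stretch = np.sqrt(2.0 * green_strain + 1.0)
      stress = yeoh_uniaxial_cauchy_stress(stretch, c1=0.05, c2=0.1, c3=0.5)  # placeholder Ci in MPa
      for e, s in zip(green_strain, stress):
          print(f"E = {e:.3f}  ->  Cauchy stress = {s:.3f} MPa")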

  15. Hazardous materials in Fresh Kills landfill

    SciTech Connect

    Hirschhorn, J.S.

    1997-12-31

    No environmental monitoring and corrective action program can pinpoint the multiple locations of hazardous materials, or the total amount of them, in a large landfill. Yet the consequences of hazardous materials in MSW landfills are considerable, in terms of public health concerns, environmental damage, and cleanup costs. In this paper a rough estimate is made of how much hazardous material may have been disposed in Fresh Kills landfill in Staten Island, New York. The logic and methods could be used for other MSW landfills. Fresh Kills has frequently been described as the world's largest MSW landfill. While records of hazardous waste disposal at Fresh Kills over nearly 50 years of operation certainly do not exist, no reasonable person would argue with the conclusion that large quantities of hazardous waste surely have been disposed at Fresh Kills, both legally and illegally. This study found that at least 2 million tons of hazardous wastes and substances have been disposed at Fresh Kills since 1948. Major sources are household hazardous waste, commercial RCRA hazardous waste, incinerator ash, commercial non-RCRA hazardous waste, and governmental RCRA hazardous waste. Illegal disposal of hazardous waste surely has contributed even more. This is a sufficient amount to cause serious environmental contamination and releases, especially from a landfill without an engineered liner system. This figure is roughly 1% of the total amount of waste disposed in Fresh Kills since 1948, probably at least 200 million tons.

  16. Applying the Job Characteristics Model to the College Education Experience

    ERIC Educational Resources Information Center

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  17. The Chinese Experience: From Yellow Peril to Model Minority

    ERIC Educational Resources Information Center

    Wong, Legan

    1976-01-01

    Argues that for too long the experiences of the Chinese population in America have been either shrouded in misconception or totally ignored, and that this country must recognize and deal with the issues affecting this community. Learning about Chinese Americans will allow us to reexamine governmental policies towards racial and ethnic groups and…

  18. A Model for an Introductory Undergraduate Research Experience

    ERIC Educational Resources Information Center

    Canaria, Jeffrey A.; Schoffstall, Allen M.; Weiss, David J.; Henry, Renee M.; Braun-Sand, Sonja B.

    2012-01-01

    An introductory, multidisciplinary lecture-laboratory course linked with a summer research experience has been established to provide undergraduate biology and chemistry majors with the skills needed to be successful in the research laboratory. This three-credit hour course was focused on laboratory skills and was designed to reinforce and develop…

  19. Composing a model of outer space through virtual experiences

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2015-03-01

    This paper frames issues of trans-scalar perception in visualization, reflecting on the limits of the human senses, particularly those related to space, and describes planetarium shows, presentations, and exhibit experiences of spatial immersion and interaction in real time.

  20. Experiments and Modeling of Evaporating/Condensing Menisci

    NASA Technical Reports Server (NTRS)

    Plawsky, Joel; Wayner, Peter C., Jr.

    2013-01-01

    Discusses the Constrained Vapor Bubble (CVB) experiment and how it aims to achieve a better understanding of the physics of evaporation and condensation, and of how they affect cooling processes in microgravity, using a remotely controlled microscope and a small cooling device.

  1. The long-term fate of fresh and frozen orthotopic bone allografts in genetically defined rats.

    PubMed

    Bos, G D; Goldberg, V M; Gordon, N H; Dollinger, B M; Zika, J M; Powell, A E; Heiple, K G

    1985-01-01

    Fresh and frozen orthotopic iliac crest bone grafts in rats were studied histologically for determination of the long-term effects of histocompatibility matching and the freezing process on orthotopic bone graft incorporation. Grafts exchanged between groups of inbred rats, syngeneic or differing with respect to major or minor histocompatibility loci, were studied histologically at 20, 30, 40, 50, and 150 days after bone transplantation. A numerical histologic scoring system was developed and used by three observers for evaluation of coded hematoxylin and eosin sections. All frozen graft groups had the same fate regardless of histocompatibility relations between donors and recipients, and all grafts were inferior to fresh syngeneic grafts. Both fresh allograft groups received similar scores and initially at 20 and 30 days had scores similar to those of the fresh syngeneic groups. In the later intervals, however, the fresh allografts were inferior to the fresh syngeneic grafts and similar to the frozen groups. This is consistent with an older model describing two distinct phases of osteogenesis. In the long term, frozen syngeneic and fresh and frozen allografts across major and minor histocompatibility barriers were comparable, but all were significantly inferior to fresh syngeneic bone grafts.

  2. Teaching through Modeling: Four Schools' Experiences in Sustainability Education

    ERIC Educational Resources Information Center

    Higgs, Amy Lyons; McMillan, Victoria M.

    2006-01-01

    In this article, the authors examine how 4 innovative secondary schools model sustainable practices to their students. During school visits, the authors conducted interviews, observed daily life, and reviewed school documents. They found that modeling is a valuable approach to sustainability education, promoting both learning about sustainability…

  3. Working Towards Explicit Modelling: Experiences of a New Teacher Educator

    ERIC Educational Resources Information Center

    White, Elizabeth

    2011-01-01

    As a new teacher educator of beginner teachers on the Graduate Teacher Programme in a large School of Education in a UK university, I have reflected on how I have been able to develop the effectiveness of modelling good professional practice to student-teachers. In this paper I will present ways in which I have made modelling more explicit, how…

  4. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
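
    To make the regression framework concrete, here is a minimal weighted nonlinear least-squares sketch. The forward model is a stand-in function, not a ground water simulator, and the weighting simply follows the idea above of relating weights to expected data errors.

      import numpy as np
      from scipy.optimize import least_squares

      def forward_model(params, x):
          k, s = params                      # placeholder parameters (not hydraulic properties)
          return k * np.exp(-s * x)          # placeholder response

      def weighted_residuals(params, x, obs, obs_std):
          return (forward_model(params, x) - obs) / obs_std   # weight = 1 / expected data error

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 10.0, 25)
      obs_std = 0.2 * np.ones_like(x)
      obs = forward_model([5.0, 0.3], x) + rng.normal(0.0, obs_std)   # synthetic observations

      fit = least_squares(weighted_residuals, x0=[1.0, 1.0], args=(x, obs, obs_std))
      print("estimated parameters:", fit.x)   # should be close to [5.0, 0.3]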

  5. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  6. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  7. Design and analysis of numerical experiments. [applicable to fully nonlinear, global, equivalent-barotropic model

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Sacks, Jerome; Chang, Yue-Fang

    1993-01-01

    Methods for the design and analysis of numerical experiments that are especially useful and efficient in multidimensional parameter spaces are presented. The analysis method, which is similar to kriging in the spatial analysis literature, fits a statistical model to the output of the numerical model. The method is applied to a fully nonlinear, global, equivalent-barotropic dynamical model. The statistical model also provides estimates for the uncertainty of predicted numerical model output, which can provide guidance on where in the parameter space to conduct further experiments, if necessary. The method can provide significant improvements in the efficiency with which numerical sensitivity experiments are conducted.
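
    A hedged sketch of the kriging-like emulation idea follows: fit a Gaussian process to the output of an expensive numerical model sampled at a few points in parameter space, then use the emulator's predictive uncertainty to suggest where further runs would be most informative. The "numerical model" is a stand-in function, not the equivalent-barotropic model used in the paper.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def numerical_model(params):                     # placeholder for an expensive simulation
          return np.sin(3.0 * params[:, 0]) * np.exp(-params[:, 1])

      rng = np.random.default_rng(1)
      design = rng.uniform(0.0, 1.0, size=(12, 2))     # small design in a 2-D parameter space
      response = numerical_model(design)

      emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                          normalize_y=True)
      emulator.fit(design, response)

      candidates = rng.uniform(0.0, 1.0, size=(500, 2))
      mean, std = emulator.predict(candidates, return_std=True)
      print("largest predictive uncertainty at:", candidates[np.argmax(std)])  # next run to perform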

  8. Fresh Water Content Variability in the Arctic Ocean

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa; Proshutinsky, Andrey

    2003-01-01

    Arctic Ocean model simulations have revealed that the Arctic Ocean has a basin wide oscillation with cyclonic and anticyclonic circulation anomalies (Arctic Ocean Oscillation; AOO) which has a prominent decadal variability. This study explores how the simulated AOO affects the Arctic Ocean stratification and its relationship to the sea ice cover variations. The simulation uses the Princeton Ocean Model coupled to sea ice. The surface forcing is based on NCEP-NCAR Reanalysis and its climatology, of which the latter is used to force the model spin-up phase. Our focus is to investigate the competition between ocean dynamics and ice formation/melt on the Arctic basin-wide fresh water balance. We find that changes in the Atlantic water inflow can explain almost all of the simulated fresh water anomalies in the main Arctic basin. The Atlantic water inflow anomalies are an essential part of AOO, which is the wind driven barotropic response to the Arctic Oscillation (AO). The baroclinic response to AO, such as Ekman pumping in the Beaufort Gyre, and ice melt/freeze anomalies in response to AO are less significant for the whole Arctic fresh water balance.

  9. CELSS experiment model and design concept of gas recycle system

    NASA Technical Reports Server (NTRS)

    Nitta, K.; Oguchi, M.; Kanda, S.

    1986-01-01

    In order to prolong the duration of manned missions around the Earth and to expand the region of human presence from the Earth to other planets, such as for a Lunar Base or a manned Mars flight mission, the controlled ecological life support system (CELSS) becomes an essential factor of the future technology to be developed through utilization of the space station. The preliminary system engineering and integration efforts regarding CELSS have been carried out by the Japanese CELSS concept study group to clarify the feasibility of hardware development for space station experiments and to define the time-phased mission sets after FY 1992. The results of these studies are briefly summarized, and the design and utilization methods of a Gas Recycle System for CELSS experiments are discussed.

  10. QSAR Models for Regulatory Purposes: Experiences and Perspectives

    NASA Astrophysics Data System (ADS)

    Benfenati, Emilio

    Quantitative structure-activity relationships (QSARs) are more and more discussed and used in several situations. Their application to legislative purposes stimulated a large debate in Europe on the recent legislation on industrial chemicals. To correctly assess the suitability of QSAR, the discussion has to be framed according to the target, since different targets modify how the model is evaluated and used. The application of QSAR for legislative purposes requires taking into account the use of the values obtained through the QSAR models. False negatives should be minimized. The model should be robust, verified, and validated. Reproducibility and transparency are other important characteristics.

  11. Looking beyond Lewis Structures: A General Chemistry Molecular Modeling Experiment Focusing on Physical Properties and Geometry

    ERIC Educational Resources Information Center

    Linenberger, Kimberly J.; Cole, Renee S.; Sarkar, Somnath

    2011-01-01

    We present a guided-inquiry experiment using Spartan Student Version, ready to be adapted and implemented into a general chemistry laboratory course. The experiment provides students an experience with Spartan Molecular Modeling software while discovering the relationships between the structure and properties of molecules. Topics discussed within…

  12. Quality and shelf-life prediction for retail fresh hake (Merluccius merluccius).

    PubMed

    García, Míriam R; Vilas, Carlos; Herrera, Juan R; Bernárdez, Marta; Balsa-Canto, Eva; Alonso, Antonio A

    2015-09-01

    Fish quality has a direct impact on market price and its accurate assessment and prediction are of main importance to set prices, increase competitiveness, resolve conflicts of interest and prevent food wastage due to conservative product shelf-life estimations. In this work we present a general methodology to derive predictive models of fish freshness under different storage conditions. The approach makes use of the theory of optimal experimental design, to maximize data information and in this way reduce the number of experiments. The resulting growth model for specific spoilage microorganisms in hake (Merluccius merluccius) is sufficiently informative to estimate quality sensory indexes under time-varying temperature profiles. In addition it incorporates quantitative information of the uncertainty induced by fish variability. The model has been employed to test the effect of factors such as fishing gear or evisceration, on fish spoilage and therefore fish quality. Results show no significant differences in terms of microbial growth between hake fished by long-line or bottom-set nets, within the implicit uncertainty of the model. Similar conclusions can be drawn for gutted and un-gutted hake along the experiment horizon. In addition, whenever there is the possibility to carry out the necessary experiments, this approach is sufficiently general to be used in other fish species and under different stress variables.
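
    As a schematic of the kind of predictive growth modelling described above, the sketch below integrates logistic growth of a spoilage population with a square-root (Ratkowsky-type) temperature dependence over a time-varying storage temperature profile. It is a generic illustration with made-up parameters, not the calibrated hake model from the paper.

      import numpy as np

      def growth_rate(temp_c, b=0.025, t_min=-5.0):
          return (b * max(temp_c - t_min, 0.0)) ** 2            # specific growth rate, 1/h

      def simulate(log10_n0, log10_nmax, times_h, temps_c):
          log_n = [log10_n0]
          for i in range(1, len(times_h)):
              dt = times_h[i] - times_h[i - 1]
              mu = growth_rate(temps_c[i - 1]) / np.log(10.0)   # convert to log10 units
              n = log_n[-1]
              log_n.append(n + dt * mu * (1.0 - 10.0 ** (n - log10_nmax)))  # logistic step
          return np.array(log_n)

      hours = np.arange(0.0, 121.0, 1.0)
      temps = np.where(hours < 48.0, 2.0, 8.0)                  # cold-chain break after 48 h
      counts = simulate(log10_n0=3.0, log10_nmax=9.0, times_h=hours, temps_c=temps)
      print(f"log10 CFU/g after 5 days: {counts[-1]:.2f}")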

  13. Establishing the Global Fresh Water Sensor Web

    NASA Technical Reports Server (NTRS)

    Hildebrand, Peter H.

    2005-01-01

    This paper presents an approach to measuring the major components of the water cycle from space using the concept of a sensor-web of satellites that are linked to a data assimilation system. This topic is of increasing importance, due to the need for fresh water to support the growing human population, coupled with climate variability and change. The net effect is that water is an increasingly valuable commodity. The distribution of fresh water is highly uneven over the Earth, with both strong latitudinal distributions due to the atmospheric general circulation, and even larger variability due to landforms and the interaction of land with global weather systems. The annual global fresh water budget is largely a balance between evaporation, atmospheric transport, precipitation and runoff. Although the available volume of fresh water on land is small, the short residence time of water in these fresh water reservoirs causes the flux of fresh water - through evaporation, atmospheric transport, precipitation and runoff - to be large. With a total atmospheric water store of approx. 13 x 10^12 m^3, and an annual flux of approx. 460 x 10^12 m^3/yr, the mean atmospheric residence time of water is approx. 10 days. River residence times are similar, biological residence times are approx. 1 week, soil moisture is approx. 2 months, and lakes and aquifers are highly variable, extending from weeks to years. The hypothesized potential for redistribution and acceleration of the global hydrological cycle is therefore of concern. This hypothesized speed-up - thought to be associated with global warming - adds to the pressure placed upon water resources by the burgeoning human population, the variability of weather and climate, and concerns about anthropogenic impacts on global fresh water availability.
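
    The quoted 10-day figure follows directly from the store and flux given above (residence time = storage / flux); a short check:

      # Mean atmospheric residence time = storage / flux, using the numbers quoted above.
      atmospheric_store_m3 = 13e12           # ~13 x 10^12 m^3 of water held in the atmosphere
      annual_flux_m3_per_yr = 460e12         # ~460 x 10^12 m^3/yr cycled through it
      residence_time_days = atmospheric_store_m3 / annual_flux_m3_per_yr * 365.25
      print(f"mean atmospheric residence time ~ {residence_time_days:.1f} days")   # ~10 days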

  14. "Comments on Slavin": Through the Looking Glass--Experiments, Quasi-Experiments, and the Medical Model

    ERIC Educational Resources Information Center

    Sloane, Finbarr

    2008-01-01

    Slavin (2008) has called for changing the criteria used for the inclusion of basic research in national research synthesis clearinghouses. The author of this article examines a number of the assumptions made by Slavin, provides critique with alternatives, and asks what it means to fully implement the medical model in educational settings.…

  15. Experience and Cultural Models Matter: Placing Firm Limits on Childhood Anthropocentrism

    ERIC Educational Resources Information Center

    Waxman, Sandra; Medin, Douglas

    2007-01-01

    This paper builds on Hatano and Inagaki's pioneering work on the role of experience and cultural models in children's biological reasoning. We use a category-based induction task to consider how experience and cultural models shape rural and urban children's patterns of biological reasoning. We discuss the implications of these findings for…

  16. Mathematical modelling of microtumour infiltration based on in vitro experiments.

    PubMed

    Luján, Emmanuel; Guerra, Liliana N; Soba, Alejandro; Visacovsky, Nicolás; Gandía, Daniel; Calvo, Juan C; Suárez, Cecilia

    2016-08-01

    The present mathematical models of microtumours consider, in general, volumetric growth and spherical tumour invasion shapes. Nevertheless in many cases, such as in gliomas, a need for more accurate delineation of tumour infiltration areas in a patient-specific manner has arisen. The objective of this study was to build a mathematical model able to describe in a case-specific way as well as to predict in a probabilistic way the growth and the real invasion pattern of multicellular tumour spheroids (in vitro model of an avascular microtumour) immersed in a collagen matrix. The two-dimensional theoretical model was represented by a reaction-convection-diffusion equation that considers logistic proliferation, volumetric growth, a rim with proliferative cells at the tumour surface and invasion with diffusive and convective components. Population parameter values of the model were extracted from the experimental dataset and a shape function that describes the invasion area was derived from each experimental case by image processing. New possible and aleatory shape functions were generated by data mining and Monte Carlo tools by means of a satellite EGARCH model, which were fed with all the shape functions of the dataset. Then the main model is used in two different ways: to reproduce the growth and invasion of a given experimental tumour in a case-specific manner when fed with the corresponding shape function (descriptive simulations) or to generate new possible tumour cases that respond to the general population pattern when fed with an aleatory-generated shape function (predictive simulations). Both types of simulations are in good agreement with empirical data, as it was revealed by area quantification and Bland-Altman analysis. This kind of experimental-numerical interaction has wide application potential in designing new strategies able to predict as much as possible the invasive behaviour of a tumour based on its particular characteristics and microenvironment.
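
    To illustrate the reaction-convection-diffusion structure described above, here is a minimal one-dimensional sketch with logistic proliferation and diffusive plus convective invasion. It is a generic explicit finite-difference illustration with arbitrary parameters, not the paper's two-dimensional, case-specific model.

      import numpy as np

      nx, dx = 200, 0.05                    # dimensionless spatial grid
      D, v, r = 1.0e-3, 5.0e-3, 1.0         # diffusion, convection and logistic growth coefficients
      dt = 0.2                              # small enough for explicit stability (D*dt/dx**2 < 0.5)

      u = np.zeros(nx)
      u[nx // 2 - 5 : nx // 2 + 5] = 1.0    # initial "spheroid" of normalised cell density

      for _ in range(int(20.0 / dt)):       # integrate to t = 20
          diff = D * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
          conv = -v * (u - np.roll(u, 1)) / dx          # upwind convection for v > 0
          u = u + dt * (diff + conv + r * u * (1.0 - u))
          u[0] = u[-1] = 0.0                            # absorbing boundaries

      invaded = np.nonzero(u > 0.01)[0]
      print(f"invaded span: {(invaded[-1] - invaded[0]) * dx:.2f} length units")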

  18. Wave chaotic experiments and models for complicated wave scattering systems

    NASA Astrophysics Data System (ADS)

    Yeh, Jen-Hao

    Wave scattering in a complicated environment is a common challenge in many engineering fields because the complexity makes exact solutions impractical to find, and the sensitivity to detail in the short-wavelength limit makes a numerical solution relevant only to a specific realization. On the other hand, wave chaos offers a statistical approach to understand the properties of complicated wave systems through the use of random matrix theory (RMT). A bridge between the theory and practical applications is the random coupling model (RCM) which connects the universal features predicted by RMT and the specific details of a real wave scattering system. The RCM gives a complete model for many wave properties and is beneficial for many physical and engineering fields that involve complicated wave scattering systems. One major contribution of this dissertation is that I have utilized three microwave systems to thoroughly test the RCM in complicated wave systems with varied loss, including a cryogenic system with a superconducting microwave cavity for testing the extremely-low-loss case. I have also experimentally tested an extension of the RCM that includes short-orbit corrections. Another novel result is development of a complete model based on the RCM for the fading phenomenon extensively studied in the wireless communication fields. This fading model encompasses the traditional fading models as its high-loss limit case and further predicts the fading statistics in the low-loss limit. This model provides the first physical explanation for the fitting parameters used in fading models. I have also applied the RCM to additional experimental wave properties of a complicated wave system, such as the impedance matrix, the scattering matrix, the variance ratio, and the thermopower. These predictions are significant for nuclear scattering, atomic physics, quantum transport in condensed matter systems, electromagnetics, acoustics, geophysics, etc.

  19. Modeling of Spherical Torus Plasmas for Liquid Lithium Wall Experiments

    SciTech Connect

    R. Kaita; S. Jardin; B. Jones; C. Kessel; R. Majeski; J. Spaleta; R. Woolley; L. Zakharo; B. Nelson; M. Ulrickson

    2002-01-29

    Liquid metal walls have the potential to solve first-wall problems for fusion reactors, such as heat load and erosion of dry walls, neutron damage and activation, and tritium inventory and breeding. In the near term, such walls can serve as the basis for schemes to stabilize magnetohydrodynamic (MHD) modes. Furthermore, the low recycling characteristics of lithium walls can be used for particle control. Liquid lithium experiments have already begun in the Current Drive eXperiment-Upgrade (CDX-U). Plasmas limited with a toroidally localized limiter have been investigated, and experiments with a fully toroidal lithium limiter are in progress. A liquid surface module (LSM) has been proposed for the National Spherical Torus Experiment (NSTX). In this larger ST, plasma currents are in excess of 1 MA and a typical discharge radius is about 68 cm. The primary motivation for the LSM is particle control, and options for mounting it on the horizontal midplane or in the divertor region are under consideration. A key consideration is the magnitude of the eddy currents at the location of a liquid lithium surface. During plasma start-up and disruptions, the force produced by such currents interacting with the magnetic field can push a conducting liquid off the surface behind it. The Tokamak Simulation Code (TSC) has been used to estimate the magnitude of this effect. This program is a two-dimensional, time-dependent, free-boundary simulation code that solves the MHD equations for an axisymmetric toroidal plasma. From calculations that match actual ST equilibria, the eddy current densities can be determined at the locations of the liquid lithium. Initial results have shown that the effects could be significant, and ways of explicitly treating toroidally local structures are under investigation.

  20. Cardiovascular model for the simulation of exercise, lower body negative pressure, and tilt experiments

    NASA Technical Reports Server (NTRS)

    Croston, R. C.; Fitzjerrell, D. G.

    1974-01-01

    A mathematical model and digital computer simulation of the human cardiovascular system and its controls have been developed to simulate pulsatile dynamic responses to the cardiovascular experiments of the Skylab missions and to selected physiological stresses of manned space flight. Specific model simulations of the bicycle ergometry, lower body negative pressure, and tilt experiments have been developed and verified for 1-g response by comparison with available experimental data. The zero-g simulations of two Skylab experiments are discussed.

  1. Integrated predictive modelling simulations of burning plasma experiment designs

    NASA Astrophysics Data System (ADS)

    Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H.

    2003-11-01

    Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied.

  2. Interim Service ISDN Satellite (ISIS) network model for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.; Hager, E. Paul

    1991-01-01

    The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Network Model for Advanced Satellite Designs and Experiments describes a model suitable for discrete event simulations. A top-down model design uses the Advanced Communications Technology Satellite (ACTS) as its basis. The ISDN modeling abstractions are added to permit the determination of performance for the NASA Satellite Communications Research (SCAR) Program.

  3. Model Experiments with Slot Antenna Arrays for Imaging

    NASA Technical Reports Server (NTRS)

    Johansson, J. F.; Yngvesson, K. S.; Kollberg, E. L.

    1985-01-01

    A prototype imaging system at 31 GHz was developed, which employs a two-dimensional (5x5) array of tapered slot antennas, and integrated detector or mixer elements, in the focal plane of a prime-focus paraboloid reflector, with an f/D=1. The system can be scaled to shorter millimeter waves and submillimeter waves. The array spacing corresponds to a beam spacing of approximately one Rayleigh distance and a two-point resolution experiment showed that two point-sources at the Rayleigh distance are well resolved.

  4. Antimicrobial properties of natural substances in irradiated fresh poultry

    NASA Astrophysics Data System (ADS)

    Mahrour, A.; Lacroix, M.; Nketsa-Tabiri, J.; Calderon, N.; Gagnon, M.

    1998-06-01

    This study was undertaken to determine if a combined treatment (marinating in natural plant extracts, or vacuum) with irradiation could have a synergistic effect, in order to reduce the dose required for complete elimination of Salmonella on fresh poultry. The effect of these combined treatments on shelf-life extension was also evaluated. The fresh chicken legs were irradiated at 0, 3 and 5 kGy. The poultry underwent microbial analysis (mesophilic counts and Salmonella detection). For each treatment, the total microbial count decreased with increasing irradiation dose. The marinating treatment has a synergistic effect with the irradiation treatment, reducing the total microbial count and controlling proliferation during storage at 4°C. Irradiation of fresh chicken pieces with a dose of 3 kGy appears to be able to extend the microbial shelf-life by a factor of 2. When the chicken is marinated and irradiated at 3 kGy, or when irradiated at 5 kGy without marinating, the microbial shelf-life is extended by a factor of 7 to 8. No Salmonella was found at any point during the experiment in the chicken stored in air or marinated. However, Salmonella was found in samples irradiated at 5 kGy under vacuum, in unirradiated samples, and in samples irradiated at 3 kGy in air and under vacuum.

  5. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
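
    A compact way to see the errors-in-variables idea in practice is orthogonal distance regression, sketched below with scipy.odr. The model and error levels are illustrative; this is not the maximum likelihood algorithm developed in the report, which additionally propagates uncertainty in experiment-wide variables.

      import numpy as np
      from scipy import odr

      def model(beta, x):
          return beta[0] * np.exp(beta[1] * x)             # placeholder model function

      rng = np.random.default_rng(2)
      x_true = np.linspace(0.0, 2.0, 30)
      y_true = model([2.0, 0.7], x_true)
      x_obs = x_true + rng.normal(0.0, 0.05, x_true.size)  # measurement error in x
      y_obs = y_true + rng.normal(0.0, 0.10, y_true.size)  # measurement error in y

      data = odr.RealData(x_obs, y_obs, sx=0.05, sy=0.10)  # supply both error levels
      fit = odr.ODR(data, odr.Model(model), beta0=[1.0, 1.0]).run()
      print("estimates:", fit.beta, "standard errors:", fit.sd_beta)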

  6. Modeling and analysis of pinhole occulter experiment: Initial study phase

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.

    1985-01-01

    The feasibility of using a generic simulation, TREETOPS, to simulate the Pinhole/Occulter Facility (P/OF) to be tested on the space shuttle was demonstrated. The baseline control system was used to determine the pointing performance of the P/OF. The task included modeling the structure as a three-body problem (shuttle, instrument pointing system, P/OF), including the flexibility of the 32 meter P/OF boom. Modeling of sensors, actuators, and control algorithms was also required. Detailed mathematical models for the structure, sensors, and actuators are presented, as well as the control algorithm and corresponding design procedure. Closed-loop performance using this controller and computer listings for the simulator are also given.

  7. Period adding cascades: experiment and modeling in air bubbling.

    PubMed

    Pereira, Felipe Augusto Cardoso; Colli, Eduardo; Sartorelli, José Carlos

    2012-03-01

    Period adding cascades have been observed experimentally/numerically in the dynamics of neurons and pancreatic cells, lasers, electric circuits, chemical reactions, oceanic internal waves, and also in air bubbling. We show that the period adding cascades appearing in bubbling from a nozzle submerged in a viscous liquid can be reproduced by a simple model, based on some hydrodynamical principles, dealing with the time evolution of two variables, bubble position and pressure of the air chamber, through a system of differential equations with a rule of detachment based on force balance. The model further reduces to an iterated one-dimensional map giving the pressures at the detachments, where the time between bubbles comes out as an observable of the dynamics. The model not only shows good agreement with experimental data, but is also able to predict the influence of the main parameters involved, like the length of the hose connecting the air supplier with the needle, the needle radius and the needle length.

  8. GPU acceleration experience with RRTMG long wave radiation model

    NASA Astrophysics Data System (ADS)

    Price, Erik; Mielikainen, Jarno; Huang, Bormin; Huang, HungLung A.; Lee, Tsengdar

    2013-10-01

    An atmospheric radiative transfer model calculates the radiative transfer of electromagnetic radiation through a planetary atmosphere. Both shortwave and longwave radiance parameterizations in an atmospheric model calculate radiation fluxes and heating rates in the earth-atmosphere system. One radiative transfer model is the rapid radiative transfer model (RRTM), which calculates longwave and shortwave atmospheric radiative fluxes and heating rates. The longwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, is based on the single-column reference code, RRTM. RRTMG is a validated, correlated k-distribution band model for the calculation of longwave and shortwave atmospheric radiative fluxes and heating rates. The focus of this paper is on the RRTMG longwave (RRTMG_LW) model. In order to improve computational efficiency, RRTMG_LW incorporates several modifications compared to RRTM. In RRTM_LW there are 16 g points in each of the spectral bands, for a total of 256 g points. In RRTMG_LW, the number of g points in each spectral band varies from 2 to 16 depending on the absorption in each band; in total, there are 140 g points. RRTMG_LW employs a computationally efficient correlated-k method for radiative transfer calculations over its 16 spectral bands. The radiative effects of all significant atmospheric gases are included in RRTMG_LW. Active gas absorbers include H2O, O3, CO2, CH4, N2O, O2 and four types of halocarbons: CFC-11, CFC-12, CFC-22, and CCl4. RRTMG_LW also treats the absorption and scattering from liquid and ice clouds and aerosols. For cloudy-sky radiative transfer, a maximum-random cloud overlap scheme is used. Small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, can be represented using a statistical technique in RRTMG_LW. Due to its accuracy, RRTMG_LW has been implemented operationally
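
    The correlated-k quadrature that RRTMG_LW relies on can be illustrated in a few lines: within a band, transmittance over an absorber path is a weighted sum over g points rather than a line-by-line integral. The k values and weights below are invented for illustration and are not RRTMG coefficients.

      import numpy as np

      def band_transmittance(k_g, w_g, absorber_path):
          """T_band = sum_g w_g * exp(-k_g * u), with the weights w_g summing to 1."""
          return float(np.sum(np.asarray(w_g) * np.exp(-np.asarray(k_g) * absorber_path)))

      k_g = [0.01, 0.1, 1.0, 10.0]          # absorption coefficients at 4 g points (arbitrary units)
      w_g = [0.4, 0.3, 0.2, 0.1]            # quadrature weights over the k-distribution
      for u in (0.1, 1.0, 10.0):            # absorber amounts
          print(f"u = {u:5.1f}  ->  band transmittance = {band_transmittance(k_g, w_g, u):.3f}")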

  9. The fence experiment — a first evaluation of shelter models

    NASA Astrophysics Data System (ADS)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide; Angelou, Nikolas; Mann, Jakob

    2016-09-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direction interval are in better agreement with the observations than those that correspond to the simulation at the center of the direction interval, particularly in the far-wake region, for six vertical levels up to two fence heights. Generally, the CFD results are in better agreement with the observations than those from two engineering-like obstacle models but the latter two follow well the behavior of the observations in the far-wake region.

  10. Compressive behavior of a turtle's shell: experiment, modeling, and simulation.

    PubMed

    Damiens, R; Rhee, H; Hwang, Y; Park, S J; Hammi, Y; Lim, H; Horstemeyer, M F

    2012-02-01

    The turtle's shell acts as a protective armor for the animal. By analyzing a turtle shell via finite element analysis, one can obtain the strength and stiffness attributes to help design man-made armor. As such, finite element analysis was performed on a Terrapene carolina box turtle shell. Experimental data from compression tests were generated to provide insight into the scute through-thickness behavior of the turtle shell. Three regimes can be classified in terms of constitutive modeling: linear elastic, perfectly inelastic, and densification regions, where hardening occurs. For each regime, we developed a model that comprises elasticity and densification theory for porous materials and obtained all the material parameters by correlating the model with experimental data. The different constitutive responses arise as the deformation proceeded through three distinctive layers of the turtle shell carapace. Overall, the phenomenological stress-strain behavior is similar to that of metallic foams.
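
    A schematic of the three-regime compressive response named above (linear elastic, plateau, densification) is sketched below. The functional forms and parameter values are illustrative only and are not the constitutive model fitted in the paper.

      import numpy as np

      def foam_like_stress(strain, E=200.0, yield_strain=0.02, dens_strain=0.35, H=3000.0):
          sigma_y = E * yield_strain
          if strain <= yield_strain:                       # linear elastic regime
              return E * strain
          if strain <= dens_strain:                        # plateau ("perfectly inelastic") regime
              return sigma_y
          return sigma_y + H * (strain - dens_strain) ** 2 # densification with hardening

      for eps in np.linspace(0.0, 0.5, 6):
          print(f"strain = {eps:.2f}  ->  stress = {foam_like_stress(eps):7.2f} MPa")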

  11. RF compensation of double Langmuir probes: modelling and experiment

    NASA Astrophysics Data System (ADS)

    Caneses, Juan F.; Blackwell, Boyd

    2015-06-01

    An analytical model describing the physics of driven floating probes has been developed to model the RF compensation of the double Langmuir probe (DLP) technique. The model is based on the theory of the RF self-bias effect as described in Braithwaite’s work [1], which we extend to include time-resolved behaviour. The main contribution of this work is to allow quantitative determination of the intrinsic RF compensation of a DLP in a given RF discharge. Using these ideas, we discuss the design of RF compensated DLPs. Experimental validation for these ideas is presented and the effects of RF rectification on DLP measurements are discussed. Experimental results using RF rectified DLPs indicate that (1) whenever sheath thickness effects are important, overestimation of the ion density is proportional to the level of RF rectification, and suggest that (2) the electron temperature measurement is only weakly affected.

  12. Nonlinear time reversal of classical waves: experiment and model.

    PubMed

    Frazier, Matthew; Taddese, Biniyam; Xiao, Bo; Antonsen, Thomas; Ott, Edward; Anlage, Steven M

    2013-12-01

    We consider time reversal of electromagnetic waves in a closed, wave-chaotic system containing a discrete, passive, harmonic-generating nonlinearity. An experimental system is constructed as a time-reversal mirror, in which excitations generated by the nonlinearity are gathered, time-reversed, transmitted, and directed exclusively to the location of the nonlinearity. Here we show that such nonlinear objects can be purely passive (as opposed to the active nonlinearities used in previous work), and we develop a higher data rate exclusive communication system based on nonlinear time reversal. A model of the experimental system is developed, using a star-graph network of transmission lines, with one of the lines terminated by a model diode. The model simulates time reversal of linear and nonlinear signals, demonstrates features seen in the experimental system, and supports our interpretation of the experimental results.

  13. Inhomogeneous Gain Saturation in EDF: Experiment and Modeling

    NASA Astrophysics Data System (ADS)

    Peretti, Romain; Jacquier, Bernard; Boivin, David; Burov, Ekaterina; Jurdyc, Anne-Marie

    2011-05-01

    Erbium-Doped Fiber Amplifiers can present holes in the spectral gain in Wavelength Division Multiplexing operation. The origin of this inhomogeneous saturation behavior is still a subject of controversy. In this paper we present both an experimental method and a gain model. Our experimental method allows a first measurement of the homogeneous linewidth of the 1.5 μm erbium emission by gain spectral hole burning, consistent with other measurements in the literature, and the model explains the differences observed in the literature between GSHB and other measurement methods.

  14. Tapped granular column dynamics: simulations, experiments and modeling

    NASA Astrophysics Data System (ADS)

    Rosato, A. D.; Zuo, L.; Blackmore, D.; Wu, H.; Horntrop, D. J.; Parker, D. J.; Windows-Yule, C.

    2016-07-01

    This paper communicates the results of a synergistic investigation that initiates our long-term research goal of developing a continuum model capable of predicting a variety of granular flows. We consider an ostensibly simple system consisting of a column of inelastic spheres subjected to discrete taps in the form of half sine wave pulses of amplitude a/d and period τ. A three-pronged approach is used, consisting of discrete element simulations based on linear loading-unloading contacts, experimental validation, and preliminary comparisons with our continuum model in the form of an integro-partial differential equation.

  15. Dose-response model for teratological experiments involving quantal responses

    SciTech Connect

    Rai, K.; Van Ryzin, J.

    1985-03-01

    This paper introduces a dose-response model for teratological quantal response data in which the probability of response for an offspring from a female at a given dose varies with the litter size. The maximum likelihood estimators for the parameters of the model are obtained by a nonlinear iterative algorithm. Two methods of low-dose extrapolation are presented, one based on the litter size distribution and the other a conservative method. The resulting procedures are then applied to a teratological data set from the literature.
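
    As a rough illustration of this kind of litter-size-dependent quantal dose-response fit (not the authors' exact likelihood), a logistic model whose intercept shifts with litter size can be maximized numerically; the data values and parameter names below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # Hypothetical teratology data: dose, litter size, number responding per litter.
    dose   = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 4.0, 4.0])
    litter = np.array([10,  12,   9,  11,  10,   8,  12,   9])
    resp   = np.array([ 1,   0,   2,   3,   4,   3,   9,   7])

    def neg_log_lik(theta):
        """Binomial log-likelihood for a logistic dose-response whose
        intercept depends linearly on litter size (illustrative only)."""
        a, b, c = theta
        p = expit(a + b * dose + c * litter)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(resp * np.log(p) + (litter - resp) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[-2.0, 0.5, 0.0], method="Nelder-Mead")
    print(fit.x)   # maximum likelihood estimates of (a, b, c)
    ```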

  16. Fresh groundwater resources in a large sand replenishment

    NASA Astrophysics Data System (ADS)

    Huizer, Sebastian; Oude Essink, Gualbert H. P.; Bierkens, Marc F. P.

    2016-08-01

    The anticipation of sea-level rise and increases in extreme weather conditions has led to the initiation of an innovative coastal management project called the Sand Engine. In this pilot project a large volume of sand (21.5 million m3) - also called sand replenishment or nourishment - was placed on the Dutch coast. The intention is that the sand is redistributed by wind, current, and tide, reinforcing local coastal defence structures and leading to a unique, dynamic environment. In this study we investigated the potential effect of the long-term morphological evolution of the large sand replenishment and climate change on fresh groundwater resources. The potential effects on the local groundwater system were quantified with a calibrated three-dimensional (3-D) groundwater model, in which both variable-density groundwater flow and salt transport were simulated. Model simulations showed that the long-term morphological evolution of the Sand Engine results in a substantial growth of fresh groundwater resources, in all adopted climate change scenarios. Thus, the application of a local sand replenishment could provide coastal areas the opportunity to combine coastal protection with an increase of the local fresh groundwater availability.
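
    The study itself relies on a calibrated 3-D variable-density flow and salt-transport model, but the order of magnitude of a fresh groundwater lens can be sketched with the classical Ghyben-Herzberg approximation, in which the fresh/salt interface sits roughly 40 times deeper below sea level than the water table stands above it; the densities and water-table mound below are assumed values, not results from this study.

    ```python
    # Ghyben-Herzberg approximation for the depth of the fresh/salt interface.
    rho_fresh = 1000.0   # density of fresh groundwater, kg/m^3 (assumed)
    rho_salt  = 1025.0   # density of seawater, kg/m^3 (assumed)

    def interface_depth(head_above_msl_m):
        """Depth of the fresh/salt interface below mean sea level (m)."""
        return rho_fresh / (rho_salt - rho_fresh) * head_above_msl_m

    print(interface_depth(0.5))  # a 0.5 m water-table mound supports ~20 m of fresh water
    ```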

  17. Fresh fruit: microstructure, texture, and quality

    NASA Astrophysics Data System (ADS)

    Wood, Delilah F.; Imam, Syed H.; Orts, William J.; Glenn, Gregory M.

    2009-05-01

    Fresh-cut produce has a huge following in today's supermarkets. The trend follows the need to decrease preparation time as well as the desire to follow current health guidelines for consumption of more whole "heart-healthy" foods. Additionally, consumers are able to enjoy a variety of fresh produce regardless of the local season because produce is now shipped world-wide. However, most fruits decompose rapidly once their natural packaging has been disrupted by cutting. In addition, some intact fruits have a limited shelf-life which, in turn, limits shipping and storage. Therefore, a basic understanding of how produce microstructure relates to texture and how microstructure changes as quality deteriorates is needed to ensure the best quality in both the fresh-cut and fresh produce markets. Similarities between different types of produce include desiccation intolerance, which produces wrinkling of the outer layers, cracking of the cuticle, and increased susceptibility to pathogen invasion. Specific examples of fresh produce and their corresponding ripening, storage, and degradation issues are shown in scanning electron micrographs.

  18. Preservation technologies for fresh meat - a review.

    PubMed

    Zhou, G H; Xu, X L; Liu, Y

    2010-09-01

    Fresh meat is a highly perishable product due to its biological composition. Many interrelated factors influence the shelf life and freshness of meat such as holding temperature, atmospheric oxygen (O(2)), endogenous enzymes, moisture, light and most importantly, micro-organisms. With the increased demand for high quality, convenience, safety, fresh appearance and an extended shelf life in fresh meat products, alternative non-thermal preservation technologies such as high hydrostatic pressure, superchilling, natural biopreservatives and active packaging have been proposed and investigated. Whilst some of these technologies are efficient at inactivating the micro-organisms most commonly related to food-borne diseases, they are not effective against spores. To increase their efficacy against vegetative cells, a combination of several preservation technologies under the so-called hurdle concept has also been investigated. The objective of this review is to describe current methods and developing technologies for preserving fresh meat. The benefits of some new technologies and their industrial limitations are presented and discussed. PMID:20605688

  20. Prediction of tomato freshness using infrared thermal imaging and transient step heating

    NASA Astrophysics Data System (ADS)

    Xie, Jing; Hsieh, Sheng-Jen; Tan, Zuojun; Wang, Hongjin; Zhang, Jian

    2016-05-01

    Tomatoes are the world's 8th most valuable agricultural product, valued at $58 billion annually. Nondestructive testing and inspection of tomatoes is challenging and multi-faceted. Optical imaging is used for quality grading and ripeness. Spectral and hyperspectral imaging are used to detect surface defects and cuticle cracks. Infrared thermography has been used to distinguish between different stages of maturity. However, determining the freshness of tomatoes is still an open problem. For this research, infrared thermography was used for freshness prediction. Infrared images were captured at a rate of 1 frame per second during heating (0 to 40 seconds) and cooling (0 to 160 seconds). The absolute temperatures of the acquired images were plotted. Regions with higher temperature differences between fresh and less fresh (rotten within three days) tomatoes of approximately uniform size and shape were used as the input nodes in a three-layer artificial neural network (ANN) model. Two-thirds of the data were used for training and one-third was used for testing. Results suggest that by using infrared imaging data as input to an ANN model, tomato freshness can be predicted with 90% accuracy. T-tests and F-tests were conducted based on absolute temperature over time. The results suggest that there is a mean temperature difference between fresh and less fresh tomatoes (α = 0.05). However, there is no statistical difference in terms of temperature variation, which suggests a water concentration difference.
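
    A minimal sketch of the classification step described above, assuming synthetic stand-in features in place of the per-region thermal measurements, might look as follows; the feature values, hidden-layer size, and resulting accuracy are illustrative, not the study's.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Stand-in thermal features: mean temperature of a few image regions per tomato.
    X_fresh = rng.normal(loc=24.0, scale=0.4, size=(40, 6))
    X_stale = rng.normal(loc=25.0, scale=0.4, size=(40, 6))
    X = np.vstack([X_fresh, X_stale])
    y = np.array([1] * 40 + [0] * 40)            # 1 = fresh, 0 = less fresh

    # Two-thirds for training, one-third for testing, as in the study.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```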

  1. Tutoring and Multi-Agent Systems: Modeling from Experiences

    ERIC Educational Resources Information Center

    Bennane, Abdellah

    2010-01-01

    Tutoring systems are becoming complex and offer a variety of pedagogical software such as course modules, exercises, simulators, and systems online or offline, for single or multiple users. This complexity motivates new forms of and approaches to design and modelling. Studies and research in this field introduce emergent concepts that allow the…

  2. Partnering the University Field Experience Research Model with Action Research.

    ERIC Educational Resources Information Center

    Schnorr, Donna; Painter, Diane D.

    This paper presents a collaborative action research partnership model that involved participation by graduate school of education preservice students, school and university teachers, and administrators. An elementary teacher-research group investigated what would happen when fourth graders worked in teams to research and produce a multimedia…

  3. Observing system experiments with an ionospheric electrodynamics model

    NASA Astrophysics Data System (ADS)

    Durazo, J.; Kostelich, E.; Mahalov, A.; Tang, W.

    2016-04-01

    We assess the performance of an ensemble Kalman filter for data assimilation and forecasting of ion density in a model of the ionosphere given noisy observations of varying sparsity. The domain of the numerical model is a mid-latitude ionosphere between 80 and 440 km. This domain includes the D-E layers and the peak in the F layer in the ionosphere. The model simulates the time evolution of an ion density field and the coupled electrostatic potential as charge-neutral winds from gravity waves propagate up from the stratosphere. Forecasts are generated for an ensemble of initial conditions, and synthetic observations, which are generated at random locations in the model domain, are assimilated into the ensemble at time intervals corresponding to about a half-period of the gravity wave. The data assimilation scheme, called the local ensemble transform Kalman filter (LETKF), incorporates observations within a fixed radius of each grid point to compute a unique linear combination of the forecast ensembles at each grid point. The collection of updated grid points forms the updated initial conditions (analysis ensemble) for the next forecast. Even when the observation density is spatially sparse, accurate analyses of the ion density still can be obtained, but the results depend on the size of the local region used. The LETKF is robust to large levels of Gaussian noise in the observations. Our results suggest that the LETKF merits consideration as a data assimilation scheme for space weather forecasting.
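
    A compact sketch of the ensemble transform update at the heart of the LETKF (here in its global form; the local scheme applies the same algebra at each grid point using only nearby observations) is given below. The ensemble, observation operator, and noise levels are toy values, not those of the ionospheric model.

    ```python
    import numpy as np

    def etkf_update(Xb, y_obs, H, R):
        """One ensemble transform Kalman filter analysis step (global version)."""
        n, k = Xb.shape
        xb_mean = Xb.mean(axis=1, keepdims=True)
        Xb_pert = Xb - xb_mean                        # background perturbations
        Yb_pert = H @ Xb_pert                         # perturbations in observation space
        innov = y_obs - (H @ xb_mean).ravel()

        C = Yb_pert.T @ np.linalg.inv(R)              # k x m
        Pa_tilde = np.linalg.inv((k - 1) * np.eye(k) + C @ Yb_pert)
        w_mean = Pa_tilde @ C @ innov                 # weights for the analysis mean
        # symmetric square root of (k-1) * Pa_tilde gives the perturbation weights
        evals, evecs = np.linalg.eigh((k - 1) * Pa_tilde)
        W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
        return xb_mean + Xb_pert @ (W + w_mean[:, None])   # analysis ensemble (n x k)

    # Toy usage: 20-member ensemble for a 5-variable state, 3 noisy observations.
    rng = np.random.default_rng(1)
    Xb = rng.normal(size=(5, 20))
    H = np.eye(3, 5)
    R = 0.1 * np.eye(3)
    y = rng.normal(size=3)
    Xa = etkf_update(Xb, y, H, R)
    ```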

  4. The South African Experience: Beyond the CIDA Model

    ERIC Educational Resources Information Center

    Bruton, John M.

    2008-01-01

    The Community and Individual Development Association (CIDA) City Campus is presented by Heaton as an innovative African alternative to traditional business education. However, he considers the model in isolation from the unique educational and economic circumstances of postapartheid South Africa. As a response, this article goes beyond the CIDA…

  5. Modelling Drug Administration Regimes for Asthma: A Romanian Experience

    ERIC Educational Resources Information Center

    Andras, Szilard; Szilagyi, Judit

    2010-01-01

    In this article, we present a modelling activity, which was a part of the project DQME II (Developing Quality in Mathematics Education, for more details see http://www.dqime.uni-dortmund.de) and some general observations regarding the maladjustments and rational errors arising in such type of activities.

  6. Social Modeling Influences on Pain Experience and Behaviour.

    ERIC Educational Resources Information Center

    Craig, Kenneth D.

    The impact of exposure to social models displaying variably tolerant pain behaviour on observers' expressions of pain is examined. Findings indicate substantial effects on verbal reports of pain, avoidance behaviour, psychophysiological indices, power function parameters, and sensory decision theory indices. Discussion centers on how social models…

  7. Constitutive Modeling of Liver Tissue: Experiment and Theory

    PubMed Central

    Gao, Zhan; Lister, Kevin; Desai, Jaydev P.

    2009-01-01

    Realistic surgical simulation requires incorporation of the mechanical properties of soft tissue in mathematical models. During actual deformation of soft tissue in surgical intervention, the tissue is subject to tension, compression, and shear. Therefore, characterization and modeling of soft tissue in all three deformation modes are necessary. In this paper we applied two types of pure shear test, an unconfined compression test, and a uniaxial tension test to characterize porcine liver tissue. A digital image correlation technique was used to accurately measure the tissue deformation field. Due to gravity and its effect on the soft tissue, a maximum stretching band was observed in the relative strain field of samples undergoing the tension and pure shear tests. The zero-strain state was identified according to the position of this maximum stretching band. Two new constitutive models based on combined exponential/logarithmic and Ogden strain energies were proposed. The models are capable of representing the observed non-linear stress–strain relation of liver tissue over the full range of tension and compression, as well as the general response in pure shear. PMID:19806457

  8. Model-Driven Design: Systematically Building Integrated Blended Learning Experiences

    ERIC Educational Resources Information Center

    Laster, Stephen

    2010-01-01

    Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…

  9. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m-1 and 0-238 N s m-1 through the viscous and electromagnetic components, respectively.
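
    The adaptability described above can be illustrated with a standard lumped-parameter estimate for a voice-coil-type electromagnetic damper (a hedged sketch, not the authors' magnetic-circuit or FEM model): the damping coefficient scales as the square of the transduction constant divided by the total circuit resistance, so varying the external load resistance tunes the damping. The numerical values below are assumptions.

    ```python
    # Lumped-parameter estimate: c_em = (transduction constant)^2 / (R_coil + R_load).
    def electromagnetic_damping(trans_const, r_coil, r_load):
        """Electromagnetic damping coefficient in N s/m.

        trans_const : effective flux-linkage gradient B*l*N in V s/m (assumed)
        r_coil, r_load : coil and external load resistances in ohms (assumed)
        """
        return trans_const**2 / (r_coil + r_load)

    for r_load in (0.5, 2.0, 10.0, float("inf")):     # open circuit -> no EM damping
        print(r_load, electromagnetic_damping(trans_const=15.0, r_coil=1.0, r_load=r_load))
    ```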

  10. Peer Experiences and Social Self-Perceptions: A Sequential Model.

    ERIC Educational Resources Information Center

    Boivin, Michel; Hymel, Shelley

    1997-01-01

    Evaluated a social process model describing how aggression and withdrawal lead to negative social self-perception. Subjects were 793 French Canadian elementary school children. Found that withdrawal behavior uniquely predicted social self-perceptions. Both negative peer status and peer victimization successively mediated the impact of social…

  11. Early Childhood Educators' Experience of an Alternative Physical Education Model

    ERIC Educational Resources Information Center

    Tsangaridou, Niki; Genethliou, Nicholas

    2016-01-01

    Alternative instructional and curricular models are regarded as more comprehensive and suitable approaches to providing quality physical education (Kulinna 2008; Lund and Tannehill 2010; McKenzie and Kahan 2008; Metzler 2011; Quay and Peters 2008). The purpose of this study was to describe the impact of the Early Steps Physical Education…

  12. Integrated modeling applications for tokamak experiments with OMFIT

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Lao, L. L.; Izacard, O.; Ren, Q.; Park, J. M.; Candy, J.; Wang, Z.; Luna, C. J.; Izzo, V. A.; Grierson, B. A.; Snyder, P. B.; Holland, C.; Penna, J.; Lu, G.; Raum, P.; McCubbin, A.; Orlov, D. M.; Belli, E. A.; Ferraro, N. M.; Prater, R.; Osborne, T. H.; Turnbull, A. D.; Staebler, G. M.

    2015-08-01

    One Modeling Framework for Integrated Tasks (OMFIT) is a comprehensive integrated modeling framework which has been developed to enable physics codes to interact in complicated workflows, and to support scientists at all stages of the modeling cycle. The OMFIT development follows a unique bottom-up approach, where the framework design and capabilities organically evolve to support progressive integration of the components that are required to accomplish physics goals of increasing complexity. OMFIT provides a workflow for easily generating full kinetic equilibrium reconstructions that are constrained by magnetic and motional Stark effect measurements, and kinetic profile information that includes fast-ion pressure modeled by a transport code. It was found that magnetic measurements can be used to quantify the amount of anomalous fast-ion diffusion that is present in DIII-D discharges, and provide an estimate that is consistent with what would be needed for transport simulations to match the measured neutron rates. OMFIT was used to streamline edge-stability analyses, and to evaluate the effect of resonant magnetic perturbations (RMPs) on pedestal stability, which was found to be consistent with the experimental observations. The development of a five-dimensional numerical fluid model for estimating the effects of the interaction between magnetohydrodynamic (MHD) modes and microturbulence, and its systematic verification against analytic models, was also supported by the framework. OMFIT was used for optimizing an innovative high-harmonic fast wave system proposed for DIII-D. For a parallel refractive index $n_\parallel > 3$, the conditions for strong electron Landau damping were found to be independent of the launched $n_\parallel$ and poloidal angle. OMFIT has been the platform of choice for developing a neural-network based approach to efficiently perform a non-linear multivariate regression of local transport fluxes as a function of local dimensionless parameters

  13. Critical Issues in Maintaining Fresh and Fresh-cut Produce Safety

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An increasing number of food-borne illness outbreaks have been associated with the consumption of fresh and fresh-cut produce contaminated with human pathogens. Produce grows in the natural environment and undergoes much handling on its journey from farm to table, making it vulnerable to human path...

  14. What is a fresh scent in perfumery? Perceptual freshness is correlated with substantivity.

    PubMed

    Zarzo, Manuel

    2013-01-01

    Perfumes are manufactured by mixing odorous materials with different volatilities. The parameter that measures the lasting property of a material when applied on the skin is called substantivity or tenacity. It is well known by perfumers that citrus and green notes are perceived as fresh and they tend to evaporate quickly, while odors most dissimilar to 'fresh' (e.g., oriental, powdery, erogenic and animalic scents) are tenacious. However, studies aimed at quantifying the relationship between fresh odor quality and substantivity have not received much attention. In this work, perceptual olfactory ratings on a fresh scale, estimated in a previous study, were compared with substantivity parameters and antierogenic ratings from the literature. It was found that the correlation between fresh odor character and odorant substantivity is quite strong (r = -0.85). 'Fresh' is sometimes interpreted in perfumery as 'cool' and the opposite of 'warm'. This association suggests that odor freshness might be somehow related to temperature. Assuming that odor perception space was shaped throughout evolution in temperate climates, results reported here are consistent with the hypothesis that 'fresh' evokes scents typically encountered in the cool season, while 'warm' would be evoked by odors found in nature during summer. This hypothesis is rather simplistic but it may provide a new insight to better understand the perceptual space of scents.

  15. Consumer's Fresh Produce Food Safety Practices: Outcomes of a Fresh Produce Safety Education Program

    ERIC Educational Resources Information Center

    Scott, Amanda R.; Pope, Paul E.; Thompson, Britta M.

    2009-01-01

    The Centers for Disease Control and Prevention estimate that there are 76 million cases of foodborne disease annually. Foodborne disease is usually associated with beef, poultry, and seafood. However, there is an increasing number of foodborne disease cases related to fresh produce. Consumers may not associate fresh produce with foodborne disease…

  16. Decontamination of fresh and fresh-cut fruits and vegetables with cold plasma technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Contamination of fresh and fresh-cut fruits and vegetables by foodborne pathogens has prompted research into novel interventions. Cold plasma is a nonthermal food processing technology which uses energetic, reactive gases to inactivate contaminating microbes. This flexible sanitizing method uses ele...

  17. A fresh fruit and vegetable program improves high school students' consumption of fresh produce

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low fruit and vegetable intake may be associated with overweight. The United States Department of Agriculture implemented the Fresh Fruit and Vegetable Program in 2006-2007. One Houston-area high school was selected and received funding to provide baskets of fresh fruits and vegetables daily for eac...

  18. Little Earth Experiment: An instrument to model planetary cores.

    PubMed

    Aujogue, Kélig; Pothérat, Alban; Bates, Ian; Debray, François; Sreenivasan, Binod

    2016-08-01

    In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10 T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core. PMID:27587138

  19. Little Earth Experiment: An instrument to model planetary cores

    NASA Astrophysics Data System (ADS)

    Aujogue, Kélig; Pothérat, Alban; Bates, Ian; Debray, François; Sreenivasan, Binod

    2016-08-01

    In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10 T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core.

  20. A portable device for rapid nondestructive detection of fresh meat quality

    NASA Astrophysics Data System (ADS)

    Lin, Wan; Peng, Yankun

    2014-05-01

    Quality attributes of fresh meat influence nutritional value and consumers' purchasing decisions. To meet the demand of inspection departments for a portable device, a rapid and nondestructive detection device for fresh meat quality based on an ARM (Advanced RISC Machines) processor and VIS/NIR technology was designed. The working principle, hardware composition, software system, and functional tests are introduced. The hardware system consists of the ARM processing unit, light source unit, detection probe unit, spectral data acquisition unit, LCD (Liquid Crystal Display) touch screen display unit, power unit, and cooling unit. A Linux operating system and a quality-parameter acquisition and processing application were developed. The system integrates spectral signal collection, storage, display, and processing in a unit weighing 3.5 kg. Forty beef samples were used in an experiment to validate its stability and reliability. The results indicated that the prediction models developed with PLSR, using SNV as the pre-processing method, performed well, with validation-set correlation coefficients and root mean square errors of 0.90 and 1.56 for L*, 0.95 and 1.74 for a*, 0.94 and 0.59 for b*, 0.88 and 0.13 for pH, 0.79 and 12.46 for tenderness, and 0.89 and 0.91 for water content, respectively. The experimental results show that this device can be a useful tool for detecting the quality of fresh meat.
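
    A minimal sketch of the modeling step described above, combining SNV pre-processing with PLS regression, is shown below; the spectra and reference values are random stand-ins, and the component count is an assumption rather than the device's actual calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

    # Stand-in data: 40 VIS/NIR reflectance spectra and a measured L* value per sample.
    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(40, 256))
    l_star = rng.normal(loc=40.0, scale=3.0, size=40)

    X = snv(spectra)
    X_tr, X_te, y_tr, y_te = train_test_split(X, l_star, test_size=0.25, random_state=0)
    pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
    pred = pls.predict(X_te).ravel()
    print("validation RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
    ```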

  1. Experience with mixed MPI/threaded programming models

    SciTech Connect

    May, J M; Supinski, B R

    1999-04-01

    A shared memory cluster is a parallel computer that consists of multiple nodes connected through an interconnection network. Each node is a symmetric multiprocessor (SMP) unit in which multiple CPUs share uniform access to a pool of main memory. The SGI Origin 2000, Compaq (formerly DEC) AlphaServer Cluster, and recent IBM RS6000/SP systems are all variants of this architecture. The SGI Origin 2000 has hardware that allows tasks running on any processor to access any main memory location in the system, so all the memory in the nodes forms a single shared address space. This is called a nonuniform memory access (NUMA) architecture because it gives programs a single shared address space, but the access time to different memory locations varies. In the IBM and Compaq systems, each node's memory forms a separate address space, and tasks communicate between nodes by passing messages or using other explicit mechanisms. Many large parallel codes use standard MPI calls to exchange data between tasks in a parallel job, and this is a natural programming model for distributed memory architectures. On a shared memory architecture, message passing is unnecessary if the code is written to use multithreading: threads run in parallel on different processors, and they exchange data simply by reading and writing shared memory locations. Shared memory clusters combine architectural elements of both distributed memory and shared memory systems, and they support both message passing and multithreaded programming models. Application developers are now trying to determine which programming model is best for these machines. This paper presents initial results of a study aimed at answering that question. We interviewed developers representing nine scientific code groups at Lawrence Livermore National Laboratory (LLNL). All of these groups are attempting to optimize their codes to run on shared memory clusters, specifically the IBM and DEC platforms at LLNL. This paper will focus on ease
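
    The contrast between the two programming models can be sketched in Python with mpi4py and the standard threading module (the LLNL codes themselves are C/C++/Fortran with MPI and OpenMP, so this is only an illustration of the two data-exchange styles, run for example with `mpiexec -n 2 python hybrid_sketch.py`).

    ```python
    from mpi4py import MPI          # message-passing model
    import threading                # shared-memory (threaded) model
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Message passing: each task owns its data and exchanges it explicitly.
    if size > 1:
        if rank == 0:
            comm.send(list(range(5)), dest=1, tag=0)
        elif rank == 1:
            print("rank 1 received", comm.recv(source=0, tag=0))

    # Shared memory: threads within one task read and write the same array
    # directly, so exchange is implicit and only synchronization is needed.
    shared = np.zeros(8)

    def worker(lo, hi):
        shared[lo:hi] += 1.0        # each thread updates its own slice

    threads = [threading.Thread(target=worker, args=(4 * i, 4 * i + 4)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if rank == 0:
        print("shared array after thread updates:", shared)
    ```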

  2. Three-Dimensional Numerical Modeling of Magnetohydrodynamic Augmented Propulsion Experiment

    NASA Technical Reports Server (NTRS)

    Turner, M. W.; Hawk, C. W.; Litchford, R. J.

    2009-01-01

    Over the past several years, NASA Marshall Space Flight Center has engaged in the design and development of an experimental research facility to investigate the use of diagonalized crossed-field magnetohydrodynamic (MHD) accelerators as a possible thrust augmentation device for thermal propulsion systems. In support of this effort, a three-dimensional numerical MHD model has been developed for the purpose of analyzing and optimizing accelerator performance and to aid in understanding critical underlying physical processes and nonideal effects. This Technical Memorandum fully summarizes model development efforts and presents the results of pretest performance optimization analyses. These results indicate that the MHD accelerator should utilize a 45° diagonalization angle with the applied current evenly distributed over the first five inlet electrode pairs. When powered at 100 A, this configuration is expected to yield a 50% global efficiency with an 80% increase in axial velocity and a 50% increase in centerline total pressure.

  3. Model experiments showing simultaneous development of folds and transcurrent faults

    NASA Astrophysics Data System (ADS)

    Dubey, Ashok Kumar

    1980-05-01

    Simultaneous development of noncylindrical folds and transcurrent fractures has been studied using model techniques. A plasticine model was compressed in one direction and an initial formation of folds was followed by the initiation of conjugate sets of transcurrent fractures. It was recorded that with progressive deformation the length of each fracture and the displacement along it increase steadily and the rate of displacement varies at different stages of deformation. Individual fold geometries vary along their hinge lines and these geometrical variations appear to be due to interference of folds with the transcurrent fractures. These interference effects also change the amount of rotation of fractures. Fold structures are different on either side of the fault plane. A natural example from the Bude area, England, shows similar geometrical features. The method of determining fault displacement by comparing the positions of fold hinge lines on either side of a fault is discussed in the light of the above results.

  4. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  5. Experiments with a universe for molecular modelling of biological processes.

    PubMed

    Jedruch, W T; Barski, M

    1990-01-01

    A computer simulation program and results of preliminary simulations of an abstract two-dimensional universe are presented, in which biological and physical processes can be modelled at the molecular level. Two types of permanent elements (atoms), called 0 and 1, occupy squares of the universe. Atoms sharing a common square form a particle, with properties determined by its component atoms. Atoms, particles, and complexes of particles move and collide according to rules like those of classical mechanics. At a higher level of organization, the string of atoms in a particle is viewed as a program, whose execution can affect the space around the particle. The computer program (written in Turbo Pascal) can simulate the evolution of the universe starting from any given initial configuration of the particles. Three examples of simulations, showing the development of ordered spatial structures from initial sets of randomly distributed particles, illustrate the universe's potential in modelling various molecular processes.

  6. Integrated modeling of cryogenic layered highfoot experiments at the NIF

    NASA Astrophysics Data System (ADS)

    Kritcher, A. L.; Hinkel, D. E.; Callahan, D. A.; Hurricane, O. A.; Clark, D.; Casey, D. T.; Dewald, E. L.; Dittrich, T. R.; Döppner, T.; Barrios Garcia, M. A.; Haan, S.; Berzak Hopkins, L. F.; Jones, O.; Landen, O.; Ma, T.; Meezan, N.; Milovich, J. L.; Pak, A. E.; Park, H.-S.; Patel, P. K.; Ralph, J.; Robey, H. F.; Salmonson, J. D.; Sepke, S.; Spears, B.; Springer, P. T.; Thomas, C. A.; Town, R.; Celliers, P. M.; Edwards, M. J.

    2016-05-01

    Integrated radiation hydrodynamic modeling in two dimensions, including the hohlraum and capsule, of layered cryogenic HighFoot Deuterium-Tritium (DT) implosions on the NIF successfully predicts important data trends. The model consists of a semi-empirical fit to low mode asymmetries and radiation drive multipliers to match shock trajectories, one dimensional inflight radiography, and time of peak neutron production. Application of the model across the HighFoot shot series, over a range of powers, laser energies, laser wavelengths, and target thicknesses, predicts the neutron yield to within a factor of two for most shots. The Deuterium-Deuterium ion temperatures and the DT down-scattered ratios (the ratio of 10-12 MeV to 13-15 MeV neutrons) roughly agree with data at peak fuel velocities <340 km/s and deviate at higher peak velocities, potentially due to flows and neutron scattering differences stemming from 3D or capsule support tent effects. These calculations show a significant amount of alpha heating, 1-2.5× for shots where the experimental yield is within a factor of two, which has been achieved by increasing the fuel kinetic energy. This level of alpha heating is consistent with a dynamic hot-spot model that is matched to experimental data and with the level determined from scaling of the yield with peak fuel velocity. These calculations also show that low mode asymmetries become more important as the fuel velocity is increased, and that improving these low mode asymmetries can result in an increase in the yield by a factor of several.

  7. Flooding Experiments and Modeling for Improved Reactor Safety

    SciTech Connect

    Solmos, M.; Hogan, K. J.; Vierow, K.

    2008-09-14

    Countercurrent two-phase flow and “flooding” phenomena in light water reactor systems are being investigated experimentally and analytically to improve reactor safety of current and future reactors. The aspects that will be better clarified are the effects of condensation and tube inclination on flooding in large diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.

  8. Flow interaction experiment. Volume 1: Aerothermal modeling, phase 2

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.

    1993-01-01

    An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations consisting of primary jets only, annular jets only, and a combination of annular and primary jets are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closures, namely k-epsilon model, algebraic second moment (ASM), and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires a far fewer number of grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, pressure-strain correlation model, and the inlet epsilon profile and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to

  9. Flow interaction experiment. Volume 2: Aerothermal modeling, phase 2

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.

    1993-01-01

    An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations consisting of primary jets only, annular jets only, and a combination of annular and primary jets are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closures, namely k-epsilon model, algebraic second moment (ASM), and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires a far fewer number of grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, pressure-strain correlation model, and the inlet epsilon profile and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to

  10. Cryogenic modelling of the ISOCAM experiment from test results

    NASA Astrophysics Data System (ADS)

    de Sa, L.; Collaudin, B.

    An overview is presented of cryogenic test results and fundamental measurements on ISOCAM, an infrared camera to be mounted aboard the ISO (Infrared Space Observatory) satellite. Thermal conductances and the heat distribution in the rotor and stator of a cryogenic stepper motor are evaluated by model fitting. Phenomena such as the 'chopped' operation of the motors and the two-time temperature rise after continuous motor operation are studied.

  11. Modeling of Nova indirect drive Rayleigh--Taylor experiments

    SciTech Connect

    Weber, S.V.; Remington, B.A.; Haan, S.W.; Wilson, B.G.; Nash, J.K.

    1994-11-01

    The growth due to the Rayleigh-Taylor (RT) instability of single-wavelength surface perturbations on planar foils of brominated CH [CH(Br)] and fluorosilicone (FS) was measured. The foils were accelerated by x-ray ablation with temporally shaped drive pulses. A range of initial amplitudes ($a_0$) and wavelengths ($\lambda$) have been used. This paper focuses upon foils with small $a_0/\lambda$, which exhibit substantial growth in the linear regime, and are most sensitive to the calculated growth rate. The CH(Br) foils exhibit slower RT perturbation growth because opacity differences result in a larger ablation velocity and a longer density scale length than for FS. Tabulated opacities from detailed atomic models, OPAL [Astrophys. J. 397, 717 (1992)] and super transition array (STA) [Phys. Rev. A 40, 3183 (1989)], were employed. Unlike previous simulations which employed the average-atom (XSN) opacity treatment, parameter adjustments to fit experimental data no longer appear necessary. Nonlocal thermodynamic equilibrium (NLTE) effects do not appear to be important. Other variables which may affect the modeling, such as changes of the equation of state and radiation drive spectrum, were also examined. The current calculational model, which incorporates physically justified choices for these calculational ingredients, agrees with the Nova single-wavelength RT perturbation growth data.
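
    The ablative stabilization invoked above (a larger ablation velocity and longer density scale length slowing the growth) is often summarized by a Takabe/Lindl-type fit for the linear growth rate; the form and constants below are the commonly quoted ones, not necessarily those used in these simulations.

    ```latex
    % Commonly used fit for the ablative Rayleigh-Taylor linear growth rate:
    \gamma(k) \;=\; \alpha\,\sqrt{\frac{k g}{1 + k L_\rho}} \;-\; \beta\, k\, v_a
    % k: perturbation wavenumber, g: foil acceleration, L_rho: density scale length
    % at the ablation front, v_a: ablation velocity; alpha ~ 0.9 and beta ~ 3 are
    % fit constants. Larger v_a and L_rho (as for the CH(Br) foils) reduce gamma.
    ```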

  12. Model Deformation Measurement Technique NASA Langley HSR Experiences

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Wahls, R. A.; Owens, L. R.; Goad, W. K.

    1999-01-01

    Model deformation measurement techniques have been investigated and developed at NASA's Langley Research Center. The current technique is based upon a single video camera photogrammetric determination of two dimensional coordinates of wing targets with a fixed (and known) third dimensional coordinate, namely the spanwise location. Variations of this technique have been used to measure wing twist and bending at a few selected spanwise locations near the wing tip on HSR models at the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel. Automated measurements have been made at both the Transonic Dynamics Tunnel and at Unitary Plan Wind Tunnel during the past year. Automated measurements were made for the first time at the NTF during the recently completed HSR Reference H Test 78 in early 1996. A major problem in automation for the NTF has been the need for high contrast targets which do not exceed the stringent surface finish requirements. The advantages and limitations (including targeting) of the technique as well as the rationale for selection of this particular technique are discussed. Wing twist examples from the HSR Reference H model are presented to illustrate the run-to-run and test-to-test repeatability of the technique in air mode at the NTF. Examples of wing twist in cryogenic nitrogen mode at the NTF are also presented.

  13. A model for successful research partnerships: a New Brunswick experience.

    PubMed

    Tamlyn, Karen; Creelman, Helen; Fisher, Garfield

    2002-01-01

    The purpose of this paper is to present an overview of a partnership model used to conduct a research study entitled "Needs of patients with cancer and their family members in New Brunswick Health Region 3 (NBHR3)" (Tamlyn-Leaman, Creelman, & Fisher, 1997). This partial replication study carried out by the three authors between 1995 and 1997 was a needs assessment, adapted with permission from previous work by Fitch, Vachon, Greenberg, Saltmarche, and Franssen (1993). In order to conduct a comprehensive needs assessment with limited resources, a partnership between academic, public, and private sectors was established. An illustration of this partnership is presented in the model entitled "A Client-Centred Partnership Model." The operations of this partnership, including the strengths, the perceived benefits, lessons learned by each partner, the barriers, and the process for conflict resolution, are described. A summary of the cancer care initiatives undertaken by NBHR3, which were influenced directly or indirectly by the recommendations from this study, is included. PMID:12271916

  14. Mathematical Modeling of Eukaryotic Cell Migration: Insights Beyond Experiments

    PubMed Central

    Danuser, Gaudenz; Allard, Jun; Mogilner, Alex

    2014-01-01

    A migrating cell is a molecular machine made of tens of thousands of short-lived and interacting parts. Understanding migration means understanding the self-organization of these parts into a system of functional units. This task is one of tackling complexity: First, the system integrates numerous chemical and mechanical component processes. Second, these processes are connected in feedback interactions and over a large range of spatial and temporal scales. Third, many processes are stochastic, which leads to heterogeneous migration behaviors. Early on in the research of cell migration it became evident that this complexity exceeds human intuition. Thus, the cell migration community has led the charge to build mathematical models that could integrate the diverse experimental observations and measurements in consistent frameworks, first in conceptual and more recently in molecularly explicit models. The main goal of this review is to sift through a series of important conceptual and explicit mathematical models of cell migration and to evaluate their contribution to the field in their ability to integrate critical experimental data. PMID:23909278

  15. Anisotropic magnetoresistivity in structured elastomer composites: modelling and experiments.

    PubMed

    Mietta, José Luis; Tamborenea, Pablo I; Martin Negri, R

    2016-08-14

    A constitutive model for the anisotropic magnetoresistivity in structured elastomer composites (SECs) is proposed. The SECs considered here are oriented pseudo-chains of conductive-magnetic inorganic materials inside an elastomer organic matrix. The pseudo-chains are formed by fillers which are simultaneously conductive and magnetic dispersed in the polymer before curing or solvent evaporation. The SEC is then prepared in the presence of a uniform magnetic field, referred to as Hcuring. This procedure generates the pseudo-chains, which are preferentially aligned in the direction of Hcuring. Electrical conduction is present in that direction only. The constitutive model for the magnetoresistance considers the magnetic pressure, Pmag, induced on the pseudo-chains by an external magnetic field, H, applied in the direction of the pseudo-chains. The relative changes in conductivity as a function of H are calculated by evaluating the relative increase of the electron tunnelling probability with Pmag, a magneto-elastic coupling which produces an increase of conductivity with magnetization. The model is used to adjust experimental results of magnetoresistance in a specific SEC where the polymer is polydimethylsiloxane, PDMS, and fillers are microparticles of magnetite-silver (referred to as Fe3O4[Ag]). Simulations of the expected response for other materials in both superparamagnetic and blocked magnetic states are presented, showing the influence of the Young's modulus of the matrix and filler's saturation magnetization. PMID:27418417

  16. Numerical experiments with model monophyletic and paraphyletic taxa

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J. Jr; Kendrick, D. C.; Sepkoski JJ, J. r. (Principal Investigator)

    1993-01-01

    The problem of how accurately paraphyletic taxa versus monophyletic (i.e., holophyletic) groups (clades) capture underlying species patterns of diversity and extinction is explored with Monte Carlo simulations. Phylogenies are modeled as stochastic trees. Paraphyletic taxa are defined in an arbitrary manner by randomly choosing progenitors and clustering all descendants not belonging to other taxa. These taxa are then examined to determine which are clades, and the remaining paraphyletic groups are dissected to discover monophyletic subgroups. Comparisons of diversity patterns and extinction rates between modeled taxa and lineages indicate that paraphyletic groups can adequately capture lineage information under a variety of conditions of diversification and mass extinction. This suggests that these groups constitute more than mere "taxonomic noise" in this context. But, strictly monophyletic groups perform somewhat better, especially with regard to mass extinctions. However, when low levels of paleontologic sampling are simulated, the veracity of clades deteriorates, especially with respect to diversity, and modeled paraphyletic taxa often capture more information about underlying lineages. Thus, for studies of diversity and taxic evolution in the fossil record, traditional paleontologic genera and families need not be rejected in favor of cladistically-defined taxa.

  17. Comparison Between Keyhole Weld Model and Laser Welding Experiments

    SciTech Connect

    Wood, B C; Palmer, T A; Elmer, J W

    2002-09-23

    A series of laser welds were performed using a high-power diode-pumped continuous-wave Nd:YAG laser welder. In a previous study, the experimental results of those welds were examined, and the effects that changes in incident power and various welding parameters had on weld geometry were investigated. In this report, the fusion zones of the laser welds are compared with those predicted from a laser keyhole weld simulation model for stainless steels (304L and 21-6-9), vanadium, and tantalum. The calculated keyhole depths for the vanadium and 304L stainless steel samples fit the experimental data to within acceptable error, demonstrating the predictive power of numerical simulation for welds in these two materials. Calculations for the tantalum and 21-6-9 stainless steel were a poorer match to the experimental values. Accuracy in materials properties proved extremely important in predicting weld behavior, as minor changes in certain properties had a significant effect on calculated keyhole depth. For each of the materials tested, the correlation between simulated and experimental keyhole depths deviated as the laser power was increased. Using the model as a simulation tool, we conclude that the optical absorptivity of the material is the most influential factor in determining the keyhole depth. Future work will be performed to further investigate these effects and to develop a better match between the model and the experimental results for 21-6-9 stainless steel and tantalum.

  18. Grip Forces During Object Manipulation: Experiment, Mathematical Model & Validation

    PubMed Central

    Slota, Gregory P.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2011-01-01

    When people transport handheld objects, they change the grip force with the object movement. Circular movement patterns were tested within three planes at two different rates (1.0, 1.5 Hz), and two diameters (20, 40 cm). Subjects performed the task reasonably well, matching frequencies and dynamic ranges of accelerations within expectations. A mathematical model was designed to predict the applied normal forces from kinematic data. The model is based on two hypotheses: (a) the grip force changes during movements along complex trajectories can be represented as the sum of effects of two basic commands associated with the parallel and orthogonal manipulation, respectively; (b) different central commands are sent to the thumb and virtual finger (Vf- four fingers combined). The model predicted the actual normal forces with a total variance accounted for of better than 98%. The effects of the two components of acceleration—along the normal axis and the resultant acceleration within the shear plane—on the digit normal forces are additive. PMID:21735245

  19. Grip forces during object manipulation: experiment, mathematical model, and validation.

    PubMed

    Slota, Gregory P; Latash, Mark L; Zatsiorsky, Vladimir M

    2011-08-01

    When people transport handheld objects, they change the grip force with the object movement. Circular movement patterns were tested within three planes at two different rates (1.0, 1.5 Hz) and two diameters (20, 40 cm). Subjects performed the task reasonably well, matching frequencies and dynamic ranges of accelerations within expectations. A mathematical model was designed to predict the applied normal forces from kinematic data. The model is based on two hypotheses: (a) the grip force changes during movements along complex trajectories can be represented as the sum of effects of two basic commands associated with the parallel and orthogonal manipulation, respectively; (b) different central commands are sent to the thumb and virtual finger (Vf-four fingers combined). The model predicted the actual normal forces with a total variance accounted for of better than 98%. The effects of the two components of acceleration-along the normal axis and the resultant acceleration within the shear plane-on the digit normal forces are additive. PMID:21735245
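
    Hypothesis (a) and the reported additivity of the two acceleration components can be restated schematically as below; the gains and exact functional form are illustrative, not values taken from the paper.

    ```latex
    % Schematic additive grip-force model (F_0, c_par, c_perp are illustrative gains):
    F_n(t) \;\approx\; F_0 \;+\; c_{\parallel}\, a_n(t) \;+\; c_{\perp}\, \bigl\lVert \mathbf{a}_s(t) + \mathbf{g}_s \bigr\rVert
    % a_n: acceleration component along the grip-normal axis;
    % a_s + g_s: resultant of movement acceleration and gravity in the shear plane.
    ```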

  20. Using the Bifocal Modeling Framework to Resolve "Discrepant Events" between Physical Experiments and Virtual Models in Biology

    ERIC Educational Resources Information Center

    Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima

    2016-01-01

    In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this…

  1. The human operator in manual preview tracking /an experiment and its modeling via optimal control/

    NASA Technical Reports Server (NTRS)

    Tomizuka, M.; Whitney, D. E.

    1976-01-01

    A manual preview tracking experiment and its results are presented. The preview drastically improves the tracking performance compared to zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with published data on manual control. The model with the adjusted parameters is found to correlate well with the experimental results.

  2. Modified atmosphere packaging for fresh-cut fruits and vegetables

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The latest development in and different aspects of modified atmosphere packaging for fresh-cut fruits and vegetables are reviewed in the book. This book provides all readers, including fresh-cut academic researchers, fresh-cut R&D personnel, and fresh-cut processing engineers, with unique, essential...

  3. Inheritance of fresh-cut fruit quality attributes in Capsicum

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The fresh-cut fruit and vegetable industry has expanded rapidly during the past decade, due to freshness, convenience and the high nutrition that fresh-cut produce offers to consumers. The current report evaluates the inheritance of postharvest attributes that contribute to pepper fresh-cut product...

  4. 9 CFR 319.141 - Fresh pork sausage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 9, Animals and Animal Products, § 319.141 (2014 edition): “Fresh Pork Sausage” is sausage prepared with fresh pork or frozen pork or both, but not including pork byproducts, and may contain Mechanically Separated (Species)...

  5. 9 CFR 319.141 - Fresh pork sausage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Title 9, Animals and Animal Products, § 319.141 (2013 edition): “Fresh Pork Sausage” is sausage prepared with fresh pork or frozen pork or both, but not including pork byproducts, and may contain Mechanically Separated (Species)...

  6. 9 CFR 319.141 - Fresh pork sausage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 9, Animals and Animal Products, § 319.141 (2011 edition): “Fresh Pork Sausage” is sausage prepared with fresh pork or frozen pork or both, but not including pork byproducts, and may contain Mechanically Separated (Species)...

  7. MEMO: multi-experiment mixture model analysis of censored data

    PubMed Central

    Geissen, Eva-Maria; Hasenauer, Jan; Heinrich, Stephanie; Hauf, Silke; Theis, Fabian J.; Radde, Nicole E.

    2016-01-01

    Motivation: The statistical analysis of single-cell data is a challenge in cell biological studies. Tailored statistical models and computational methods are required to resolve the subpopulation structure, i.e. to correctly identify and characterize subpopulations. These approaches also support the unraveling of sources of cell-to-cell variability. Finite mixture models have shown promise, but the available approaches are ill suited to the simultaneous consideration of data from multiple experimental conditions and to censored data. The prevalence and relevance of single-cell data and the lack of suitable computational analytics make automated methods that can deal with the requirements posed by these data a necessity. Results: We present MEMO, a flexible mixture modeling framework that enables the simultaneous, automated analysis of censored and uncensored data acquired under multiple experimental conditions. MEMO is based on maximum-likelihood inference and allows for testing competing hypotheses. MEMO can be applied to a variety of different single-cell data types. We demonstrate the advantages of MEMO by analyzing right- and interval-censored single-cell microscopy data. Our results show that an examination of censoring and the simultaneous consideration of different experimental conditions are necessary to reveal biologically meaningful subpopulation structures. MEMO allows for a stringent analysis of single-cell data and enables researchers to avoid misinterpretation of censored data. Therefore, MEMO is a valuable asset for all fields that infer the characteristics of populations by looking at single individuals, such as cell biology and medicine. Availability and Implementation: MEMO is implemented in MATLAB and freely available via GitHub (https://github.com/MEMO-toolbox/MEMO). Contacts: eva-maria.geissen@ist.uni-stuttgart.de or nicole.radde@ist.uni-stuttgart.de. Supplementary information: Supplementary data are available at Bioinformatics online.
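
    To make the censoring idea concrete, here is a minimal sketch (in Python, not the MATLAB MEMO toolbox) of a two-component Gaussian mixture likelihood in which right-censored observations contribute their upper-tail probability instead of their density; the data, starting values, and optimizer settings are invented for illustration.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      # Toy data: event times; values beyond the detection limit are right-censored.
      rng = np.random.default_rng(1)
      times = np.concatenate([rng.normal(20, 3, 150), rng.normal(45, 5, 100)])
      limit = 50.0
      censored = times > limit          # True where only "time > limit" is known
      obs = np.where(censored, limit, times)

      def neg_log_lik(params):
          """Two-component Gaussian mixture; censored points use the survival function."""
          w = 1.0 / (1.0 + np.exp(-params[0]))           # mixture weight in (0, 1)
          mu1, mu2 = params[1], params[2]
          s1, s2 = np.exp(params[3]), np.exp(params[4])  # positive std deviations
          # Density for uncensored points, upper-tail probability for censored ones
          pdf = w * norm.pdf(obs, mu1, s1) + (1 - w) * norm.pdf(obs, mu2, s2)
          sf = w * norm.sf(obs, mu1, s1) + (1 - w) * norm.sf(obs, mu2, s2)
          contrib = np.where(censored, sf, pdf)
          return -np.sum(np.log(contrib + 1e-300))

      start = np.array([0.0, 15.0, 40.0, np.log(5.0), np.log(5.0)])
      fit = minimize(neg_log_lik, start, method="Nelder-Mead",
                     options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
      print(fit.x)

    Ignoring the censoring (treating every capped value as an exact observation) would bias the second component toward the detection limit, which is the kind of misinterpretation the abstract warns against.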

  8. Holographic measurements of fresh dry bone elasticity

    NASA Astrophysics Data System (ADS)

    Silvennoinen, Raimo; Nygren, Kaarlo; Karna, Markku; Karna, Kari

    1992-08-01

    To compare the elasticity of bones covered with soft tissue and the elasticity of defleshed and dried bones, we used sampling screws to make the surface movements of the bones visible through the soft tissue. We compared fresh and dry European moose skulls before and after skinning. External forces were focused on the skull bones through the pedicles. A high correlation in fringe orientation was observed in the case of thick bone structures with rigid interdigitated sutures. We also compared compression dynamics of fresh and dry moose antler cubes.

  9. Obliquity Experiments with a Mars General Circulation Model

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Schaeffer, J.; Cuzzi, Jeffrey N. (Technical Monitor)

    1995-01-01

    We have simulated the seasonal variation of the general circulation on Mars for obliquities of 0° and 60°. These obliquities represent the minimum and maximum values the planet has experienced during the past 10^7 years (e.g., Laskar and Robutel, 1993, Nature, 361, 608-614). The model we use is the NASA/Ames Mars General Circulation Model (Pollack et al., 1993, J. Geophys. Res. 98, 3149-3181). We vary only the obliquity; all other model parameters are as in Pollack et al. At high obliquity, the model shows dramatic seasonal variations in the polar caps and in the structure and intensity of the circulation. At the solstices the winter cap extends to the equator. Thus, surface temperatures throughout the entire winter hemisphere are fixed at the CO2 frost point. During summer surface temperatures at the poles reach 269 K in the north and 295 K in the south. The most notable changes to the circulation at solstice compared to our standard runs are a general weakening of the winter westerlies, a Hadley cell of greater latitudinal extent, and the development of very strong, possibly unstable, low-level jets in midlatitudes of the summer hemisphere. Surface stresses associated with these jets are sufficient to raise dust continuously. Thus, dust storms should be frequent features of the high obliquity climate. This result is independent of any desorbed regolith CO2 which would raise mean surface pressures. At zero obliquity the structure of the circulation resembles that of present day equinox conditions modulated by the varying insolation associated with orbital eccentricity. Notable features include equatorial superrotation, asymmetric Hadley cells, and stronger poleward heat fluxes in the northern hemisphere. Since the poles do not receive solar energy at any time of year, permanent caps form which extend to about 70° in each hemisphere. However, the north permanent cap is growing at a rate 40% faster than the south cap. This is due to the differences in

  10. Shear mechanical properties of the spleen: experiment and analytical modelling.

    PubMed

    Nicolle, S; Noguer, L; Palierne, J-F

    2012-05-01

    This paper aims at providing the first shear mechanical properties of spleen tissue. Rheometric tests on porcine splenic tissues were performed in the linear and nonlinear regimes, revealing a weak frequency dependence of the dynamic moduli in the linear regime and a distinct strain-hardening effect in the nonlinear regime. These behaviours are typical of soft tissues such as kidney and liver, although the strain-hardening is less pronounced for the spleen. An analytical model based on power laws is then proposed to describe the general shear viscoelastic behaviour of the spleen. PMID:22498291
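
    A generic way to write such power-law viscoelastic behaviour (shown here only as a standard fractional-element form, not necessarily the exact expression used by the authors) is

        G^*(\omega) = G'(\omega) + i\,G''(\omega) = G_0\,(i\omega\tau)^{\alpha}, \qquad 0 < \alpha < 1,

    so that G'(\omega) = G_0(\omega\tau)^{\alpha}\cos(\pi\alpha/2) and G''(\omega) = G_0(\omega\tau)^{\alpha}\sin(\pi\alpha/2). Both moduli rise as \omega^{\alpha}, and a small exponent \alpha reproduces the weak frequency dependence reported for spleen and other soft organs.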

  11. Numerical modeling of oxygen exclusion experiments of anaerobic bioventing

    NASA Astrophysics Data System (ADS)

    Mihopoulos, Philip G.; Suidan, Makram T.; Sayles, Gregory D.; Kaskassian, Sebastien

    2002-10-01

    A numerical and experimental study of transport phenomena underlying anaerobic bioventing (ABV) is presented. Understanding oxygen exclusion patterns in vadose zone environments is important in designing an ABV process for bioremediation of soil contaminated with chlorinated solvents. In particular, the establishment of an anaerobic zone of influence by nitrogen injection in the vadose zone is investigated. Oxygen exclusion experiments are performed in a pilot scale flow cell (2×1.1×0.1 m) using different venting flows and two different outflow boundary conditions (open and partially covered). Injection gas velocities are varied from 0.25×10 -3 to 1.0×10 -3 cm/s and are correlated with the ABV radius of influence. Numerical simulations are used to predict the collected experimental data. In general, reasonable agreement is found between observed and predicted oxygen concentrations. Use of impervious covers can significantly reduce the volume of forcing gas used, where an increase in oxygen exclusion efficiency is consistent with a decrease in the outflow area above the injection well.

  12. Numerical modeling of oxygen exclusion experiments of anaerobic bioventing.

    PubMed

    Mihopoulos, Philip G; Suidan, Makram T; Sayles, Gregory D; Kaskassian, Sebastien

    2002-10-01

    A numerical and experimental study of transport phenomena underlying anaerobic bioventing (ABV) is presented. Understanding oxygen exclusion patterns in vadose zone environments is important in designing an ABV process for bioremediation of soil contaminated with chlorinated solvents. In particular, the establishment of an anaerobic zone of influence by nitrogen injection in the vadose zone is investigated. Oxygen exclusion experiments are performed in a pilot scale flow cell (2 x 1.1 x 0.1 m) using different venting flows and two different outflow boundary conditions (open and partially covered). Injection gas velocities are varied from 0.25 x 10(-3) to 1.0 x 10(-3) cm/s and are correlated with the ABV radius of influence. Numerical simulations are used to predict the collected experimental data. In general, reasonable agreement is found between observed and predicted oxygen concentrations. Use of impervious covers can significantly reduce the volume of forcing gas used, where an increase in oxygen exclusion efficiency is consistent with a decrease in the outflow area above the injection well.

  13. Modeling ultrafast shadowgraphy in laser-plasma interaction experiments

    NASA Astrophysics Data System (ADS)

    Siminos, E.; Skupin, S.; Sävert, A.; Cole, J. M.; Mangles, S. P. D.; Kaluza, M. C.

    2016-06-01

    Ultrafast shadowgraphy is a new experimental technique that uses few-cycle laser pulses to image density gradients in a rapidly evolving plasma. It enables structures that move at speeds close to the speed of light, such as laser driven wakes, to be visualized. Here we study the process of shadowgraphic image formation during the propagation of a few cycle probe pulse transversely through a laser-driven wake using three-dimensional particle-in-cell simulations. In order to construct synthetic shadowgrams a near-field snapshot of the ultrashort probe pulse is analyzed by means of Fourier optics, taking into account the effect of a typical imaging setup. By comparing synthetic and experimental shadowgrams we show that the generation of synthetic data is crucial for the correct interpretation of experiments. Moreover, we study the dependence of synthetic shadowgrams on various parameters such as the imaging system aperture, the position of the object plane and the probe pulse delay, duration and wavelength. Finally, we show that time-dependent information from the interaction can be recovered from a single shot by using a broadband, chirped probe pulse and subsequent spectral filtering.
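
    The image-formation step described above (near-field snapshot, Fourier-plane filtering by the imaging aperture, back-transform, intensity on the detector) can be sketched in a few lines; the probe parameters, aperture value, and phase object below are illustrative assumptions, not the values used in the paper.

      import numpy as np

      def synthetic_shadowgram(field, dx, wavelength, num_aperture):
          """Schematic imaging step: low-pass the near-field snapshot of the probe
          with a circular aperture in the Fourier plane and return the intensity.
          'field' is a complex 2D array of the probe E-field behind the plasma."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies [1/m]
          fy = np.fft.fftfreq(ny, d=dx)
          FX, FY = np.meshgrid(fx, fy)
          cutoff = num_aperture / wavelength            # aperture-limited frequency
          pupil = (FX**2 + FY**2) <= cutoff**2          # circular pupil function
          spectrum = np.fft.fft2(field) * pupil         # filter in the Fourier plane
          image = np.fft.ifft2(spectrum)
          return np.abs(image)**2                       # detector records intensity

      # Toy example: plane-wave probe with a weak phase object mimicking a wake
      x = np.linspace(-40e-6, 40e-6, 256)
      X, Y = np.meshgrid(x, x)
      phase = 0.5 * np.exp(-((X - 5e-6)**2 + Y**2) / (8e-6)**2)
      probe = np.exp(1j * phase)
      shadow = synthetic_shadowgram(probe, dx=x[1] - x[0],
                                    wavelength=0.8e-6, num_aperture=0.2)

    In the actual study the input field comes from a particle-in-cell simulation of the probe crossing the wake, and parameters such as the object-plane position and probe chirp are varied as described above.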

  14. FRETsg: Biomolecular structure model building from multiple FRET experiments

    NASA Astrophysics Data System (ADS)

    Schröder, G. F.; Grubmüller, H.

    2004-04-01

    Fluorescence energy transfer (FRET) experiments of site-specifically labelled proteins allow one to determine distances between residues at the single molecule level, which provide information on the three-dimensional structural dynamics of the biomolecule. To systematically extract this information from the experimental data, we describe a program that generates an ensemble of configurations of residues in space that agree with the experimental distances between these positions. Furthermore, a fluctuation analysis allows one to determine the structural accuracy from the experimental error. Program summary: Title of program: FRETsg Catalogue identifier: ADTU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTU Computer: SGI Octane, Pentium II/III, Athlon MP, DEC Alpha Operating system: Unix, Linux, Windows98/NT/XP Programming language used: ANSI C No. of bits in a word: 32 or 64 No. of processors used: 1 No. of bytes in distributed program, including test data, etc.: 11407 No. of lines in distributed program, including test data, etc.: 1647 Distribution format: gzipped tar file Nature of the physical problem: Given an arbitrary number of distance distributions between an arbitrary number of points in three-dimensional space, find all configurations (set of coordinates) that obey the given distances. Method of solution: Each distance is described by a harmonic potential. Starting from random initial configurations, their total energy is minimized by steepest descent. Fluctuations of positions are chosen to generate distance distribution widths that best fit the given values.
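
    The "Method of solution" above (harmonic distance potentials minimized by steepest descent from random starting configurations) is easy to mimic. The following sketch is an independent, simplified re-implementation of that idea, not the FRETsg code; the step size, force constant, and target distances are arbitrary.

      import numpy as np

      def fit_configuration(targets, n_points, k=1.0, step=0.01, iters=5000, seed=0):
          """Steepest descent on a sum of harmonic distance restraints.
          'targets' maps index pairs (i, j) to desired distances."""
          rng = np.random.default_rng(seed)
          pos = rng.uniform(-1.0, 1.0, size=(n_points, 3))   # random start
          for _ in range(iters):
              grad = np.zeros_like(pos)
              for (i, j), d0 in targets.items():
                  diff = pos[i] - pos[j]
                  d = np.linalg.norm(diff) + 1e-12
                  # dE/dpos for E = 0.5 * k * (d - d0)^2
                  g = k * (d - d0) * diff / d
                  grad[i] += g
                  grad[j] -= g
              pos -= step * grad                              # steepest-descent update
          return pos

      # Three labelled positions with pairwise target distances (arbitrary values)
      targets = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 4.0}
      coords = fit_configuration(targets, n_points=3)
      for (i, j), d0 in targets.items():
          print(i, j, d0, round(np.linalg.norm(coords[i] - coords[j]), 3))

    Running the minimization from many random starts, as the program summary describes, yields the ensemble of configurations compatible with the measured distances.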

  15. Numerical modeling of oxygen exclusion experiments of anaerobic bioventing.

    PubMed

    Mihopoulos, Philip G; Suidan, Makram T; Sayles, Gregory D; Kaskassian, Sebastien

    2002-10-01

    A numerical and experimental study of transport phenomena underlying anaerobic bioventing (ABV) is presented. Understanding oxygen exclusion patterns in vadose zone environments is important in designing an ABV process for bioremediation of soil contaminated with chlorinated solvents. In particular, the establishment of an anaerobic zone of influence by nitrogen injection in the vadose zone is investigated. Oxygen exclusion experiments are performed in a pilot scale flow cell (2 x 1.1 x 0.1 m) using different venting flows and two different outflow boundary conditions (open and partially covered). Injection gas velocities are varied from 0.25 x 10(-3) to 1.0 x 10(-3) cm/s and are correlated with the ABV radius of influence. Numerical simulations are used to predict the collected experimental data. In general, reasonable agreement is found between observed and predicted oxygen concentrations. Use of impervious covers can significantly reduce the volume of forcing gas used, where an increase in oxygen exclusion efficiency is consistent with a decrease in the outflow area above the injection well. PMID:12400833

  16. Experiments and Modeling of High Altitude Chemical Agent Release

    SciTech Connect

    Nakafuji, G.; Greenman, R.; Theofanous, T.

    2002-07-08

    Using ASCA data, we find, contrary to other researchers using ROSAT data, that the X-ray spectra of the VY Scl stars TT Ari and KR Aur are poorly fit by an absorbed blackbody model but are well fit by an absorbed thermal plasma model. The different conclusions about the nature of the X-ray spectrum of KR Aur may be due to differences in the accretion rate, since this star was in a high optical state during the ROSAT observation, but in an intermediate optical state during the ASCA observation. TT Ari, on the other hand, was in a high optical state during both observations, so it directly contradicts the hypothesis that the X-ray spectra of VY Scl stars in their high optical states are blackbodies. Instead, based on theoretical expectations and the ASCA, Chandra, and XMM spectra of other nonmagnetic cataclysmic variables, we believe that the X-ray spectra of VY Scl stars in their low and high optical states are due to hot thermal plasma in the boundary layer between the accretion disk and the surface of the white dwarf, and appeal to the acquisition of Chandra and XMM grating spectra to test this prediction.

  17. Model experiments for the Czochralski crystal growth technique

    NASA Astrophysics Data System (ADS)

    Cramer, A.; Pal, J.; Gerbeth, G.

    2013-03-01

    A lot of the physical and the numerical modeling of Czochralski crystal growth is done on the generic Rayleigh-Bénard system. To better approximate the conditions in a Czochralski puller, the influences of a rounded crucible bottom, deviations of the thermal boundary conditions from the generic case, crucible and/or crystal rotation, and the influence of magnetic fields are often studied separately. The present contribution reviews some of these topics while concentrating on studies of the flow and related temperature fluctuations in systems where a rotating magnetic field (RMF) was applied. The three-dimensional convective patterns and the resulting temperature fluctuations will be discussed both for the purely buoyant case and for the application of an RMF. It is shown that a system between a Rayleigh-Bénard and a more realistic configuration, which is still cylindrical but whose surface is partially covered by a crystal model, behaves much the same as a Rayleigh-Bénard system. An RMF can be used to damp the temperature fluctuations. Secondly, a more Czochralski-like system is examined. It turns out that the RMF does not provide the desired damping of the temperature fluctuations in the parameter range considered.

  18. Assembly and mechanosensory function of focal adhesions: experiments and models.

    PubMed

    Bershadsky, Alexander D; Ballestrem, Christoph; Carramusa, Letizia; Zilberman, Yuliya; Gilquin, Benoit; Khochbin, Saadi; Alexandrova, Antonina Y; Verkhovsky, Alexander B; Shemesh, Tom; Kozlov, Michael M

    2006-04-01

    Initial integrin-mediated cell-matrix adhesions (focal complexes) appear underneath the lamellipodia, in the regions of the "fast" centripetal flow driven by actin polymerization. Once formed, these adhesions convert the flow behind them into a "slow", myosin II-driven mode. Some focal complexes then turn into elongated focal adhesions (FAs) associated with contractile actomyosin bundles (stress fibers). Myosin II inhibition does not suppress formation of focal complexes but blocks their conversion into mature FAs and further FA growth. Application of external pulling force promotes FA growth even under conditions when myosin II activity is blocked. Thus, individual FAs behave as mechanosensors responding to the application of force by directional assembly. We proposed a thermodynamic model for the mechanosensitivity of FAs, taking into account that an elastic molecular aggregate subject to pulling forces tends to grow in the direction of force application by incorporating additional subunits. This simple model can explain a variety of processes typical of FA behavior. Assembly of FAs is triggered by the small G-protein Rho via activation of two major targets, Rho-associated kinase (ROCK) and the formin homology protein, Dia1. ROCK controls creation of myosin II-driven forces, while Dia1 is involved in the response of FAs to these forces. Expression of the active form of Dia1, allows the external force-induced assembly of mature FAs, even in conditions when Rho is inhibited. Conversely, downregulation of Dia1 by siRNA prevents FA maturation even if Rho is activated. Dia1 and other formins cap barbed (fast growing) ends of actin filaments, allowing insertion of the new actin monomers. We suggested a novel mechanism of such "leaky" capping based on an assumption of elasticity of the formin/barbed end complex. Our model predicts that formin-mediated actin polymerization should be greatly enhanced by application of external pulling force. Thus, the formin-actin complex

  19. Assembly and mechanosensory function of focal adhesions: experiments and models.

    PubMed

    Bershadsky, Alexander D; Ballestrem, Christoph; Carramusa, Letizia; Zilberman, Yuliya; Gilquin, Benoit; Khochbin, Saadi; Alexandrova, Antonina Y; Verkhovsky, Alexander B; Shemesh, Tom; Kozlov, Michael M

    2006-04-01

    Initial integrin-mediated cell-matrix adhesions (focal complexes) appear underneath the lamellipodia, in the regions of the "fast" centripetal flow driven by actin polymerization. Once formed, these adhesions convert the flow behind them into a "slow", myosin II-driven mode. Some focal complexes then turn into elongated focal adhesions (FAs) associated with contractile actomyosin bundles (stress fibers). Myosin II inhibition does not suppress formation of focal complexes but blocks their conversion into mature FAs and further FA growth. Application of external pulling force promotes FA growth even under conditions when myosin II activity is blocked. Thus, individual FAs behave as mechanosensors responding to the application of force by directional assembly. We proposed a thermodynamic model for the mechanosensitivity of FAs, taking into account that an elastic molecular aggregate subject to pulling forces tends to grow in the direction of force application by incorporating additional subunits. This simple model can explain a variety of processes typical of FA behavior. Assembly of FAs is triggered by the small G-protein Rho via activation of two major targets, Rho-associated kinase (ROCK) and the formin homology protein, Dia1. ROCK controls creation of myosin II-driven forces, while Dia1 is involved in the response of FAs to these forces. Expression of the active form of Dia1, allows the external force-induced assembly of mature FAs, even in conditions when Rho is inhibited. Conversely, downregulation of Dia1 by siRNA prevents FA maturation even if Rho is activated. Dia1 and other formins cap barbed (fast growing) ends of actin filaments, allowing insertion of the new actin monomers. We suggested a novel mechanism of such "leaky" capping based on an assumption of elasticity of the formin/barbed end complex. Our model predicts that formin-mediated actin polymerization should be greatly enhanced by application of external pulling force. Thus, the formin-actin complex

  20. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules[2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete the solution is implemented as production ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing for the DevOps team to focus on more complex and mission critical tasks. In addition to leveraging open source software the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Scientists prototyping their work has proven to be very efficient

  1. Performance of Nanotube-Based Ceramic Composites: Modeling and Experiment

    NASA Technical Reports Server (NTRS)

    Curtin, W. A.; Sheldon, B. W.; Xu, J.

    2004-01-01

    The excellent mechanical properties of carbon nanotubes are driving research into the creation of new strong, tough nanocomposite systems. In this program, our initial work presented the first evidence of toughening mechanisms operating in carbon-nanotube-reinforced ceramic composites using a highly ordered array of parallel multiwall carbon nanotubes (CNTs) in an alumina matrix. Nanoindentation introduced controlled cracks and the damage was examined by SEM. These nanocomposites exhibit the three hallmarks of toughening in micron-scale fiber composites: crack deflection at the CNT/matrix interface; crack bridging by CNTs; and CNT pullout on the fracture surfaces. Furthermore, for certain geometries a new mechanism of nanotube collapse in shear bands was found, suggesting that these materials can have multiaxial damage tolerance. The quantitative indentation data and computational models were used to determine the multiwall CNT axial Young's modulus as 200-570 GPa, depending on the nanotube geometry and quality.

  2. [An Experience Promoting the Interdisciplinary Care Model for Dengue Fever].

    PubMed

    Kuo, Wen-Fu; Ke, Ya-Ting

    2016-08-01

    Emergency departments represent the first line in facing major healthcare events. During major epidemic outbreaks, patients crowding into the emergency departments increase the wait time for patients and overload the staff on duty. The dengue fever outbreak in southern Taiwan during the summer of 2015 presented a huge management challenge for physicians and nurses in local hospitals. We responded to this challenge by integrating resources from different hospital departments. This strategy successfully increased group cohesiveness among the medical team, ensuring that they could not only ultimately cope with the outbreak together but also effectively provide patient-centered care. This interdisciplinary care model may serve as a reference for medical professionals for the management of future epidemics and similar events. PMID:27492305

  3. Model-based localization for a shallow ocean experiment

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1995-07-19

    In this paper a modern approach was developed to solve the passive localization problem in ocean acoustics using the state-space formulation. It is shown that the inherent structure of the resulting processor consists of a parameter estimator coupled to a nonlinear optimization scheme. The parameter estimator is designed using an acoustic propagation model in developing the modern identifier required for localization. The detection and localization of an acoustic source has long been the motivation of early sonar systems. With the advent of quieter and quieter submarines due to new manufacturing technologies and the proliferation of diesel-powered vessels, the need for more sophisticated processing techniques has been apparent for quite some time.

  4. ALEGRA modeling of gas puff Z-pinch experiments at the ZR facility.

    SciTech Connect

    Coleman, P. L.; Flicker, Dawn G.; Coverdale, Christine Anne; Kueny, Christopher Shane; Krishnan, Mahadevan

    2010-11-01

    Gas puff z-pinch experiments have been proposed for the refurbished Z (ZR) facility for CY2011. Previous gas puff experiments [Coverdale et al., Phys. Plasmas 14, 056309, 2007] on pre-refurbishment Z established a world record for laboratory fusion neutron yield. New experiments would establish ZR gas puff capability for X-ray and neutron production and could surpass previous yields. We present validation of ALEGRA simulations against previous Z experiments including X-ray and neutron yield, modeling of gas puff implosion dynamics for new gas puff nozzle designs, and predictions of X-ray and neutron yields for the proposed gas puff experiments.

  5. INNOVATIVE FRESH WATER PRODUCTION PROCESS FOR FOSSIL FUEL PLANTS

    SciTech Connect

    James F. Klausner; Renwei Mei; Yi Li; Mohamed Darwish; Diego Acevedo; Jessica Knight

    2003-09-01

    This report describes the annual progress made in the development and analysis of a Diffusion Driven Desalination (DDD) system, which is powered by the waste heat from low pressure condensing steam in power plants. The desalination is driven by water vapor saturating dry air flowing through a diffusion tower. Liquid water is condensed out of the air/vapor mixture in a direct contact condenser. A thermodynamic analysis demonstrates that the DDD process can yield a fresh water production efficiency of 4.5% based on a feed water inlet temperature of only 50 C. An example is discussed in which the DDD process utilizes waste heat from a 100 MW steam power plant to produce 1.51 million gallons of fresh water per day. The main focus of the initial development of the desalination process has been on the diffusion tower. A detailed mathematical model for the diffusion tower has been described, and its numerical implementation has been used to characterize its performance and provide guidance for design. The analysis has been used to design a laboratory scale diffusion tower, which has been thoroughly instrumented to allow detailed measurements of heat and mass transfer coefficient, as well as fresh water production efficiency. The experimental facility has been described in detail.

  6. Relativistic Models for the BepiColombo Radioscience Experiment

    NASA Astrophysics Data System (ADS)

    Milani, Andrea

    2009-05-01

    For the dynamics we start from the Lagrangian Post-Newtonian formulation, using a relativistic equation for the solar system barycenter to avoid rank deficiency. For the determination of the PN parameters, the difficulty, already reported in (Milani et al., Phys. Rev. D 2002), of disentangling the effects of beta from those of the Sun's oblateness is confirmed. We have found a consistent formulation for the preferred frame effects, although a barycenter is not well defined. For the identification of SEP violations we use a formulation containing both direct and indirect effects (through the modified position of the Sun in a barycentric frame). We report on our methods for validation tests and algorithm certification. The light-time implicit equations are solved with iterative loops; the Shapiro effect is modeled to PN order 1 but with an order 2 correction as recently computed with different methods by several authors, which is compatible with Moyer's. We have also tested the 1.5 PN order corrections due to the motion of the Sun and found they are not relevant at the required level of accuracy. The integrated range-rate observable has been smoothed by an averaging technique, removing the well-known numerical instability problems. To model the orbit of the probe, we use a mercurycentric reference frame with its own "Mercury Dynamical Time": this is the largest and the only relativistic correction required, taking into account the major uncertainties introduced by non-gravitational perturbations (mostly as uncertainty in accelerometer calibrations). A delicate issue is the compatibility of our solution with the ephemerides for the other planets and for the Moon, which presumably cannot be improved by the BepiColombo data alone. On the other hand, we plan to later export the BepiColombo measurements, as normal points, to contribute with their unprecedented accuracy to the global improvement of planetary ephemerides.
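
    For reference, the order-1 PN Shapiro delay mentioned above is usually written as (standard textbook form, not necessarily the exact parametrisation used in the BepiColombo software)

        \Delta t_{\mathrm{Shapiro}} = (1+\gamma)\,\frac{GM_{\odot}}{c^{3}}\,\ln\frac{r_{e}+r_{p}+r_{ep}}{r_{e}+r_{p}-r_{ep}},

    where r_e and r_p are the heliocentric distances of the ground station and the probe, r_{ep} is their mutual distance, and \gamma is one of the PPN parameters to be estimated.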

  7. A novel nutrition medicine education model: the Boston University experience.

    PubMed

    Lenders, Carine; Gorman, Kathy; Milch, Hannah; Decker, Ashley; Harvey, Nanette; Stanfield, Lorraine; Lim-Miller, Aimee; Salge-Blake, Joan; Judd, Laura; Levine, Sharon

    2013-01-01

    Most deaths in the United States are preventable and related to nutrition. Although physicians are expected to counsel their patients about nutrition-related health conditions, a recent survey reported minimal improvements in nutrition medicine education in US medical schools in the past decade. Starting in 2006, we have developed an educational plan using a novel student-centered model of nutrition medicine education at Boston University School of Medicine that focuses on medical student-mentored extracurricular activities to develop, evaluate, and sustain nutrition medicine education. The medical school uses a team-based approach focusing on case-based learning in the classroom, practice-based learning in the clinical setting, extracurricular activities, and a virtual curriculum to improve medical students' knowledge, attitudes, and practice skills across their 4-y period of training. We have been using objectives from the NIH National Academy Awards guide and tools from the Association of American Medical Colleges to detect new areas of nutrition medicine taught at the medical school. Although we were only able to identify 20.5 h of teaching in the preclerkship years, we observed that most preclerkship nutrition medicine objectives were covered during the course of the 4-y teaching period, and extracurricular activities provided new opportunities for student leadership and partnership with other health professionals. These observations are very encouraging as new assessment tools are being developed. Future plans include further evaluation and dissemination of lessons learned using this model to improve public health wellness with support from academia, government, industry, and foundations.

  8. A novel nutrition medicine education model: the Boston University experience.

    PubMed

    Lenders, Carine; Gorman, Kathy; Milch, Hannah; Decker, Ashley; Harvey, Nanette; Stanfield, Lorraine; Lim-Miller, Aimee; Salge-Blake, Joan; Judd, Laura; Levine, Sharon

    2013-01-01

    Most deaths in the United States are preventable and related to nutrition. Although physicians are expected to counsel their patients about nutrition-related health conditions, a recent survey reported minimal improvements in nutrition medicine education in US medical schools in the past decade. Starting in 2006, we have developed an educational plan using a novel student-centered model of nutrition medicine education at Boston University School of Medicine that focuses on medical student-mentored extracurricular activities to develop, evaluate, and sustain nutrition medicine education. The medical school uses a team-based approach focusing on case-based learning in the classroom, practice-based learning in the clinical setting, extracurricular activities, and a virtual curriculum to improve medical students' knowledge, attitudes, and practice skills across their 4-y period of training. We have been using objectives from the NIH National Academy Awards guide and tools from the Association of American Medical Colleges to detect new areas of nutrition medicine taught at the medical school. Although we were only able to identify 20.5 h of teaching in the preclerkship years, we observed that most preclerkship nutrition medicine objectives were covered during the course of the 4-y teaching period, and extracurricular activities provided new opportunities for student leadership and partnership with other health professionals. These observations are very encouraging as new assessment tools are being developed. Future plans include further evaluation and dissemination of lessons learned using this model to improve public health wellness with support from academia, government, industry, and foundations. PMID:23319117

  9. Two phase discharge of liquefied gases through pipes. Field experiments with ammonia and theoretical model

    NASA Astrophysics Data System (ADS)

    Nyren, K.; Winter, S.

    1984-01-01

    Field experiments with full-scale releases of pressurized ammonia through siphon pipes from a storage tank were performed. It is found that the flow is a damped critical flow causing a violent turbulent spray jet. The pronounced atomization of the liquid and the quick air entrainment prevent rainout, and no traces of land spills are observed. A theoretical model is also presented. Comparisons with the field experiments and laboratory experiments show that the model gives very good predictions of the mass flow rate and the jet-determining parameters. The model is also useful for long pipe systems, as it takes into account friction and other resistances.

  10. The influence of bone density and anisotropy in finite element models of distal radius fracture osteosynthesis: Evaluations and comparison to experiments.

    PubMed

    Synek, A; Chevalier, Y; Baumbach, S F; Pahr, D H

    2015-11-26

    Continuum-level finite element (FE) models can be used to analyze and improve osteosynthesis procedures for distal radius fractures (DRF) from a biomechanical point of view. However, previous models oversimplified the bone material and lacked thorough experimental validation. The goal of this study was to assess the influence of local bone density and anisotropy in FE models of DRF osteosynthesis for predictions of axial stiffness, implant plate stresses, and screw loads. Experiments and FE analysis were conducted in 25 fresh frozen cadaveric radii with DRFs treated by volar locking plate osteosynthesis. Specimen specific geometries were captured using clinical quantitative CT (QCT) scans of the prepared samples. Local bone material properties were computed based on high resolution CT (HR-pQCT) scans of the intact radii. The axial stiffness and individual screw loads were evaluated in FE models, with (1) orthotropic inhomogeneous (OrthoInhom), (2) isotropic inhomogeneous (IsoInhom), and (3) isotropic homogeneous (IsoHom) bone material and compared to the experimental axial stiffness and screw-plate interface failures. FE simulated and experimental axial stiffness correlated significantly (p<0.0001) for all three model types. The coefficient of determination was similar for OrthoInhom (R^2=0.807) and IsoInhom (R^2=0.816) models but considerably lower for IsoHom models (R^2=0.500). The peak screw loads were in qualitative agreement with experimental screw-plate interface failure. Individual loads and implant plate stresses of IsoHom models differed significantly (p<0.05) from OrthoInhom and IsoInhom models. In conclusion, including local bone density in FE models of DRF osteosynthesis is essential whereas local bone anisotropy hardly affects the models' predictive abilities.
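
    The "isotropic inhomogeneous" material mapping described above is typically built from a density-to-modulus power law evaluated element by element. The sketch below uses generic literature-style constants as placeholders; it is not the mapping or the anisotropic fabric model used in this study.

      import numpy as np

      def density_to_modulus(rho_bmd, e_coeff=6850.0, exponent=1.49, e_min=0.01):
          """Map bone density (g/cm^3) to an isotropic Young's modulus (MPa) per
          element with a power law E = a * rho^b. Coefficient and exponent are
          illustrative placeholder values, not those calibrated in the study."""
          rho = np.clip(np.asarray(rho_bmd, dtype=float), 0.0, None)
          return np.maximum(e_coeff * rho**exponent, e_min)

      # Example: per-element densities sampled from a CT-based field (illustrative)
      rho_elements = np.array([0.05, 0.20, 0.45, 0.80, 1.10])
      print(density_to_modulus(rho_elements))   # MPa, one value per finite element

    An orthotropic inhomogeneous model would additionally assign direction-dependent stiffness from a local fabric measure, which the study found to add little predictive power over the density mapping alone.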

  11. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  12. SSC dipole long magnet model cryostat design and initial production experience

    SciTech Connect

    Niemann, R.C.; Carson, J.A.; Engler, N.H.; Gonczy, J.D.; Nicol, T.H.

    1986-06-01

    The SSC dipole magnet development program includes the design and construction of full length magnet models for heat leak and magnetic measurements and for the evaluation of the performance of strings of magnets. The design of the model magnet cryostat is presented and the production experiences for the initial long magnet model, a heat leak measurement device, are related.

  13. Photochemical formation of oligopeptides in the space processes modeling experiments

    NASA Astrophysics Data System (ADS)

    Gontareva, N.; Kuzicheva, E.

    The origin of life may be considered a sequence of events, each of which adds to molecular complexity and order. An abundance of previously synthesized chemical building blocks is a prerequisite for the beginning of the next evolutionary step. That is why the availability of chemical building blocks for peptides and nucleic acids, their formation, and their resistance to destructive impacts should not be underestimated. In a number of our previous publications we reported the formation of oligopeptides and nucleotides promoted by cosmic energy sources in terrestrial orbit. The next important question was whether mineral beds could have protected these biologically important substances from the destructive action of space radiation. Minerals of extraterrestrial origin, such as lunar soil and the Allende and Murchison meteorites, were tested with respect to their protective properties. Short-wavelength UV radiation (the most abundant and powerful source of energy during the period of early chemical evolution) was taken as the energy supply triggering the solid-phase synthesis process. Solid films consisting of two amino acids (Gly + Phe) were irradiated with UV 254 nm in the course of the experiment, either free or associated with minerals. As a result of the exposure, both dipeptides and tripeptides of certain amino acids were identified. The presence of inorganic beds during irradiation increased the yield of oligopeptides. For lunar soil, the yield increased by a factor of 3.8, while for meteorite beds the ratio between free and clay-associated samples was 1.4. These data correlate with our previous results on VUV 145 nm irradiation of the same mixture (Gly + Phe) at an irradiation dose of 2.12×10^6 J m^-2. The same set of products was traced in both experimental cases, while the quantitative yield of products was one order of magnitude lower in the UV 254 nm case even though the irradiation dose was much higher, 2.64×10^7 J m^-2. It should be considered that at

  14. Variable recruitment fluidic artificial muscles: modeling and experiments

    NASA Astrophysics Data System (ADS)

    Bryant, Matthew; Meller, Michael A.; Garcia, Ephrahim

    2014-07-01

    We investigate taking advantage of the lightweight, compliant nature of fluidic artificial muscles to create variable recruitment actuators in the form of artificial muscle bundles. Several actuator elements at different diameter scales are packaged to act as a single actuator device. The actuator elements of the bundle can be connected to the fluidic control circuit so that different groups of actuator elements, much like individual muscle fibers, can be activated independently depending on the required force output and motion. This novel actuation concept allows us to save energy by effectively impedance matching the active size of the actuators on the fly based on the instantaneous required load. This design also allows a single bundled actuator to operate in substantially different force regimes, which could be valuable for robots that need to perform a wide variety of tasks and interact safely with humans. This paper proposes, models and analyzes the actuation efficiency of this actuator concept. The analysis shows that variable recruitment operation can create an actuator that reduces throttling valve losses to operate more efficiently over a broader range of its force-strain operating space. We also present preliminary results of the design, fabrication and experimental characterization of three such bioinspired variable recruitment actuator prototypes.

  15. Diurnal heating of ocean: Experiments and model calculations

    SciTech Connect

    Tsvetkov, A.V.; Kudryavtsev, Y.N.; Grodsky, S.A.

    1994-12-31

    Presented are the results of an investigation of diurnal heating of the ocean upper layer, in which absorption of insolation in the near-surface layer leads to the formation of the diurnal thermocline. Turbulence suppression below the heated layer means that the momentum input from the atmosphere does not propagate below the diurnal thermocline, in which the main current velocity shears are concentrated. The motion of this layer is determined by the wind stress and the Coriolis force. In the evening, as the heated layer deepens, it begins to decelerate rapidly. Groups of internal waves are recorded in the diurnal thermocline; shear instability of the thermocline may be the source of their generation. The experimental data are analyzed with a model based on the concept of diurnal thermocline shear instability. The combined analysis reveals features of the heated-layer dynamics over a wide range of wind velocities and Coriolis parameters, and also yields semi-empirical relationships suitable for practical use.

  16. Antibacterial kaolinite/urea/chlorhexidine nanocomposites: Experiment and molecular modelling

    NASA Astrophysics Data System (ADS)

    Holešová, Sylva; Valášková, Marta; Hlaváč, Dominik; Madejová, Jana; Samlíková, Magda; Tokarský, Jonáš; Pazdziora, Erich

    2014-06-01

    Clay minerals are commonly used materials in pharmaceutical production, both as inorganic carriers and as active agents. The purpose of this study is the preparation and characterization of clay/antibacterial drug hybrids which can be further included in drug delivery systems for the treatment of oral infections. Novel nanocomposites with antibacterial properties were successfully prepared by ion exchange reaction from two types of kaolinite/urea intercalates and chlorhexidine diacetate. Intercalation compounds of kaolinite were prepared by reaction with solid urea in the absence of solvents (dry method) as well as with urea aqueous solution (wet method). The antibacterial activity of two prepared samples against Enterococcus faecalis, Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa was evaluated by finding the minimum inhibitory concentration (MIC). Antibacterial studies of both samples showed the lowest MIC values (0.01%, w/v) after 1 day against E. faecalis, E. coli and S. aureus. A slightly worse antibacterial activity was observed against P. aeruginosa (MIC 0.12%, w/v) after 1 day. Since the samples showed very good antibacterial activity, especially after 1 day of action, they can be used as long-acting antibacterial materials. Prepared samples were characterized by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). The experimental data are supported by results of molecular modelling.

  17. Modeling Nonlinear Acoustic Standing Waves in Resonators: Theory and Experiments

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Li, Xiaofan; Finkbeiner, Joshua

    2004-01-01

    The overall goal of the cooperative research with NASA Glenn is to fundamentally understand, computationally model, and experimentally validate non-linear acoustic waves in enclosures with the ultimate goal of developing a non-contact acoustic seal. The longer term goal is to transition the Glenn acoustic seal innovation to a prototype sealing device. Lucas and coworkers are credited with pioneering work in Resonant Macrosonic Synthesis (RMS). Several Patents and publications have successfully illustrated the concept of Resonant Macrosonic Synthesis. To utilize this concept in practical application one needs to have an understanding of the details of the phenomenon and a predictive tool that can examine the waveforms produced within resonators of complex shapes. With appropriately shaped resonators one can produce un-shocked waveforms of high amplitude that would result in very high pressures in certain regions. Our goal is to control the waveforms and exploit the high pressures to produce an acoustic seal. Note that shock formation critically limits peak-to-peak pressure amplitudes and also causes excessive energy dissipation. Proper shaping of the resonator is thus critical to the use of this innovation.

  18. Acoustic coupling in capacitive microfabricated ultrasonic transducers: modeling and experiments.

    PubMed

    Caronti, Alessandro; Savoia, Alessandro; Caliano, Giosuè; Pappalardo, Massimo

    2005-12-01

    In the design of low-frequency transducer arrays for active sonar systems, the acoustic interactions that occur between the transducer elements have received much attention. Because of these interactions, the acoustic loading on each transducer depends on its position in the array, and the radiated acoustic power may vary considerably from one element to another. Capacitive microfabricated ultrasonic transducers (CMUT) are made of a two-dimensional array of metallized micromembranes, all electrically connected in parallel, and driven into flexural motion by the electrostatic force produced by an applied voltage. The mechanical impedance of these membranes is typically much lower than the acoustic impedance of water. In our investigations of acoustic coupling in CMUTs, interaction effects between the membranes in immersion were observed, similar to those reported in sonar arrays. Because CMUTs have many promising applications in the field of medical ultrasound imaging, understanding of cross-coupling mechanisms and acoustic interaction effects is especially important for reducing cross-talk between array elements, which can produce artifacts and degrade image quality. In this paper, we report a finite-element study of acoustic interactions in CMUTs and experimental results obtained by laser interferometry measurements. The good agreement found between finite element modeling (FEM) results and optical displacement measurements demonstrates that acoustic interactions through the liquid represent a major source of cross coupling in CMUTs.

  19. Inhalational anaesthesia with low fresh gas flow

    PubMed Central

    Hönemann, Christian; Hagemann, Olaf; Doll, Dietrich

    2013-01-01

    During inhalational anaesthesia, the use of a low fresh gas flow (0.35-1 L/min) has some important advantages. There are three areas of benefit: pulmonary (anaesthesia with low fresh gas flow improves the dynamics of the inhaled anaesthetic gas, increases mucociliary clearance, maintains body temperature and reduces water loss), economic (reduction of anaesthetic gas consumption resulting in significant savings of more than 75%) and ecological (reduction in the emission of nitrous oxide, an important ozone-depleting and heat-trapping greenhouse gas). Nevertheless, anaesthesia with high fresh gas flows of 2-6 L/min is still performed, a technique in which rebreathing is practically negligible. This special article describes the clinical use of conventional plenum vaporizers, connected to the fresh gas supply, to easily perform low (1 L/min), minimal (0.5 L/min) or metabolic flow anaesthesia (0.35 L/min) with conventional Primus Draeger® anaesthesia machines in routine clinical practice. PMID:24163447

  20. Storytime with Fresh Professor, Part One

    ERIC Educational Resources Information Center

    Miles, James

    2016-01-01

    James Miles writes that he wasn't always the Fresh Professor. At one point, he was just another starving actor, trying to make a living. But stories change over time, as do professional desires. This article presents Part One of his story.