Science.gov

Sample records for experiment model fresh

  1. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    SciTech Connect

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against the results of fresh water experiments. The condenser model represents the state of the art in direct-contact condensation heat exchange for OC-OTEC applications and is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and the predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.
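    The one-dimensional, steady-state condenser analysis described above can be sketched as a slice-by-slice energy balance along the packing. Every number below (volumetric heat-transfer coefficient, flow rate, temperatures) is an illustrative assumption, not a value from the report.

```python
def cocurrent_contact_temp(t_in, t_sat, ua_vol, area, height, m_dot,
                           cp=4186.0, steps=200):
    """March a 1-D steady-state energy balance along a cocurrent
    direct-contact condenser: each slice of packing transfers heat from
    the condensing steam (at t_sat) to the coolant stream."""
    dz = height / steps
    t = t_in
    for _ in range(steps):
        q = ua_vol * area * dz * (t_sat - t)   # heat absorbed in this slice (W)
        t += q / (m_dot * cp)                  # coolant temperature rise (K)
    return t

# Assumed operating point: 5 C coolant entering, 10 C saturation temperature.
t_out = cocurrent_contact_temp(t_in=278.0, t_sat=283.0, ua_vol=5.0e4,
                               area=1.0, height=1.0, m_dot=10.0)
effectiveness = (t_out - 278.0) / (283.0 - 278.0)
```

    In the cocurrent case the coolant relaxes exponentially toward the saturation temperature (effectiveness 1 - exp(-NTU)); a countercurrent model would march the two streams through the packing in opposite directions.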

  2. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    NASA Astrophysics Data System (ADS)

    Bharathan, D.; Parsons, B. K.; Althof, J. A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against the results of fresh water experiments. The condenser model represents the state of the art in direct-contact condensation heat exchange for OC-OTEC applications and is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and the predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations.

  3. Global modeling of fresh surface water temperature

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Eikelboom, T.; van Vliet, M. T.; Van Beek, L. P.

    2011-12-01

    Temperature determines a range of physical properties of water and the solubility of oxygen and other gases, and acts as a strong control on fresh water biogeochemistry, influencing chemical reaction rates, phytoplankton and zooplankton composition, and the presence or absence of pathogens. Thus, in freshwater ecosystems the thermal regime affects the geographical distribution of aquatic species through their growth and metabolism, their tolerance to parasites, diseases and pollution, and their life history. Compared to statistical approaches, physically-based models of surface water temperature have the advantage of being robust in the light of changes in flow regime, river morphology, radiation balance and upstream hydrology. Such models are therefore better suited for projecting the effects of global change on water temperature. Until now, physically-based models have only been applied to well-defined fresh water bodies of limited size (e.g., lakes or stream segments), where the numerous parameters can be measured or otherwise established, whereas attempts to model water temperature over larger scales have thus far been limited to regression-type models. Here, we present a first attempt at a physically-based model of global fresh surface water temperature. The model adds a surface water energy balance to river discharge modelled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by short- and long-wave radiation and by sensible and latent heat fluxes. Also included are ice formation and its effects on heat storage and river hydraulics. We used the coupled surface water and energy balance model to simulate global fresh surface water temperature at daily time steps on a 0.5x0.5 degree grid for the period 1970-2000. 
Meteorological forcing was obtained from the CRU data set, downscaled to daily values with ECMWF
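    The kind of surface water energy balance described above can be illustrated with a single lumped daily update. The optical constants and the combined turbulent-exchange coefficient below are assumptions for the sketch, not PCR-GLOBWB values, and the sketch omits the advection and ice terms of the full model.

```python
def step_water_temp(t_w, sw_down, lw_down, t_air, depth,
                    dt=86400.0, k_turb=20.0):
    """One explicit daily update (K) of a lumped water-column heat balance:
    absorbed shortwave plus net longwave, minus linearized sensible and
    latent exchange with the overlying air."""
    rho_w, cp_w, sigma = 1000.0, 4186.0, 5.67e-8
    albedo, emissivity = 0.06, 0.97
    flux = ((1.0 - albedo) * sw_down                      # absorbed shortwave
            + emissivity * (lw_down - sigma * t_w ** 4)   # net longwave
            - k_turb * (t_w - t_air))                     # turbulent exchange (W/m^2)
    return t_w + flux * dt / (rho_w * cp_w * depth)

# A cold 5 m water column under assumed mild forcing warms toward equilibrium.
t = 278.0
for _ in range(365):
    t = step_water_temp(t, sw_down=200.0, lw_down=300.0, t_air=288.0, depth=5.0)
```

    With these assumed forcings the column settles near its radiative-turbulent equilibrium within a few weeks of simulated time.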

  4. Assessment of the recycling potential of fresh concrete waste using a factorial design of experiments.

    PubMed

    Correia, S L; Souza, F L; Dienstmann, G; Segadães, A M

    2009-11-01

    Recycling of industrial wastes and by-products can help reduce the cost of waste treatment prior to disposal and eventually preserve natural resources and energy. To assess the recycling potential of a given waste, it is important to select a tool capable of giving clear indications either way with minimal time and effort, as is the case when the system properties are modelled using the results of a statistical design of experiments. In this work, the aggregate reclaimed from the mud that results from the washout and cleaning of fresh concrete mixer trucks (fresh concrete waste, FCW) was recycled into new concrete with various water/cement ratios, as a replacement for natural fine aggregates. A 3² factorial design of experiments was used to model the fresh concrete consistency index and the hardened concrete water absorption and 7- and 28-day compressive strengths as functions of FCW content and water/cement ratio, and the resulting regression equations and contour plots were validated with confirmation experiments. The results showed that fresh concrete workability worsened with increasing FCW content, but the water absorption (5-10 wt.%), 7-day compressive strength (26-36 MPa) and 28-day compressive strength (32-44 MPa) remained within the specified ranges, demonstrating that the aggregate reclaimed from FCW can be recycled into new concrete mixtures with lower natural aggregate content.
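    The response-surface step of a factorial design can be sketched as an ordinary least-squares fit of a quadratic model over the two factors. The 3x3 data grid below is invented for illustration (chosen to sit inside the strength range quoted above); it is not the paper's data.

```python
import numpy as np

# Hypothetical 3x3 factorial: FCW replacement (%) and water/cement ratio
# vs 28-day compressive strength (MPa).
fcw = np.array([0, 0, 0, 25, 25, 25, 50, 50, 50], dtype=float)
wc = np.array([0.45, 0.55, 0.65] * 3)
y = np.array([44, 40, 36, 42, 38, 34, 40, 36, 32], dtype=float)

# Full quadratic response surface:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(fcw), fcw, wc, fcw**2, wc**2, fcw * wc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(x1, x2):
    """Predicted strength (MPa) at FCW content x1 (%) and w/c ratio x2."""
    return float(beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2]))
```

    Contour plots of `predict` over the factor ranges give the kind of validated response surface the abstract describes.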

  5. Osmotic Power: A Fresh Look at an Old Experiment

    ERIC Educational Resources Information Center

    Dugdale, Pam

    2014-01-01

    Electricity from osmotic pressure might seem a far-fetched idea but this article describes a prototype in Norway where the osmotic pressure generated between salt and fresh water drives a turbine. This idea was applied in a student investigation, where they were tasked with researching which alternative materials could be used for the…
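    The pressure available from a salt/fresh gradient can be estimated with the van 't Hoff relation. The seawater approximation below (0.6 M NaCl, full dissociation) is a textbook assumption, not a figure from the article.

```python
def osmotic_pressure_pa(molarity_mol_per_l, temp_k, vant_hoff_i=2.0):
    """Van 't Hoff estimate pi = i * c * R * T for a dilute electrolyte."""
    R = 8.314                          # gas constant, J/(mol K)
    c = molarity_mol_per_l * 1000.0    # convert mol/L to mol/m^3
    return vant_hoff_i * c * R * temp_k

pi = osmotic_pressure_pa(0.6, 293.15)   # ~0.6 M NaCl approximates seawater
head_m = pi / (1000.0 * 9.81)           # equivalent fresh-water column height
```

    The result, roughly 29 bar or a water column of almost 300 m, is why a membrane separating salt and fresh water can drive a turbine at all.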

  6. Desorption isotherms for fresh beef: an experimental and modeling approach.

    PubMed

    Ahmat, Tom; Bruneau, Denis; Kuitche, Alexis; Waste Aregba, Aworou

    2014-04-01

    Desorption isotherms for fresh beef were determined at 30, 40 and 50°C by the static gravimetric method. The resulting isotherms exhibited a type II sigmoid shape. The BET, GAB and Halsey models were used to fit these experimental data; the GAB model was the most accurate at all temperatures and all levels of water activity, followed by the BET and Halsey models. The temperature dependence of the GAB constants was estimated. The isosteric heat of desorption and its evolution with moisture content were calculated using the Clausius-Clapeyron equation. The monolayer moisture content was determined using the GAB model: it decreased as the temperature increased. The density of bound water, the number of adsorption sites, the sorption surface area and the percentage of bound water were calculated using the Caurie equation: all these quantities decreased as the temperature increased. The Kelvin and Halsey equations were used to calculate pore size, which increased with increasing moisture content and sorption temperature.
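    Fitting the GAB model to sorption data can be sketched as below. The "measurements" are generated from assumed GAB constants purely to demonstrate the fitting step; they are not the beef data, and a real analysis would use a proper optimizer rather than this coarse grid search.

```python
import numpy as np

def gab(aw, m0, c, k):
    """GAB isotherm: equilibrium moisture content (dry basis) at water activity aw."""
    return m0 * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

# Synthetic "measurements" from assumed constants m0=0.06, C=12, K=0.85.
aw = np.linspace(0.1, 0.9, 17)
m_obs = gab(aw, 0.06, 12.0, 0.85)

# Coarse grid search for the best-fitting constants (sum of squared errors).
best = (np.inf, None, None, None)
for m0 in np.linspace(0.04, 0.08, 41):
    for c in np.linspace(8.0, 16.0, 41):
        for k in np.linspace(0.75, 0.95, 41):
            sse = float(np.sum((gab(aw, m0, c, k) - m_obs) ** 2))
            if sse < best[0]:
                best = (sse, m0, c, k)
sse, m0_fit, c_fit, k_fit = best
```

    The fitted monolayer moisture content m0 is the quantity whose decrease with temperature the abstract reports.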

  7. Methodology for modeling the migration of EOR chemicals in fresh water aquifers

    SciTech Connect

    Royce, B.; Garrell, M.; Kahn, A.; Kaplan, E.

    1983-11-01

    The objective of this study is to develop a method for modeling the transport of EOR chemicals accidentally released to fresh water aquifers. Six examples involving hypothetical releases of EOR chemicals at surrogate aquifer sites are used to illustrate the application of this method. Typical injection rates and concentrations of EOR chemicals used at current or proposed projects were obtained from the literature and used as the basis for the hypothetical accidents. Four surrogate aquifer sites were selected from States where chemical flooding methods are employed. Each site is based on real hydrological data but presented in such a way as to avoid identification with existing EOR fields. A significant amount of data is required to model ground water systems, and the hypothetical examples help to indicate the type of data needed. The computer results illustrate that high levels of contamination are possible for many years and that, at these levels, contaminants could migrate beyond the EOR field. There are a variety of pathways through which EOR chemicals could be accidentally released to fresh water aquifers during normal EOR operations. There is insufficient EOR experience to date, however, to forecast risks accurately. 119 references, 10 figures, 9 tables.
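    The kind of aquifer-transport calculation such a method relies on can be sketched with the one-dimensional Ogata-Banks solution for a continuous source; the seepage velocity and dispersion values below are generic assumptions, not data from the surrogate sites.

```python
import math

def concentration_ratio(x_m, t_s, velocity_m_s, dispersion_m2_s):
    """C/C0 at distance x and time t for 1-D advection-dispersion from a
    continuous source at x=0 (leading term of the Ogata-Banks solution)."""
    arg = (x_m - velocity_m_s * t_s) / (2.0 * math.sqrt(dispersion_m2_s * t_s))
    return 0.5 * math.erfc(arg)

# Assumed seepage velocity 0.1 m/day and dispersion 0.5 m^2/day:
day = 86400.0
ratio = concentration_ratio(100.0, 10.0 * 365.0 * day, 0.1 / day, 0.5 / day)
```

    Under these assumptions a point 100 m down-gradient sees essentially the full source concentration after ten years, which illustrates both the persistence and the off-site migration noted above.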

  8. Analysis of fresh fuel critical experiments appropriate for burnup credit validation

    SciTech Connect

    DeHart, M.D.; Bowman, S.M.

    1995-10-01

    The ANSI/ANS-8.1 standard requires that calculational methods used in determining criticality safety limits for applications outside reactors be validated by comparison with appropriate critical experiments. This report provides a detailed description of 34 fresh fuel critical experiments and their analyses using the SCALE-4.2 code system and the 27-group ENDF/B-IV cross-section library. The 34 critical experiments were selected based on geometry, material, and neutron-interaction characteristics that are applicable to a transportation cask loaded with pressurized-water-reactor spent fuel. These 34 experiments are a representative subset of a much larger database of low-enriched uranium and mixed-oxide critical experiments. A statistical approach is described and used to obtain an estimate of the bias and uncertainty in the calculational methods and to predict a confidence limit for a calculated neutron multiplication factor. The SCALE-4.2 results for a superset of approximately 100 criticals are included in the uncertainty analyses, but descriptions of the individual criticals are not included.
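    The statistical approach mentioned above, estimating a bias from calculations of known criticals and folding in their spread, can be sketched as follows. The k-eff values are invented, and the one-sided 95/95 tolerance factor for n = 10 is stated as an assumption; a real validation would take it from standard tolerance-limit tables.

```python
import statistics

# Hypothetical calculated k_eff values for benchmarks that are exactly
# critical (true k = 1.0); numbers invented for illustration.
k_calc = [0.9968, 1.0021, 0.9985, 0.9942, 1.0010,
          0.9977, 0.9990, 0.9955, 1.0003, 0.9961]

mean_k = statistics.mean(k_calc)
bias = mean_k - 1.0                  # negative: the method slightly underpredicts
s = statistics.stdev(k_calc)         # sample standard deviation

K_FACTOR = 2.911                     # assumed one-sided 95/95 factor for n = 10
lower_limit = mean_k - K_FACTOR * s  # confidence limit on a calculated k
```

    A safety criterion would then require a cask calculation plus its bias and uncertainty allowances to stay below such a limit.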

  9. Numerical Modelling and Ejecta Distribution Analysis of a Martian Fresh Crater

    NASA Astrophysics Data System (ADS)

    Lucchetti, A.; Cremonese, G.; Cambianica, P.; Daubar, I.; McEwen, A. S.; Re, C.

    2015-12-01

    Images taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter reveal fresh craters on Mars that are known to be recent because they are constrained by before-and-after images (Daubar et al., 2013). In particular, on Nov. 19, 2013, HiRISE image ESP_034285_1835 captured a 25 m diameter fresh crater located at 3.7° N, 53.4° E. This impact occurred between July 2010 and May 2012, as constrained by Context Camera (CTX) images. Because the terrain where the crater formed is dusty, the fresh crater appears blue in the enhanced color of the HiRISE image, due to removal of the reddish dust in that area. We analyze this crater using the iSALE shock physics code (Amsden et al., 1980; Collins et al., 2004; Ivanov et al., 1997; Melosh et al., 1992; Wünnemann et al., 2006) to model the formation of this impact structure, which is ~25 m in diameter and ~2.5-3 m in depth; these values are obtained from the DTM profile we have generated. We model the Martian surface considering different target compositions, such as regolith and fractured basalt rock, and we base our simulations on a basalt projectile with a porosity of 10% (derived from the average of the meteorite types proposed by Britt et al., 2002) that hits the Martian surface with an initial velocity of 7 km/s (Le Feuvre & Wieczorek, 2011) at an impact angle of 90°. The projectile size is around 1 m, estimated from the comparison between the DTM profile and the profiles obtained by numerical modelling. The primary objective of this analysis is a detailed study of the ejecta: we will track the ejecta in the simulation and compare them to the ejecta distribution computed from the image (the ejecta reached a distance of more than 15 km). By matching the simulated ejecta against their observed distribution, we will be able to assess the fidelity of the simulation and also place constraints on the target material.

  10. Modeling fresh water lens damage and recovery on atolls after storm-wave washover.

    PubMed

    Chui, Ting Fong May; Terry, James P

    2012-01-01

    The principal natural source of fresh water on scattered coral atolls throughout the tropical Pacific Ocean is thin unconfined groundwater lenses within islet substrates. Although there are many threats to the viability of atoll fresh water lenses, salinization caused by large storm waves washing over individual atoll islets is poorly understood. In this study, a mathematical modeling approach is used to examine the immediate responses, longer-term behavior, and subsequent (partial) recovery of a Pacific atoll fresh water lens after saline damage caused by cyclone-generated wave washover under different scenarios. Important findings include: (1) the saline plume formed by a washover event mostly migrates downward first through the top coral sand and gravel substrate, but then exits the aquifer to the ocean laterally through the more permeable basement limestone; (2) a lower water table position before the washover event, rather than a longer duration of storm washover, causes more severe damage to the fresh water lens; (3) relatively fresher water can possibly be found as a preserved horizon in the deeper part of an aquifer after disturbance, especially if the fresh water lens extends into the limestone under normal conditions; (4) post-cyclone accumulation of sea water in the central depression (swamp) of an atoll islet prolongs the later stage of fresh water lens recovery.
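    The sensitivity of the lens to the pre-storm water table (finding 2 above) can be seen even in the static Ghyben-Herzberg approximation, which is far simpler than the transient variable-density model used in the study.

```python
def ghyben_herzberg_depth(head_above_msl_m, rho_fresh=1000.0, rho_salt=1025.0):
    """Depth of the fresh/salt interface below sea level (m):
    z = h * rho_f / (rho_s - rho_f), about 40x the water-table head
    for typical fresh and sea water densities."""
    return head_above_msl_m * rho_fresh / (rho_salt - rho_fresh)

# Halving the water-table head halves the lens thickness below sea level.
deep = ghyben_herzberg_depth(0.5)     # 0.5 m head -> ~20 m of fresh water
shallow = ghyben_herzberg_depth(0.25) # 0.25 m head -> ~10 m
```

    A thinner pre-storm lens leaves less fresh water to dilute the washover brine, consistent with the modeled result that a low water table worsens the damage.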

  11. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]), held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  12. Evaluation of hands-on seminar for reduced port surgery using fresh porcine cadaver model

    PubMed Central

    Poudel, Saseem; Kurashima, Yo; Shichinohe, Toshiaki; Kitashiro, Shuji; Kanehira, Eiji; Hirano, Satoshi

    2016-01-01

    BACKGROUND: The use of various biological and non-biological simulators is playing an important role in training modern surgeons in laparoscopic skills. However, there have been few reports of the use of a fresh porcine cadaver model for training in laparoscopic surgical skills. The purpose of this study was to report on a surgical training seminar on reduced port surgery using a fresh porcine cadaver model and to assess its feasibility and efficacy. MATERIALS AND METHODS: The hands-on seminar had 10 fresh porcine cadaver models and two dry boxes. Each table was provided with a unique access port and devices used in reduced port surgery. Each group of 2 surgeons spent 30 min at each station, performing different tasks assisted by the instructor. Questionnaire surveys were conducted immediately after the seminar and again 8 months later. RESULTS: All the tasks were completed as planned. Both instructors and participants were highly satisfied with the seminar, although there was a concern about the time allocated for it. In the post-seminar survey, the participants reported that the number of reduced port surgeries they performed had increased. CONCLUSION: The fresh porcine cadaver model requires no special animal facility and can be used for training in laparoscopic procedures. PMID:27279391

  13. Fresh water balance of the Gulf Stream system in a regional model study

    NASA Astrophysics Data System (ADS)

    Gerdes, R.; Biastoch, A.; Redler, R.

    We investigate the dependence of surface fresh water fluxes in the Gulf Stream and North Atlantic Current (NAC) area on the position of the stream axis which is not well represented in most ocean models. To correct this shortcoming, strong unrealistic surface fresh water fluxes have to be applied that lead to an incorrect salt balance of the current system. The unrealistic surface fluxes required by the oceanic component may force flux adjustments and may cause fictitious long-term variability in coupled climate models. To identify the important points in the correct representation of the salt balance of the Gulf Stream a regional model of the northwestern part of the subtropical gyre has been set up. Sensitivity studies are made where the westward flow north of the Gulf Stream and its properties are varied. Increasing westward volume transport leads to a southward migration of the Gulf Stream separation point along the American coast. The salinity of the inflow is essential for realistic surface fresh water fluxes and the water mass distribution. The subpolar-subtropical connection is important in two ways: The deep dense flow from the deep water mass formation areas sets up the cyclonic circulation cell north of the Gulf Stream. The surface and mid depth flow of fresh water collected at high northern latitudes is mixed into the Gulf Stream and compensates for the net evaporation at the surface.

  14. Hydrochemical Impacts of CO2 Leakage on Fresh Groundwater: a Field Scale Experiment

    NASA Astrophysics Data System (ADS)

    Lions, J.; Gal, F.; Gombert, P.; Lafortune, S.; Darmoul, Y.; Prevot, F.; Grellier, S.; Squarcioni, P.

    2013-12-01

    One of the questions related to the emerging technology of carbon geological storage concerns the risk of CO2 migration beyond the geological storage formation. In the event of leakage toward the surface, the CO2 might affect resources in neighbouring formations (geothermal or mineral resources, groundwater) or even represent a hazard for human activities at the surface or in the subsurface. In view of preserving groundwater resources, mainly for human consumption, this project studies the potential hydrogeochemical impacts of CO2 leakage on fresh groundwater quality. One of the objectives is to characterize the bio-geochemical mechanisms that may impair the quality of fresh groundwater resources in case of CO2 leakage. To reach these objectives, the project uses a field experiment to characterize in situ the mechanisms that could impact water quality and the CO2-water-rock interactions, and to improve the monitoring methodology through a controlled CO2 leak into a shallow aquifer. The tests were carried out at an experimental site in the chalk formation of the Paris Basin. The site is equipped with appropriate instrumentation and was previously characterized (8 piezometers, 25 m deep, and 4 gas piezometers, 11 m deep). The injection test was preceded by 6 months of monitoring in order to characterize the hydrodynamic and geochemical baselines of the site (groundwater, vadose zone and soil). Leakage into groundwater is simulated via the injection of a small quantity of food-grade CO2 (~20 kg dissolved in 10 m3 of water) in the injection well at a depth of about 20 m. A plume of dissolved CO2 forms and moves down-gradient with the groundwater flow, probably partly degassing toward the surface. During the injection test, hydrochemical monitoring of the aquifer is done in situ and by sampling. The parameters monitored in the groundwater are the piezometric head, temperature, pH and electrical conductivity. Analysis on water
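    The first-order chemistry behind the expected pH drop can be sketched from carbonic acid equilibria. The closed form below holds for pure water and deliberately ignores the carbonate buffering of the chalk, which in the real experiment moderates the acidification; the equilibrium constants are standard 25 °C textbook values.

```python
import math

def ph_co2_pure_water(p_co2_atm, kh=3.3e-2, k1=4.45e-7):
    """pH of pure water equilibrated with CO2: [H+] ~ sqrt(K1 * KH * pCO2),
    with KH the Henry constant (mol/L/atm) and K1 the first dissociation
    constant of carbonic acid."""
    h_plus = math.sqrt(k1 * kh * p_co2_atm)
    return -math.log10(h_plus)

ph = ph_co2_pure_water(1.0)   # under 1 atm CO2, before any mineral buffering
```

    The unbuffered value of roughly 3.9 bounds the acidification; dissolution of the chalk pulls the observed pH well above it.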

  15. Measured and Modeled Humidification Factors of Fresh Smoke Particles From Biomass Burning: Role of Inorganic Constituents

    SciTech Connect

    Hand, Jenny L.; Day, Derek E.; McMeeking, Gavin M.; Levin, Ezra; Carrico, Christian M.; Kreidenweis, Sonia M.; Malm, William C.; Laskin, Alexander; Desyaterik, Yury

    2010-07-09

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels showed large variability in the humidification factor (f(RH) = bsp(RH)/bsp(dry)). Values of f(RH) at RH=85-90% ranged from 1.02 to 2.15 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic and organic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 85-90% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.
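    The humidification factor itself is a simple ratio of measured scattering coefficients, often condensed into a one-parameter gamma fit. The parameterization and the scattering values below are common-practice assumptions for illustration, not numbers from this paper.

```python
import math

def f_rh(bsp_humid, bsp_dry):
    """Humidification factor f(RH) = bsp(RH) / bsp(dry)."""
    return bsp_humid / bsp_dry

def gamma_from_frh(frh, rh, rh_dry=0.2):
    """Solve f(RH) = ((1 - RH_dry) / (1 - RH))**gamma for gamma,
    a common single-parameter description of hygroscopic growth."""
    return math.log(frh) / math.log((1.0 - rh_dry) / (1.0 - rh))

# Hypothetical nephelometer pair: 43 Mm^-1 humidified vs 20 Mm^-1 dry at RH=85%.
frh = f_rh(43.0, 20.0)            # 2.15, the top of the range reported above
gamma = gamma_from_frh(frh, 0.85)
```

    Fuels with larger inorganic-salt-to-carbon ratios would show larger f(RH), and hence larger gamma, in this parameterization.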

  16. Measured and modeled humidification factors of fresh smoke particles from biomass burning: role of inorganic constituents

    NASA Astrophysics Data System (ADS)

    Hand, J. L.; Day, D. E.; McMeeking, G. M.; Levin, E. J. T.; Carrico, C. M.; Kreidenweis, S. M.; Malm, W. C.; Laskin, A.; Desyaterik, Y.

    2010-02-01

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels showed large variability in the humidification factor (f(RH)=bsp(RH)/bsp(dry)). Values of f(RH) at RH=85-90% ranged from 1.02 to 2.15 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic and organic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 85-90% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.

  17. Measured and modeled humidification factors of fresh smoke particles from biomass burning: role of inorganic constituents

    NASA Astrophysics Data System (ADS)

    Hand, J. L.; Day, D. E.; McMeeking, G. M.; Levin, E. J. T.; Carrico, C. M.; Kreidenweis, S. M.; Malm, W. C.; Laskin, A.; Desyaterik, Y.

    2010-07-01

    During the 2006 FLAME study (Fire Laboratory at Missoula Experiment), laboratory burns of biomass fuels were performed to investigate the physico-chemical, optical, and hygroscopic properties of fresh biomass smoke. As part of the experiment, two nephelometers simultaneously measured dry and humidified light scattering coefficients (bsp(dry) and bsp(RH), respectively) in order to explore the role of relative humidity (RH) on the optical properties of biomass smoke aerosols. Results from burns of several biomass fuels from the west and southeast United States showed large variability in the humidification factor (f(RH)=bsp(RH)/bsp(dry)). Values of f(RH) at RH=80-85% ranged from 0.99 to 1.81 depending on fuel type. We incorporated measured chemical composition and size distribution data to model the smoke hygroscopic growth to investigate the role of inorganic compounds on water uptake for these aerosols. By assuming only inorganic constituents were hygroscopic, we were able to model the water uptake within experimental uncertainty, suggesting that inorganic species were responsible for most of the hygroscopic growth. In addition, humidification factors at 80-85% RH increased for smoke with increasing inorganic salt to carbon ratios. Particle morphology as observed from scanning electron microscopy revealed that samples of hygroscopic particles contained soot chains either internally or externally mixed with inorganic potassium salts, while samples of weak to non-hygroscopic particles were dominated by soot and organic constituents. This study provides further understanding of the compounds responsible for water uptake by young biomass smoke, and is important for accurately assessing the role of smoke in climate change studies and visibility regulatory efforts.

  18. Philip Morris toxicological experiments with fresh sidestream smoke: more toxic than mainstream smoke

    PubMed Central

    Schick, S; Glantz, S

    2005-01-01

    Background: Exposure to secondhand smoke causes lung cancer; however, there are few data in the open literature on the in vivo toxicology of fresh sidestream cigarette smoke to guide the debate about smoke-free workplaces and public places. Objective: To investigate the unpublished in vivo research on sidestream cigarette smoke done by Philip Morris Tobacco Company during the 1980s at its Institut für Biologische Forschung (INBIFO). Methods: Analysis of internal tobacco industry documents now available at the University of California San Francisco Legacy Tobacco Documents Library and other websites. Results: Inhaled fresh sidestream cigarette smoke is approximately four times more toxic per gram total particulate matter (TPM) than mainstream cigarette smoke. Sidestream condensate is approximately three times more toxic per gram and two to six times more tumourigenic per gram than mainstream condensate by dermal application. The gas/vapour phase of sidestream smoke is responsible for most of the sensory irritation and respiratory tract epithelium damage. Fresh sidestream smoke inhibits normal weight gain in developing animals. In a 21-day exposure, fresh sidestream smoke can cause damage to the respiratory epithelium at concentrations of 2 µg/l TPM, and the damage increases with longer exposures. The toxicity of whole sidestream smoke is higher than the sum of the toxicities of its major constituents. Conclusion: Fresh sidestream smoke at concentrations commonly encountered indoors is well above the 2 µg/m3 reference concentration (the level below which acute effects are unlikely to occur) calculated from the results of the INBIFO studies. Smoke-free public places and workplaces are the only practical way to protect the public health from the toxins in sidestream smoke. PMID:16319363

  19. The novel laparoscopic training 3D model in urology with surgical anatomic remarks: Fresh-frozen cadaveric tissue

    PubMed Central

    Huri, Emre; Ezer, Mehmet; Chan, Eddie

    2016-01-01

    Laparoscopic surgery is routinely used to treat many urological conditions, and it is the gold standard treatment option for surgeries such as radical nephrectomy. Because of the difficulty of learning laparoscopy, training should start outside the operating room. The aim of this review is to show the value of the human cadaveric model for laparoscopic training and to present our experience in this area. Fresh-frozen cadaveric models, dry labs, animal models and computer-based simulators are the most commonly used platforms for laparoscopic training. Cadaveric models mimic the live setting better than animal models do, and they are the best way to demonstrate important anatomic landmarks such as the prostate, bladder, and pelvic lymph node templates. However, cadaveric training is expensive and should be used by multiple disciplines for higher efficiency. The laparoscopic cadaveric training starts with didactic lectures introducing pelvic surgical anatomy, followed by hands-on dissection. A typical pelvic dissection part can be completed in 6 hours. Surgical robots and some laparoscopy platforms are equipped with 3-D vision, and in recent years we have used a stereoscopic laparoscopy system for training purposes to show exact anatomic landmarks. Cadavers are removed from their containers 3 to 5 days prior to the training session to allow enough time for thawing. Intracorporeal suturing is an important part of laparoscopic training; we believe that suturing must be practiced in the dry lab, which is significantly cheaper than cadaveric models, and that the cadaveric training model should focus on anatomic dissection instead. In conclusion, the fresh-frozen cadaveric sample is one of the best 3D simulation models for laparoscopic training purposes. The major aim of cadaveric training is not only mimicking the surgical technique but also teaching true anatomy. 
Lack of availability and higher

  20. Translational intracerebral hemorrhage: a need for transparent descriptions of fresh tissue sampling and preclinical model quality.

    PubMed

    Chang, Che-Feng; Cai, Li; Wang, Jian

    2015-10-01

    For years, strategies have been proposed to improve translational success in stroke research by improving the quality of animal studies. However, articles that report preclinical intracerebral hemorrhage (ICH) studies continue to lack adequate qualitative and quantitative descriptions of fresh brain tissue collection. They also tend to lack transparency about animal model quality. We conducted a systematic review of 82 ICH research articles to determine the level of detail reported for brain tissue collection. We found that only 24 (29 %) reported the volume, weight, or thickness of tissue collected and a specific description of the anatomical location. Thus, up to 71 % of preclinical ICH research articles did not properly define how fresh specimens were collected for biochemical measurements. Such omissions may impede reproducibility of results between laboratories. Although existing criteria have improved the quality of preclinical stroke studies, ICH researchers need to identify specific guidelines and strategies to avoid pitfalls, minimize bias, and increase reproducibility in this field.

  1. MODELING ASSUMPTIONS FOR THE ADVANCED TEST REACTOR FRESH FUEL SHIPPING CONTAINER

    SciTech Connect

    Rick J. Migliore

    2009-09-01

    The Advanced Test Reactor Fresh Fuel Shipping Container (ATR FFSC) is currently licensed per 10 CFR 71 to transport a fresh fuel element for either the Advanced Test Reactor, the University of Missouri Research Reactor (MURR), or the Massachusetts Institute of Technology Research Reactor (MITR-II). During the licensing process, the Nuclear Regulatory Commission (NRC) raised a number of issues relating to the criticality analysis, namely (1) lack of a tolerance study on the fuel and packaging, (2) moderation conditions during normal conditions of transport (NCT), (3) treatment of minor hydrogenous packaging materials, and (4) treatment of potential fuel damage under hypothetical accident conditions (HAC). These concerns were adequately addressed by modifying the criticality analysis. A tolerance study was added for both the packaging and fuel elements, full moderation was included in the NCT models, minor hydrogenous packaging materials were included, and fuel element damage was considered for the MURR and MITR-II fuel types.

  2. Translational Intracerebral Hemorrhage: A Need for Transparent Descriptions of Fresh Tissue Sampling and Preclinical Model Quality

    PubMed Central

    Chang, Che-Feng; Cai, Li; Wang, Jian

    2015-01-01

    For years, strategies have been proposed to improve translational success in stroke research by improving the quality of animal studies. However, articles that report preclinical intracerebral hemorrhage (ICH) studies continue to lack adequate qualitative and quantitative descriptions of fresh brain tissue collection. They also tend to lack transparency about animal model quality. We conducted a systematic review of 82 ICH research articles to determine the level of detail reported for brain tissue collection. We found that only 24 (29%) reported the volume, weight, or thickness of tissue collected and a specific description of the anatomical location. Thus, up to 71% of preclinical ICH research articles did not properly define how fresh specimens were collected for biochemical measurements. Such omissions may impede reproducibility of results between laboratories. Although existing criteria have improved the quality of preclinical stroke studies, ICH researchers need to identify specific guidelines and strategies to avoid pitfalls, minimize bias, and increase reproducibility in this field. PMID:25907620

  3. Modelling the growth of Listeria monocytogenes in fresh green coconut (Cocos nucifera L.) water.

    PubMed

    Walter, Eduardo H M; Kabuki, Dirce Y; Esper, Luciana M R; Sant'Ana, Anderson S; Kuaye, Arnaldo Y

    2009-09-01

    The behaviour of Listeria monocytogenes in fresh coconut water stored at 4 °C, 10 °C and 35 °C was studied. The coconut water was aseptically extracted from green coconuts (Cocos nucifera L.) and samples were inoculated in triplicate with a mixture of 5 strains of L. monocytogenes at a mean population of approximately 3 log(10) CFU/mL. The kinetic parameters of the bacteria were estimated from the Baranyi model and compared with predictions of the Pathogen Modelling Program so as to predict its behaviour in the beverage. The results demonstrated that fresh green coconut water is a beverage propitious for the survival and growth of L. monocytogenes and that refrigeration at 10 °C or 4 °C retarded, but did not inhibit, growth of this bacterium. Temperature abuse at 35 °C considerably reduced the lag times. The study shows that L. monocytogenes growth in fresh green coconut water is controlled for several days by storage at low temperature, mainly at 4 °C. Thus, at-risk populations should drink this product only directly from the coconut or, despite the sensory alterations, consume it pasteurized.
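    The Baranyi model mentioned above has a closed-form solution that can be sketched in a few lines. The implementation below uses the standard Baranyi-Roberts equations; the parameter values in the usage note are illustrative, not the ones fitted in the study.

    ```python
    import math

    def baranyi(t, y0, ymax, mu_max, h0):
        """Baranyi-Roberts primary growth model.

        t      : time (h)
        y0     : initial population (ln CFU/mL)
        ymax   : maximum population (ln CFU/mL)
        mu_max : maximum specific growth rate (1/h)
        h0     : dimensionless "work to be done" parameter (lag ~ h0/mu_max)
        """
        # Adjustment function A(t) introduces the lag phase
        a = t + (1.0 / mu_max) * math.log(
            math.exp(-mu_max * t) + math.exp(-h0) - math.exp(-mu_max * t - h0)
        )
        # Logistic-type damping enforces the stationary phase at ymax
        return y0 + mu_max * a - math.log(
            1.0 + (math.exp(mu_max * a) - 1.0) / math.exp(ymax - y0)
        )
    ```

    For example, `baranyi(0.0, 3.0, 9.0, 0.5, 2.0)` returns the initial level 3.0, and the curve approaches 9.0 at long times, with the lag controlled by `h0`.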

  4. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is a direct comparison between the various exact terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation depend upon the accuracy of all the modeled terms and constants, so it is sometimes difficult to find the cause of poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second-order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs best, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second-moment equations for a buoyant plume. Despite the tremendous amount of activity toward second-order closure modeling of turbulence, very little experimental information is available about the budgets of the second-moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, the pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors that might be present in such balances, the resulting information will be extremely useful for turbulence modelers. 
The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  5. Analysis of subcritical experiments using fresh and spent research reactor fuel assemblies

    NASA Astrophysics Data System (ADS)

    Zino, John Frederick

    1999-11-01

    This research investigated the concepts associated with crediting the burnup of spent nuclear fuel assemblies for the purposes of criticality safety. To accomplish this, a collaborative experimental research program was undertaken between Westinghouse, the University of Missouri Research Reactor (MURR) facility and Oak Ridge National Laboratory (ORNL). The purpose of the program was to characterize the subcritical behavior of a small array of fresh and spent MURR fuel assemblies using the 252Cf source-driven noise technique. An aluminum test rig was built which was capable of holding up to four highly enriched (93.15 wt.% 235U) MURR fuel assemblies in a 2 x 2 array. The rig was outfitted with one source and four detector drywells, which allowed researchers to perform active neutron noise measurements on the array of fuel assemblies. The 1-atmosphere 3He gas neutron detectors used to perform the measurements were quenched with CF4 gas to allow improved discrimination of the neutron signals in the very high gamma-ray fields associated with spent fuel (~8000 R/hr). In addition, the detector drywells were outfitted with 1″ lead collars to provide additional gamma-ray shielding from the spent fuel. Reactivity changes were induced in the subcritical lattice by replacing individual fresh assemblies (in a 4-assembly array) with spent assemblies of known maximum burnup (143 MWd). The absolute and relative measured reactivity changes were then compared to those predicted by three-dimensional Monte Carlo calculations. The purpose of these comparisons was to investigate the ability of modern transport theory depletion calculations to accurately simulate the reactivity effects of burnup in spent nuclear fuel. A total of seven subcritical measurements were performed at the MURR reactor facility on July 20th and 27th, 1998. These measurements generated several estimates of prompt neutron decay constants (alpha) and ratios of spectral densities through frequency correlations

  6. Laparoscopic training model using fresh human cadavers without the establishment of pneumoperitoneum

    PubMed Central

    Imakuma, Ernesto Sasaki; Ussami, Edson Yassushi; Meyer, Alberto

    2016-01-01

    BACKGROUND: Laparoscopy is a well-established alternative to open surgery for treating many diseases. Although laparoscopy has many advantages, it is also associated with disadvantages, such as slow learning curves and prolonged operation time. Fresh frozen cadavers may be an interesting resource for laparoscopic training, and many institutions have access to cadavers. One of the main obstacles to the use of cadavers as a training model is the difficulty in introducing a sufficient pneumoperitoneum to distend the abdominal wall and provide a proper working space. The purpose of this study was to describe a fresh human cadaver model for laparoscopic training without requiring a pneumoperitoneum. MATERIALS AND METHODS AND RESULTS: A fake abdominal wall device was developed to allow for laparoscopic training in cadavers without requiring a pneumoperitoneum. The device consists of a table-mounted retractor, two rail clamps, two independent frame arms, two adjustable handle-and-rotating features, and two frames of the abdominal wall. A handycam is fixed over a frame arm, positioned and connected through a USB connection to a television; a dissector, scissors and other laparoscopic instruments are positioned inside trocars. The laparoscopic procedure is thus simulated. CONCLUSION: Cadavers offer a very promising and useful model for laparoscopic training. We developed a fake abdominal wall device that solves the limitation of space when performing surgery on cadavers and removes the need to acquire more costly laparoscopic equipment. This model is easily accessible at institutions in developing countries, making it one of the most promising tools for teaching laparoscopy. PMID:27073318

  7. Shelf-life prediction models for ready-to-eat fresh cut salads: Testing in real cold chain.

    PubMed

    Tsironi, Theofania; Dermesonlouoglou, Efimia; Giannoglou, Marianna; Gogou, Eleni; Katsaros, George; Taoukis, Petros

    2017-01-02

    The aim of the study was to develop and test the applicability of predictive models for shelf-life estimation of ready-to-eat (RTE) fresh cut salads in realistic distribution temperature conditions in the food supply chain. A systematic kinetic study of quality loss of RTE mixed salad (lollo rosso lettuce-40%, lollo verde lettuce-45%, rocket-15%) packed under modified atmospheres (3% O2, 10% CO2, 87% N2) was conducted. Microbial population (total viable count, Pseudomonas spp., lactic acid bacteria), vitamin C, colour and texture were the measured quality parameters. Kinetic models for these indices were developed to determine the quality loss and calculate product remaining shelf-life (SLR). Storage experiments were conducted at isothermal (2.5-15°C) and non-isothermal temperature conditions (Teff = 7.8°C, defined as the constant temperature that results in the same quality value as the variable temperature distribution) for validation purposes. Pseudomonas dominated spoilage, followed by browning and chemical changes. The end of shelf-life correlated with a Pseudomonas spp. level of 8 log(cfu/g), and 20% loss of the initial vitamin C content. The effect of temperature on these quality parameters was expressed by the Arrhenius equation; the activation energy (Ea) value was 69.1 and 122.6 kJ/mol for Pseudomonas spp. growth and vitamin C loss rates, respectively. Shelf-life prediction models were also validated in real cold chain conditions (including the stages of transport to and storage at retail distribution center, transport to and display at 7 retail stores, transport to and storage in domestic refrigerators). The quality level and SLR estimated after 2-3 days of domestic storage (time of consumption) ranged between 1 and 8 days at 4°C and was predicted within satisfactory statistical error by the kinetic models. Teff in the cold chain ranged between 3.7 and 8.3°C. Using the validated models, SLR of RTE fresh cut salad can be estimated at any point of the cold chain
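    The Arrhenius machinery described above, including the effective-temperature (Teff) concept, can be sketched as follows. The activation energy of 69.1 kJ/mol is taken from the abstract (Pseudomonas spp. growth); the reference rate, reference temperature and temperature profile are illustrative assumptions.

    ```python
    import math

    R = 8.314  # universal gas constant, J/(mol*K)

    def arrhenius_rate(k_ref, ea, t_celsius, t_ref_celsius=4.0):
        """Rate constant at temperature T, scaled from a reference temperature."""
        t, t_ref = t_celsius + 273.15, t_ref_celsius + 273.15
        return k_ref * math.exp(-(ea / R) * (1.0 / t - 1.0 / t_ref))

    def effective_temperature(profile, ea, t_ref_celsius=4.0, k_ref=1.0):
        """Constant temperature giving the same mean quality-loss rate as a
        fluctuating profile (list of temperatures at equal time steps)."""
        k_mean = sum(arrhenius_rate(k_ref, ea, t, t_ref_celsius) for t in profile) / len(profile)
        # Invert k(T) = k_ref * exp(-Ea/R * (1/T - 1/Tref)) for T
        inv_t = 1.0 / (t_ref_celsius + 273.15) - (R / ea) * math.log(k_mean / k_ref)
        return 1.0 / inv_t - 273.15
    ```

    Because the Arrhenius curve is convex in temperature, a profile alternating between 0°C and 10°C yields a Teff above the 5°C arithmetic mean, which is why Teff, not the average temperature, is the right summary of a fluctuating cold chain.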

  8. Genomic characterization of explant tumorgraft models derived from fresh patient tumor tissue

    PubMed Central

    2012-01-01

    Background There is resurgence within drug and biomarker development communities for the use of primary tumorgraft models as improved predictors of patient tumor response to novel therapeutic strategies. Despite perceived advantages over cell line derived xenograft models, there is limited data comparing the genotype and phenotype of tumorgrafts to the donor patient tumor, limiting the determination of molecular relevance of the tumorgraft model. This report directly compares the genomic characteristics of patient tumors and the derived tumorgraft models, including gene expression and oncogenic mutation status. Methods Fresh tumor tissues from 182 cancer patients were implanted subcutaneously into immune-compromised mice for the development of primary patient tumorgraft models. Histological assessment was performed on both patient tumors and the resulting tumorgraft models. Somatic mutations in key oncogenes and gene expression levels of resulting tumorgrafts were compared to the matched patient tumors using the OncoCarta (Sequenom, San Diego, CA) and human gene microarray (Affymetrix, Santa Clara, CA) platforms respectively. The genomic stability of the established tumorgrafts was assessed across serial in vivo generations in a representative subset of models. The genomes of patient tumors that formed tumorgrafts were compared to those that did not to identify the possible molecular basis of successful engraftment or rejection. Results Of the fresh tumor tissues from 182 cancer patients implanted into immune-compromised mice, forty-nine tumorgraft models were successfully established, exhibiting strong histological and genomic fidelity to the originating patient tumors. The transcriptomes and oncogenic mutations of the tumorgrafts were found to be stable relative to the matched patient tumors across four tumorgraft generations. 
Not only did the various tumors retain the differentiation pattern, but supporting stromal elements were preserved

  9. Numerical modelling and hydrochemical characterisation of a fresh-water lens in the Belgian coastal plain

    NASA Astrophysics Data System (ADS)

    Vandenbohede, A.; Lebbe, L.

    2002-05-01

    The distribution of fresh and salt water in coastal aquifers is influenced by many processes. The influence of aquifer heterogeneity and human interference such as land reclamation is illustrated in the Belgian coastal plain where, around A.D. 1200, the reclamation of a tidally influenced environment was completed. The aquifer, which was filled with salt water, was thereafter freshened. The areal distribution of peat, clay, silt and sand influences the general flow and distribution of fresh and salt water along with the drainage pattern and results in the development of fresh-water lenses. The water quality in and around the fresh-water lenses below an inverted tidal channel ridge is surveyed. The hydrochemical evolution of the fresh water lens is reconstructed, pointing to cation exchange, solution of calcite and the oxidation of organic material as the major chemical reactions. The formation and evolution of the fresh water lens is modelled using a two-dimensional density-dependent solute transport model and the sensitivity of drainage and conductivities are studied. Drainage level mainly influences the depth of the fresh-water lens, whereas the time of formation is mainly influenced by conductivity.

  10. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, W. J.

    2014-05-01

    Wim J. de Lange, Geert F. Prinsen, Jacco H. Hoogewoud, Ab A. Veldhuizen, Joachim Hunink, Erik F.W. Ruijgh, Timo Kroon. Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volume of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), which is a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument is developed to assess the impact of water use on the land surface (sprinkling crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict effects and compensating measures of climate change both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase in demand, e.g. for surface-water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling of crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. 
They are participating in the assessment of the NHI previous to the start of the analyses

  11. Finite-difference model to simulate the areal flow of saltwater and fresh water separated by an interface

    USGS Publications Warehouse

    Mercer, James W.; Larson, S.P.; Faust, Charles R.

    1980-01-01

    Model documentation is presented for a two-dimensional (areal) model capable of simulating ground-water flow of salt water and fresh water separated by an interface. The partial differential equations are integrated over the thicknesses of fresh water and salt water resulting in two equations describing the flow characteristics in the areal domain. These equations are approximated using finite-difference techniques and the resulting algebraic equations are solved for the dependent variables, fresh water head and salt water head. An iterative solution method was found to be most appropriate. The program is designed to simulate time-dependent problems such as those associated with the development of coastal aquifers, and can treat water-table conditions or confined conditions with steady-state leakage of fresh water. The program will generally be most applicable to the analysis of regional aquifer problems in which the zone between salt water and fresh water can be considered a surface (sharp interface). Example problems and a listing of the computer code are included. (USGS).
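    The USGS model above solves two coupled areal equations, which is beyond a short sketch; the ingredients, however (finite differences, an iterative solver, and a sharp interface located from the fresh and salt water heads), can be illustrated in one dimension. The grid size, boundary heads and densities below are illustrative assumptions.

    ```python
    def fresh_heads_1d(h_left, h_right, n, iters=5000):
        """Jacobi iteration for steady 1-D flow (Laplace equation) in a
        homogeneous aquifer; h_left and h_right are fixed boundary heads (m)."""
        h = [0.0] * n
        h[0], h[-1] = h_left, h_right
        for _ in range(iters):
            # Each interior node relaxes toward the mean of its neighbours
            h = [h[0]] + [(h[i - 1] + h[i + 1]) / 2.0 for i in range(1, n - 1)] + [h[-1]]
        return h

    def interface_elevation(h_fresh, h_salt, rho_f=1000.0, rho_s=1025.0):
        """Sharp-interface elevation (m, datum = sea level) from the fresh and
        salt water heads, using the standard hydrostatic head relation."""
        return (rho_s * h_salt - rho_f * h_fresh) / (rho_s - rho_f)
    ```

    With a static ocean (`h_salt = 0`) and seawater density 1025 kg/m³, `interface_elevation(1.0, 0.0)` gives -40 m: the classical Ghyben-Herzberg result that the interface sits about 40 times deeper than the fresh water head stands above sea level.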

  12. Modelling the response of fresh groundwater to climate and vegetation changes in coral islands

    NASA Astrophysics Data System (ADS)

    Comte, Jean-Christophe; Join, Jean-Lambert; Banton, Olivier; Nicolini, Eric

    2014-12-01

    In coral islands, groundwater is a crucial freshwater resource for terrestrial life, including human water supply. Response of the freshwater lens to expected climate changes and subsequent vegetation alterations is quantified for Grande Glorieuse, a low-lying coral island in the Western Indian Ocean. Distributed models of recharge, evapotranspiration and saltwater phytotoxicity are integrated into a variable-density groundwater model to simulate the evolution of groundwater salinity. Model results are assessed against field observations including groundwater and geophysical measurements. Simulations show the major control currently exerted by the vegetation with regards to the lens morphology and the high sensitivity of the lens to climate alterations, impacting both quantity and salinity. Long-term changes in mean sea level and climatic conditions (rainfall and evapotranspiration) are predicted to be responsible for an average increase in salinity approaching 140 % (+8 kg m-3) when combined. In low-lying areas with high vegetation density, these changes top +300 % (+10 kg m-3). However, due to salinity increase and its phytotoxicity, it is shown that a corollary drop in vegetation activity can buffer the alteration of fresh groundwater. This illustrates the importance of accounting for vegetation dynamics to study groundwater in coral islands.

  13. The Renewal of Fresh Water Through Treatment Wetlands: A Global Model

    NASA Astrophysics Data System (ADS)

    Bradford, M.; Fraser, L.; Steer, D.

    2001-05-01

    We present a global model that incorporates population growth, freshwater supply, dam construction, and, most importantly, an estimate of the returns of freshwater by the treatment of wastewater through wetlands. By including treatment wetlands within the global hydrological cycle, we estimate that humans can reduce the annual appropriation of freshwater in 2025 to 52%, even with the assumption that per capita rate of water use is increasing. The diminishing world supply and availability of freshwater is forcing a general re-assessment of freshwater use and the treatment of wastewater. Even with an increase in the construction of dams, the rate of population growth and the projected amount of future water use far surpasses the rate of supply. An earlier global model estimated that humans appropriated 52% of the annual accessible freshwater in 1990, and would appropriate at least 72% in 2025. This alarming figure is actually a very conservative estimate considering that the per capita rate of increase of water use is increasing. If per capita rate of increase was accounted for in the model, 99% of accessible freshwater will be appropriated in 2025. The use of constructed and natural wetlands for the treatment of wastewater may be an effective worldwide tool to help address the problem of fresh water availability.

  14. Fresh Kills leachate treatment and minimization study: Volume 2, Modeling, monitoring and evaluation. Final report

    SciTech Connect

    Fillos, J.; Khanbilvardi, R.

    1993-09-01

    The New York City Department of Sanitation is developing a comprehensive landfill leachate management plan for the Fresh Kills landfill, located on the western shore of Staten Island, New York. The 3000-acre facility, owned and operated by the City of New York, has been developed into four distinct mounds that correspond to areas designated as Sections 1/9, 2/8, 3/4 and 6/7. In developing a comprehensive leachate management plan, estimating leachate flow rates is important for designing appropriate treatment alternatives to reduce the offsite migration that pollutes both surface water and groundwater resources. Estimating the leachate flow rates from Sections 1/9 and 6/7 was given priority using an available model, hydrologic evaluation of landfill performance (HELP), and a new model, flow investigation for landfill leachate (FILL). The field-scale analysis for leachate flow included collection of leachate mound-level data from piezometers and monitoring wells installed on-site over a six-month period. From the leachate mound-head contours and flow gradients, leachate flow rates were computed using Darcy's law.

  15. Litchi freshness rapid non-destructive evaluating method using electronic nose and non-linear dynamics stochastic resonance model.

    PubMed

    Ying, Xiaoguo; Liu, Wei; Hui, Guohua

    2015-01-01

    In this paper, a rapid non-destructive method for evaluating litchi freshness using an electronic nose (e-nose) and non-linear stochastic resonance (SR) is proposed. E-nose responses to litchi samples were continuously detected for 6 d. Principal component analysis (PCA) and non-linear SR methods were used to analyze the e-nose detection data. The PCA method could not fully discriminate the litchi samples, whereas the SR signal-to-noise ratio (SNR) eigen spectrum successfully discriminated all of them. A litchi freshness predictive model developed using the SNR eigen values shows high predictive accuracy, with a regression coefficient of R(2) = 0.99396.

  16. Numerical Modeling Experiments

    DTIC Science & Technology

    1974-09-01

    The presence of clouds is associated with the occurrence of condensation in the atmospheric models. Cloudiness at a particular grid point is introduced when saturation is predicted as a result of either large-scale moisture flux convergence or vertical convective adjustment. In most models such clouds are characterized by cloud top, cloud thickness, and liquid-water content. In some general circulation models the local fractional convective cloud amounts are taken

  17. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    PubMed

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting increasing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms susceptible of being selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains, where producers are located in proximity either to processing plants or to retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high-quality products and minimal time from farm to table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains for our algorithmic proposals compared with the plain CPLEX solver. Furthermore, the results confirm the competitive advantages of using the proposed model for purchase managers in the fresh vegetable industry.
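    The two-stage structure described above (first stage: which farms to contract; second stage: recourse once demand is known) can be illustrated with a toy scenario-enumeration model. This is a deliberately simplified stand-in, not the paper's formulation: the farm data, spot-market recourse and brute-force search over supplier subsets are all illustrative assumptions, viable only for a handful of farms, whereas the paper uses Lagrangian relaxation and parallel algorithms on realistic instances.

    ```python
    from itertools import combinations

    def plan(farms, scenarios, spot_price):
        """Toy two-stage stochastic supplier selection by scenario enumeration.

        farms:     list of (contract_cost, capacity)  -- first-stage decision data
        scenarios: list of (probability, demand)      -- second-stage uncertainty
        Recourse:  any shortfall is bought on the spot market at spot_price.
        Returns (expected_total_cost, chosen_farm_indices).
        """
        best_cost, best_subset = float("inf"), ()
        for r in range(len(farms) + 1):
            for subset in combinations(range(len(farms)), r):
                fixed = sum(farms[i][0] for i in subset)       # contracting cost
                cap = sum(farms[i][1] for i in subset)         # contracted capacity
                expected_recourse = sum(
                    p * spot_price * max(0.0, demand - cap) for p, demand in scenarios
                )
                total = fixed + expected_recourse
                if total < best_cost:
                    best_cost, best_subset = total, subset
        return best_cost, best_subset
    ```

    For instance, with farms `[(10.0, 100.0), (18.0, 150.0)]`, equiprobable demands of 100 and 200 units, and a spot price of 0.5 per unit, contracting both farms (expected cost 28.0) beats every other subset, because the recourse penalty under the high-demand scenario outweighs the second contract cost.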

  18. We Think You Need a Vacation...: The Discipline Model at Fresh Youth Initiatives

    ERIC Educational Resources Information Center

    Afterschool Matters, 2003

    2003-01-01

    Fresh Youth Initiative (FYI) is a youth development organization based in the Washington Heights-Inwood section of Manhattan. The group's mission is to support and encourage the efforts of neighborhood young people and their families to design and carry out community service and social action projects, develop leadership skills, fulfill their…

  19. CFD modeling to improve safe and efficient distribution of chlorine dioxide gas for packaging fresh produce

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The efficiency of the packaging system in inactivating food borne pathogens and prolonging the shelf life of fresh-cut produce is influenced by the design of the package apart from material and atmospheric conditions. Three different designs were considered to determine a specific package design ens...

  20. A new conceptual model on the fate and controls of fresh and pyrolized plant litter decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The leaching of dissolved organic matter (DOM) from fresh and pyrolyzed aboveground plant inputs to the soil is a major pathway by which decomposing aboveground plant material contributes to soil organic matter formation. Understanding how aboveground plant input chemical traits control the partiti...

  1. An artificial intelligence approach for modeling volume and fresh weight of callus - A case study of cumin (Cuminum cyminum L.).

    PubMed

    Mansouri, Ali; Fadavi, Ali; Mortazavian, Seyed Mohammad Mahdi

    2016-05-21

    Cumin (Cuminum cyminum Linn.) is valued for its aroma and its medicinal and therapeutic properties. A supervised feedforward artificial neural network (ANN), trained with back-propagation algorithms, was applied to predict the fresh weight and volume of Cuminum cyminum L. calli. The Pearson correlation coefficient was used to evaluate the input/output dependency of the eleven input parameters. Area, feret diameter, minor axis length, perimeter and weighted density parameters were chosen as input variables. Different training algorithms, transfer functions, numbers of hidden nodes and training iterations were studied to find the optimum ANN structure. The network with the conjugate gradient Fletcher-Reeves (CGF) algorithm, tangent sigmoid transfer function, 17 hidden nodes and 2000 training epochs was selected as the final ANN model. The final model was able to predict the fresh weight and volume of calli more precisely than multiple linear models. The results were confirmed by R(2) ≥ 0.89, R(i) ≥ 0.94 and T value ≥ 0.86. The results for both volume and fresh weight values showed that almost 90% of data had an acceptable absolute error of ±5%.
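    The architecture described above (one hidden layer with a tangent sigmoid transfer function and a linear output) can be sketched compactly. Note the differences from the study, stated as assumptions: this sketch trains with plain stochastic gradient descent rather than the conjugate gradient Fletcher-Reeves algorithm, uses a toy 1-D target rather than the callus image features, and uses far fewer hidden nodes than the final 17-node model.

    ```python
    import math
    import random

    def train_mlp(xs, ys, hidden=4, epochs=3000, lr=0.1, seed=0):
        """One-hidden-layer MLP (tanh hidden units, linear output),
        trained by per-sample back-propagation with plain SGD."""
        rng = random.Random(seed)
        n_in = len(xs[0])
        w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
        b1 = [0.0] * hidden
        w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        b2 = 0.0
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                # Forward pass
                h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                     for row, b in zip(w1, b1)]
                out = sum(w * hj for w, hj in zip(w2, h)) + b2
                err = out - y
                # Backward pass: gradients of 0.5*err^2 w.r.t. each weight
                for j in range(hidden):
                    grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # use pre-update w2[j]
                    w2[j] -= lr * err * h[j]
                    b1[j] -= lr * grad_h
                    for i in range(n_in):
                        w1[j][i] -= lr * grad_h * x[i]
                b2 -= lr * err

        def predict(x):
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            return sum(w * hj for w, hj in zip(w2, h)) + b2

        return predict
    ```

    Trained on a small set of points from a smooth 1-D function, the returned `predict` closure fits the training data closely; the study's comparison against multiple linear models works the same way, just with five image-derived inputs and two outputs (fresh weight and volume).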

  2. Modelling spoilage of fresh turbot and evaluation of a time-temperature integrator (TTI) label under fluctuating temperature.

    PubMed

    Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen

    2008-10-31

Kinetic models were developed to predict the microbial spoilage and sensory quality of fresh fish and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, in monitoring shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 °C. Microbial growth and sensory attributes were monitored at regular intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. Sensory perception was quantified by a global sensory indicator obtained by principal component analysis, as well as by the Quality Index Method (QIM) as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory, Occasional paper no. 8. Available from the Australian Maritime College library, Newnham, Tasmania]. Both methods were found equally valid for monitoring the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as functions of temperature. Temperature had a similar effect on the bacterial, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10⁵-10⁶ CFU/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish and the response of the TTI at constant and fluctuating temperature conditions. The program is freely
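    A common way to model a maximum specific growth rate as a function of temperature in seafood spoilage work is the Ratkowsky square-root secondary model. The sketch below uses that form with made-up parameters, not the values fitted to the turbot data:

    ```python
    def sqrt_model_mu(T, b=0.03, T_min=-8.0):
        """Ratkowsky square-root secondary model:
            sqrt(mu_max) = b * (T - T_min)
        b and T_min here are illustrative placeholders, not the
        parameters fitted in the study."""
        if T <= T_min:
            return 0.0
        return (b * (T - T_min)) ** 2

    # Growth accelerates sharply across the 0-15 °C storage range studied:
    for T in (0, 5, 10, 15):
        print(f"{T:>2} °C  mu_max ≈ {sqrt_model_mu(T):.3f} 1/h")
    ```

    Validation under fluctuating temperature, as in the abstract, amounts to integrating a rate of this kind along the measured temperature history.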

  3. Introducing a Fresh Cadaver Model for Ultrasound-guided Central Venous Access Training in Undergraduate Medical Education

    PubMed Central

    Miller, Ryan; Ho, Hang; Ng, Vivienne; Tran, Melissa; Rappaport, Douglas; Rappaport, William J.A.; Dandorf, Stewart J.; Dunleavy, James; Viscusi, Rebecca; Amini, Richard

    2016-01-01

Introduction Over the past decade, medical students have witnessed a decline in opportunities to perform technical skills during their clinical years. Ultrasound-guided central venous access (USG-CVA) is a critical procedure commonly performed by emergency medicine, anesthesia, and general surgery residents, often during their first month of residency. However, the skills required to perform this procedure safely are often deficient upon graduation from medical school. To ameliorate this lack of technical proficiency, ultrasound simulation models have been introduced into undergraduate medical education to train venous access skills. Criticisms of simulation models include their innate lack of realistic tactile qualities and their lack of the anatomical variation found in living patients. The purpose of our investigation was to design and evaluate a lifelike and reproducible training model for USG-CVA using a fresh cadaver. Methods This was a cross-sectional study at an urban academic medical center. An 18-point procedural knowledge tool and an 18-point procedural skill evaluation tool were administered during a cadaver lab at the beginning and end of the surgical clerkship. During the fresh cadaver lab, procedure-naïve third-year medical students were trained to perform ultrasound-guided central venous access of the femoral and internal jugular vessels. Preparation of the fresh cadaver model involved placement of thin-walled latex tubing in the anatomic locations of the femoral and internal jugular veins, respectively. Results Fifty-six third-year medical students participated in this study during their surgical clerkship. The fresh cadaver model provided high-quality and lifelike ultrasound images despite numerous cannulation attempts. Technical skill scores improved from an average of 3 to 12 (p<0.001) and procedural knowledge scores improved from an average of 4 to 8 (p<0.001). Conclusion The use of this novel cadaver

  4. Development of a hierarchical Bayesian model to estimate the growth parameters of Listeria monocytogenes in minimally processed fresh leafy salads.

    PubMed

    Crépet, Amélie; Stahl, Valérie; Carlin, Frédéric

    2009-05-31

The optimal growth rate μ_opt of Listeria monocytogenes in minimally processed (MP) fresh leafy salads was estimated with a hierarchical Bayesian model at (mean ± standard deviation) 0.33 ± 0.16 h⁻¹. This μ_opt value was much lower on average than that in nutrient broth, liquid dairy, meat and seafood products (0.7-1.3 h⁻¹), and of the same order of magnitude as in cheese. Cardinal temperatures T_min, T_opt and T_max were determined at -4.5 ± 1.3 °C, 37.1 ± 1.3 °C and 45.4 ± 1.2 °C respectively. These parameters were determined from 206 growth curves of L. monocytogenes in MP fresh leafy salads (lettuce including iceberg lettuce, broad leaf endive, curly leaf endive, lamb's lettuce, and mixtures of them) selected in the scientific literature and in technical reports. The adequacy of the model was evaluated by comparing observed data (bacterial concentrations at each experimental time for the completion of the 206 growth curves, mean log₁₀ increase at selected times and temperatures, L. monocytogenes concentrations in naturally contaminated MP iceberg lettuce) with the distribution of the predicted data generated by the model. The sensitivity of the model to assumptions about the prior values also was tested. The observed values mostly fell into the 95% credible interval of the distribution of predicted values. The μ_opt and its uncertainty determined in this work could be used in quantitative microbial risk assessment for L. monocytogenes in minimally processed fresh leafy salads.
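    The cardinal parameters reported above plug directly into the standard cardinal temperature model (Rosso-type CTMI), which gives the growth rate at any temperature between T_min and T_max. This sketch assumes the CTMI form and uses the posterior means from the abstract; it is not the paper's full hierarchical model:

    ```python
    def mu_ctmi(T, mu_opt=0.33, T_min=-4.5, T_opt=37.1, T_max=45.4):
        """Cardinal temperature model with inflection (CTMI):
        growth rate (1/h) at temperature T (°C), zero outside
        [T_min, T_max], equal to mu_opt at T_opt.  Parameter values
        are the posterior means quoted in the abstract."""
        if T <= T_min or T >= T_max:
            return 0.0
        num = (T - T_max) * (T - T_min) ** 2
        den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
              - (T_opt - T_max) * (T_opt + T_min - 2 * T))
        return mu_opt * num / den

    # Growth at refrigeration temperature is a small fraction of mu_opt:
    print(round(mu_ctmi(8.0), 3))
    ```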

  5. Extension of the prognostic model of sea surface temperature to rain-induced cool and fresh lenses

    NASA Astrophysics Data System (ADS)

    Bellenger, Hugo; Drushka, Kyla; Asher, William; Reverdin, Gilles; Katsumata, Masaki; Watanabe, Michio

    2017-01-01

    The Zeng and Beljaars (2005) sea surface temperature prognostic scheme, developed to represent diurnal warming, is extended to represent rain-induced freshening and cooling. Effects of rain on salinity and temperature in the molecular skin layer (first few hundred micrometers) and the near-surface turbulent layer (first few meters) are separately parameterized by taking into account rain-induced fluxes of sensible heat and freshwater, surface stress, and mixing induced by droplets penetrating the water surface. Numerical results from this scheme are compared to observational data from two field studies of near-surface ocean stratifications caused by rain, to surface drifter observations and to previous computations with an idealized ocean mixed layer model, demonstrating that the scheme produces temperature variations consistent with in situ observations and model results. It reproduces the dependency of salinity on wind and rainfall rate and the lifetime of fresh lenses. In addition, the scheme reproduces the observed lag between temperature and salinity minimum at low wind speed and is sensitive to the peak rain rate for a given amount of rain. Finally, a first assessment of the impact of these fresh lenses on ocean surface variability is given for the near-equatorial western Pacific. In particular, the variability due to the mean rain-induced cooling is comparable to the variability due to the diurnal warming so that they both impact large-scale horizontal surface temperature gradients. The present parameterization can be used in a variety of models to study the impact of rain-induced fresh and cool lenses at different spatial and temporal scales.

  6. Using density difference to store fresh water in saline subsurface

    NASA Astrophysics Data System (ADS)

    van Ginkel, M.; Olsthoorn, Th. N.; des Tombe, B.

    2012-04-01

The storage of fresh water in the subsurface for later recovery and use (aquifer storage and recovery) will become increasingly important in the coming decades for seasonal or emergency storage, especially in the light of climate change and population growth. However, fresh-water storage in a saline subsurface poses a challenge: the initially vertical interface between injected fresh and native salt water is unstable and tends to rotate. The injected fresh water floats upward on top of the native salt water, where it becomes hard or impossible to recover at a later stage. A wide body of literature exists on this buoyancy effect, which is caused by the density difference between fresh and salt water, yet very few papers focus on solutions to the problem. In this paper we propose a storage principle that overcomes the buoyancy problem by actually using the density difference to keep the fresh water in place, combining salt-water extraction with impermeable barriers. This technique seems promising and could solve many local fresh-water storage problems. It is especially applicable in shallow water-table aquifers for the storage of fresh water below parks and arable land, or for seasonal storage of desalinated water. We performed laboratory-scale experiments and numerical modelling to study the dynamic behaviour of a fresh-water bubble stored in a saline subsurface using salt-water extraction and impermeable barriers, including the effects of operational dynamics, groundwater flow, diffusion, dispersion and density differences.

  7. A comparison of the coupled fresh water-salt water flow and the Ghyben-Herzberg sharp interface approaches to modeling of transient behavior in coastal aquifer systems

    USGS Publications Warehouse

    Essaid, H.I.

    1986-01-01

A quasi-three-dimensional finite-difference model which simulates coupled fresh-water and salt-water flow, separated by a sharp interface, is used to investigate the effects of storage characteristics, transmissivity, boundary conditions and anisotropy on the transient responses of such flow systems. The magnitude and duration of the departure of aquifer response from the behavior predicted using the Ghyben-Herzberg, one-fluid approach is a function of the ease with which flow can be induced in the salt-water region. In many common hydrogeologic settings short-term fresh-water head responses, and transitional responses between short-term and long-term, can only be realistically reproduced by including the effects of salt-water flow on the dynamics of coastal flow systems. The coupled fresh water-salt water flow modeling approach is able to reproduce the observed annual fresh-water head response of the Waialae aquifer of southeastern Oahu, Hawaii. © 1986.
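    The Ghyben-Herzberg one-fluid approach that the coupled model is compared against is a simple static relation: the interface sits below sea level at a depth proportional to the fresh-water head. A minimal sketch, using typical fresh and sea water densities:

    ```python
    def ghyben_herzberg_depth(h, rho_f=1000.0, rho_s=1025.0):
        """Depth of the fresh/salt interface below sea level for a
        fresh-water head h above sea level (static approximation):
            z = rho_f / (rho_s - rho_f) * h  ≈  40 * h
        for typical fresh (1000 kg/m³) and sea water (1025 kg/m³)."""
        return rho_f / (rho_s - rho_f) * h

    print(ghyben_herzberg_depth(1.5))   # 1.5 m of head -> 60 m interface depth
    ```

    The abstract's point is precisely that transient responses depart from this static picture whenever flow is easily induced in the salt-water region.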

  8. Application of the distributed activation energy model to the kinetic study of pyrolysis of the fresh water algae Chlorococcum humicola.

    PubMed

    Kirtania, Kawnish; Bhattacharya, Sankar

    2012-03-01

Apart from capturing carbon dioxide, fresh-water algae can be used to produce biofuel. To assess the energy potential of Chlorococcum humicola, the alga's pyrolytic behavior was studied at heating rates of 5-20 K/min in a thermobalance. To model the weight-loss characteristics, an algorithm was developed based on the distributed activation energy model and applied to experimental data to extract the kinetics of the decomposition process. When the kinetic parameters estimated by this method were applied to another set of experimental data, not used in the estimation, the model predicted the pyrolysis behavior in the new data set with an R² value of 0.999479. The slow weight loss at the end of the pyrolysis process was also accounted for by the proposed algorithm, which is capable of predicting the pyrolysis kinetics of C. humicola at different heating rates.
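    The distributed activation energy model (DAEM) treats pyrolysis as infinitely many parallel first-order reactions whose activation energies follow a distribution, commonly a Gaussian. The sketch below evaluates the DAEM double integral numerically for a constant heating rate; A, E0 and sigma are illustrative round numbers, not the kinetics fitted to C. humicola:

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def trapz(y, x, axis=0):
        """Simple trapezoidal rule (avoids NumPy version differences)."""
        dx = np.diff(x)
        ys = np.moveaxis(y, axis, 0)
        return np.tensordot(dx, 0.5 * (ys[1:] + ys[:-1]), axes=(0, 0))

    def daem_conversion(T_end, beta_K_per_min, A=1e13, E0=200e3, sigma=20e3):
        """Conversion reached at temperature T_end (K) under a constant
        heating rate, assuming parallel first-order reactions with a
        Gaussian f(E).  A, E0, sigma are illustrative placeholders."""
        beta = beta_K_per_min / 60.0                  # K/s
        E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 401)
        f = np.exp(-(E - E0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        T = np.linspace(300.0, T_end, 2000)
        k = np.exp(-E[None, :] / (R * T[:, None]))    # shape (nT, nE)
        psi = (A / beta) * trapz(k, T, axis=0)        # inner T-integral per E
        unreacted = trapz(f * np.exp(-psi), E)        # outer E-integral
        return 1.0 - unreacted

    # Conversion rises with temperature for a fixed 10 K/min heating rate:
    for T_end in (600.0, 700.0, 900.0):
        print(T_end, round(float(daem_conversion(T_end, 10.0)), 3))
    ```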

  9. Methodology for modeling the disinfection efficiency of fresh-cut leafy vegetables wash water applied on peracetic acid combined with lactic acid.

    PubMed

    Van Haute, S; López-Gálvez, F; Gómez-López, V M; Eriksson, Markus; Devlieghere, F; Allende, Ana; Sampers, I

    2015-09-02

A methodology to i) assess the feasibility of water disinfection in fresh-cut leafy greens wash water and ii) compare the efficiency of water disinfectants was defined and applied to a combination of peracetic acid (PAA) and lactic acid (LA), with a comparison against free chlorine. Standardized process water, a watery suspension of iceberg lettuce, was used for the experiments. First, the combination of PAA+LA was evaluated for water recycling; disinfectant was added to standardized process water inoculated with Escherichia coli (E. coli) O157 (6 log CFU/mL). Regression models were constructed from the batch inactivation data and validated in industrial process water obtained from fresh-cut leafy green processing plants. The UV254(F) was the best indicator of PAA decay and hence of E. coli O157 inactivation by PAA+LA. The disinfection efficiency of PAA+LA increased with decreasing pH. Furthermore, PAA+LA efficacy was assessed as a process-water disinfectant within the washing tank, using a dynamic washing process with continuous influx of E. coli O157 and organic matter. The process-water contamination in the dynamic process was adequately estimated by the developed model, which assumed that knowledge of the disinfectant residual was sufficient to estimate the microbial contamination, regardless of the physicochemical load. Based on the results, PAA+LA seems better suited than chlorine for disinfecting process wash water with a high organic load, but a higher disinfectant residual is necessary owing to the slower E. coli O157 inactivation kinetics compared to chlorine.
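    The idea that the disinfectant residual alone predicts microbial contamination can be illustrated with a textbook Chick-Watson inactivation law driven by a first-order decaying residual. This is a generic sketch, not the paper's fitted regression models; all rate constants below are hypothetical:

    ```python
    import math

    def paa_residual(c0, k_d, t):
        """First-order decay of the disinfectant residual (mg/L);
        k_d is a hypothetical decay constant."""
        return c0 * math.exp(-k_d * t)

    def log_survivors(log_n0, k_cw, c0, k_d, t):
        """Chick-Watson inactivation under a decaying residual:
            d(log10 N)/dt = -k_cw * C(t)
        with integrated exposure Ct = c0/k_d * (1 - exp(-k_d*t)).
        k_cw, c0, k_d are illustrative, not fitted values."""
        ct = c0 / k_d * (1.0 - math.exp(-k_d * t))
        return max(log_n0 - k_cw * ct, 0.0)

    # Starting from 6 log CFU/mL, survivors fall as exposure accumulates:
    for t in (0.0, 1.0, 5.0, 20.0):
        print(t, round(log_survivors(6.0, 0.5, 2.0, 0.1, t), 2))
    ```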

  10. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    PubMed

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

Once the total viable count (TVC) of bacteria in fresh pork exceeds a certain level, the meat becomes a source of pathogenic bacteria. This paper explores the feasibility of hyperspectral imaging technology, combined with a suitable modeling method, for predicting the TVC of fresh pork. For a problem that is markedly nonlinear, has few samples, and involves a large volume of data spanning the spectral and spatial dimensions, choosing an appropriate modeling method is crucial to achieving good predictions. Comparing partial least-squares regression (PLSR), artificial neural networks (ANNs) and least-squares support vector machines (LS-SVM), the authors found that PLSR is ill-suited to nonlinear regression and that ANNs cannot give satisfactory predictions from few samples, whereas LS-SVM models balance small training error with good generalization. LS-SVM was therefore adopted as the modeling method to predict the TVC of pork. A TVC prediction model was then constructed using all 512 wavelengths acquired by the hyperspectral imaging system. The determination coefficient between the TVC obtained by the standard plate-count method and the LS-SVM prediction was 0.9872 for the calibration set and 0.9426 for the prediction set, with root mean square errors of calibration (RMSEC) and prediction (RMSEP) of 0.2071 and 0.2176 respectively, considerably better than the MLR, PLSR and ANN methods.
This research demonstrates that a hyperspectral imaging system coupled with LS-SVM modeling is a valid means for quick and nondestructive determination of the TVC of pork.
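    LS-SVM regression replaces the SVM's inequality constraints with equalities, so training reduces to solving a single linear system. A minimal RBF-kernel sketch on synthetic 1-D data (not the pork spectra; gamma and the kernel width are arbitrary choices):

    ```python
    import numpy as np

    def rbf(X1, X2, s=1.0):
        """Gaussian (RBF) kernel matrix between two sample sets."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * s * s))

    def lssvm_fit(X, y, gamma=100.0, s=1.0):
        """Solve the LS-SVM KKT system
            [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]          # bias b, dual weights alpha

    def lssvm_predict(Xq, X, b, alpha, s=1.0):
        return rbf(Xq, X, s) @ alpha + b

    # Fit a smooth 1-D function and check the training error is small:
    X = np.linspace(0, 3, 20)[:, None]
    y = np.sin(X).ravel()
    b, alpha = lssvm_fit(X, y)
    pred = lssvm_predict(X, X, b, alpha)
    print(float(np.max(np.abs(pred - y))))   # small training error
    ```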

  11. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, Wim; Prinsen, Geert.; Hoogewoud, Jacco; Veldhuizen, Ab; Ruijgh, Erik; Kroon, Timo

    2013-04-01

Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), a national, integrated, hydrological model that simulates the distribution, flow and storage of all water in the surface-water and groundwater systems. The instrument was developed to assess the impact of water use on the land surface (sprinkling crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking-water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and compensating measures, both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase of the demand, e.g. for surface-water level control to prevent dike burst, for flushing salt in ditches, for sprinkling of crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties; these parties therefore have a large interest in the outcome of the scenario analyses, and they participate in the assessment of the NHI prior to the start of the analyses. Regional expertise is welcomed in the calibration phase of the NHI, which aims to reduce uncertainties by improving the

  12. A Fresh Look at Flooring Costs. A Report on a Survey of User Experience Compiled by Armstrong Cork Company.

    ERIC Educational Resources Information Center

    Armstrong Cork Co., Lancaster, PA.

    Survey information based on actual flooring installations in several types of buildings and traffic conditions, representing nearly 113 million square feet of actual user experience, is contained in this comprehensive report compiled by the Armstrong Cork Company. The comparative figures provided by these users clearly establish that--(1) the…

  13. Reprogramming somatic cells to pluripotency: a fresh look at Yamanaka's model.

    PubMed

    Li, Yangxin; Shen, Zhenya; Shelat, Harnath; Geng, Yong-Jian

    2013-12-01

In 2006, Dr Shinya Yamanaka succeeded in reprogramming somatic cells into induced pluripotent stem cells (iPSC) by delivering the genes encoding Oct4, Sox2, Klf4, and c-Myc. This achievement represents a fundamental breakthrough in stem cell biology and opens a new era in regenerative medicine. However, the molecular processes by which somatic cells are reprogrammed into iPSC remain poorly understood. In 2009, Yamanaka proposed the elite and stochastic models of the reprogramming mechanism. To date, many investigators in the field of iPSC research support the stochastic model, i.e., that somatic cell reprogramming is an event of epigenetic transformation. A mathematical model, f (Cd, k), has also been proposed to predict the stochastic process. Here we revisit the Yamanaka model and summarize recent advances in this research field.

  14. On the Gause predator-prey model with a refuge: a fresh look at the history.

    PubMed

    Křivan, Vlastimil

    2011-04-07

This article re-analyses a prey-predator model with a refuge introduced by Gause, one of the founders of population ecology, and his co-workers to explain discrepancies between their observations and the predictions of the Lotka-Volterra prey-predator model. They replaced the linear functional response used by Lotka and Volterra with a saturating functional response with a discontinuity at a critical prey density. At densities below this critical density prey were effectively in a refuge, while at higher densities they were available to predators; their functional response was thus of Holling type III. They analyzed this model and predicted the existence of a limit cycle in predator-prey dynamics. In this article I show that their model is ill posed, because trajectories are not well defined. Using the Filippov method, I define and analyze solutions of the Gause model and show that, depending on parameter values, there are three possibilities: (1) trajectories converge to a limit cycle, as predicted by Gause; (2) trajectories converge to an equilibrium; or (3) the prey population escapes predator control and grows to infinity.

  15. A fresh view of cosmological models describing very early universe: General solution of the dynamical equations

    NASA Astrophysics Data System (ADS)

    Filippov, A. T.

    2017-03-01

The dynamics of any spherical cosmology with a scalar field (`scalaron') coupling to gravity is described by nonlinear second-order differential equations for two metric functions and the scalaron, depending on the `time' parameter. The equations depend on the scalaron potential and on an arbitrary gauge function that describes time parameterizations. This dynamical system can be integrated for flat, isotropic models with very special potentials. But, somewhat unexpectedly, replacing the independent variable t by one of the metric functions allows us to completely integrate the general spherical theory in any gauge and with arbitrary potentials. In this approach, inflationary solutions can be easily identified, explicitly derived, and compared to the standard approximate expressions. The approach is also applicable to intrinsically anisotropic models with a massive vector field (`vecton') as well as to some non-inflationary models.

  16. Harmonized, distributed and nation wide modelling of Nitrogen retention in Danish surface fresh waters

    NASA Astrophysics Data System (ADS)

    Thodsen, Hans; Larsen, Søren E.; Windolf, Jørgen; Bering Ovesen, Niels; Bøgestrand, Jens; Kronvang, Brian

    2010-05-01

According to the EU Water Framework Directive, all freshwater bodies must achieve good ecological status by 2015. In Denmark this means that every lake with a surface area above 5 ha must be evaluated individually, and mitigation measures enforced if its ecological status is below "good". In consequence, the nutrient pressures from point and diffuse sources must be assessed based on a quantification of the nutrient loading of each lake. In this study we focus on nitrogen loading. Surface-water nitrogen retention is an important parameter in estimating nitrogen loads to lakes and marine areas, and estimates of the cost of reducing nitrogen loadings also depend largely on calculations of surface-water retention, since large percentages of the load can be removed or retained in surface waters. In particular, the presence of larger lakes on the river network can make a large difference between the loads from different catchments. A standardised calculation of annual (1990-2008) nitrogen-retention percentages has been carried out for all Danish lakes larger than 5 hectares attached to a river network (591 lakes). The retention calculation is based on the water residence time of each lake. A national 3D hydrological model, covering all major parts of the country, estimated runoff for the lake catchments, and the diffuse nitrogen input to each lake was simulated with an empirical nitrogen-load model. Where lakes are located upstream or downstream of each other, a calculation chain involving the nitrogen retention in lakes was created. Harmonized national calculations of river nitrogen retention are carried out on the basis of river length and width and information on rivers in forested areas; each river class is given a specific retention per unit area. The total average (1990-2008) nitrogen load to Danish surface waters is modelled at 99,000 t/yr. The total surface-water retention is estimated at 23,700 t/yr (24%).
Of the surface-water retention, 35% originates from

  17. Growth characteristics and development of a predictive model for Bacillus cereus in fresh wet noodles with added ethanol and thiamine.

    PubMed

    Kim, Bo-Yeon; Lee, Ji-Young; Ha, Sang-Do

    2011-04-01

Response surface methodology was used to determine growth characteristics and to develop a predictive model describing specific growth rates of Bacillus cereus in wet noodles containing a combination of ethanol (0 to 2% [vol/wt]) and vitamin B₁ (0 to 2 g/liter). B. cereus F4810/72, which produces an emetic toxin, was used in this study. The noodles containing B. cereus were incubated at 10 °C. The growth curves were fitted to the modified Gompertz equation using nonlinear regression, and the growth-rate values from the curves were used to establish the predictive model, a response-surface-methodology quadratic polynomial equation in the concentrations of ethanol and vitamin B₁. The model fit the data very well (r² = 0.9505 to 0.9991) and could be used to accurately predict growth rates. The quadratic polynomial model was validated, and the predicted growth-rate values were in good agreement with the experimental values. The polynomial model was found to be an appropriate secondary model for growth rate (GR) and lag time (LT) based on the coefficient of determination (r² = 0.9899 for GR, 0.9782 for LT), bias factor (B_f = 1.006 for GR, 0.992 for LT), and accuracy factor (A_f = 1.024 for GR, 1.011 for LT). Thus, this model holds great promise for predicting the growth of B. cereus in fresh wet noodles from the bacterial concentration alone, an important contribution to the manufacturing of safe products.
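    The modified Gompertz primary model used above is commonly written in the Zwietering parameterization, where the fitted parameters have direct biological meaning (lag time, maximum rate, total log increase). A sketch with hypothetical parameter values, not the fitted B. cereus ones:

    ```python
    import math

    def gompertz_log_count(t, log_n0=3.0, A=5.0, mu_m=0.05, lam=24.0):
        """Modified Gompertz curve (Zwietering form):
            log10 N(t) = log_n0 + A*exp(-exp(mu_m*e/A*(lam - t) + 1))
        A = total log10 increase, mu_m = max specific growth rate
        (log10 units/h), lam = lag time (h).  Values here are
        hypothetical, for illustration only."""
        return log_n0 + A * math.exp(-math.exp(mu_m * math.e / A * (lam - t) + 1.0))

    # Sigmoid growth: lag phase, exponential phase, plateau at log_n0 + A.
    for t in (0, 24, 120, 480):
        print(t, round(gompertz_log_count(t), 2))
    ```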

  18. A Fresh Look at the Classical Approach to Homogeneous Solid Propellant Combustion Modeling

    DTIC Science & Technology

    1982-10-01

[OCR of the DTIC report documentation page; only fragments of the abstract are recoverable.] Keywords: solid propellant, temperature sensitivity, combustion modeling, pressure index. Abstract (fragment): ...pressure index is exactly v/2 so that the solid regression rate has much the same character over at least some pressure range. In general the

  19. Response of the CNRM-CM5 coupled model to an enhanced Greenland fresh water flux

    NASA Astrophysics Data System (ADS)

    Rogel, P.; Hamon, M.

    2013-12-01

We investigate the transient response of the CNRM-CM5 coupled ocean-atmosphere model to a strong freshwater forcing around the Greenland coasts. We perturb a 50-year-long ensemble simulation under a high-emission greenhouse gas (GHG) scenario (RCP8.5). The 5 members of the reference simulation are compared to 5 members of a similar simulation in which the freshwater perturbation is applied: we add 0.00275 Sv to the freshwater fluxes at the ocean-atmosphere interface, representing 5 times the present estimated melting of the Greenland ice. We highlight that such a freshwater forcing has significant impacts only in the North Atlantic basin, where a rapid increase of sea-level rise occurs in the first 30 years, followed by a stagnation period. In the other areas, the effects of the perturbation are not significant compared to the regional variability. We show relations between sea level and the meridional overturning circulation (MOC), and attempt to characterize the mechanisms at play in the CNRM-CM5 model. We thus focus on the effects of the freshwater forcing on oceanic processes in the North Atlantic basin, especially on the temperature and salinity variability in the subpolar gyre.

  20. Need Of Modelling Radionuclide Transport In Fresh Water Lake System, Subject To Indian Context

    NASA Astrophysics Data System (ADS)

    Desai, Hiral; Christian, R. A.

    2010-10-01

The operation of nuclear facilities results in low-level radioactive effluents, which are required to be released into the environment. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. The effluents released into the environment undergo dilution and dispersion; however, there is a possibility of concentration by biological processes in the environment. Aquatic ecosystems are very complex webs of physical, chemical and biological interactions. It is generally both costly and laborious to describe their characteristics, and to predict them is even harder. Every aquatic ecosystem is unique, and yet it is impossible to study each system in the detail necessary for case-by-case assessment of ecological threats. In this situation, quantitative mathematical models are essential to predict, to guide assessment and to direct interventions.

  1. Experiments on a Model Eye

    ERIC Educational Resources Information Center

    Arell, Antti; Kolari, Samuli

    1978-01-01

Explains a laboratory experiment dealing with the optical features of the human eye. Shows how measurements of the magnification of the retina and of the refractive anomaly of the eye could be used to determine the refractive power of the observer's eye. (GA)
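    The optics behind such a model-eye exercise reduces to the thin-lens equation. The sketch below assumes a reduced-eye focal length of about 17 mm, a textbook figure rather than a value from the article:

    ```python
    def image_distance(f, d_o):
        """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
        distance d_i.  Distances in metres; f = 0.017 m is a typical
        reduced-eye focal length (an assumption, not from the article)."""
        return 1.0 / (1.0 / f - 1.0 / d_o)

    # A distant object focuses essentially at the focal plane (the retina);
    # a near object images behind it, which is what a refractive-power
    # measurement detects.
    print(image_distance(0.017, 1e9))
    print(image_distance(0.017, 0.25))
    ```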

  2. Factors controlling the configuration of the fresh-saline water interface in the Dead Sea coastal aquifers: Synthesis of TDEM surveys and numerical groundwater modeling

    USGS Publications Warehouse

    Yechieli, Y.; Kafri, U.; Goldman, M.; Voss, C.I.

    2001-01-01

TDEM (time domain electromagnetic) traverses in the Dead Sea (DS) coastal aquifer help to delineate the configuration of the interrelated fresh-water and brine bodies and the interface in between. A good linear correlation exists between the logarithm of TDEM resistivity and the chloride concentration of groundwater, mostly in the higher salinity range, close to that of the DS brine. In this range, salinity is the most important factor controlling resistivity. The configuration of the fresh-saline water interface is dictated by the hydraulic gradient, which is controlled by a number of hydrological factors. Three types of irregularities in the configuration of fresh-water and saline-water bodies were observed in the study area: 1. Fresh-water aquifers underlying more saline ones ("Reversal") in a multi-aquifer system. 2. "Reversal" and irregular residual saline-water bodies related to historical, frequently fluctuating DS base level and respective interfaces, which have not undergone complete flushing. A rough estimate of flushing rates may be obtained based on knowledge of the above fluctuations. The occurrence of salt beds is also a factor affecting the interface configuration. 3. The interface steepens towards and adjacent to the DS Rift fault zone. Simulation analysis with a numerical, variable-density flow model, using the US Geological Survey's SUTRA code, indicates that interface steepening may result from a steep water-level gradient across the zone, possibly due to a low hydraulic conductivity in the immediate vicinity of the fault.

  3. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  4. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two-dimensional smoulder.

  5. Changes in polyphenol profiles and color composition of freshly fermented model wine due to pulsed electric field, enzymes and thermovinification pretreatments.

    PubMed

    El Darra, Nada; Turk, Mohammad F; Ducasse, Marie-Agnès; Grimi, Nabil; Maroun, Richard G; Louka, Nicolas; Vorobiev, Eugène

    2016-03-01

    This work compares the effects of three pretreatment techniques: pulsed electric fields (PEFs), enzyme treatment (ET), and thermovinification (TV), on the extraction of the main phenolic compounds, the color characteristics (L*a*b*), and the color composition (copigmentation, non-discolored pigments) of freshly fermented model wine from the Cabernet Sauvignon variety. The pretreatments produced differences in the wines, with the color of the freshly fermented model wine obtained from PEF- and TV-pretreated musts being the most different, with increases of 56% and 62%, respectively, compared to the control, while the color increased by only 22% for ET. At the end of the alcoholic fermentation, the anthocyanin contents for all the pretreatments were not statistically different. However, for total phenolics and total flavonols, PEF and TV were statistically different from the control, while ET was not. The flavonol contents in musts pretreated by PEF and TV were significantly higher compared to the control, with increases of 48% and 97%, respectively, and only 4% for ET. A similar result was observed for total phenolics, with increases of 18% and 32% for PEF and TV, respectively, and only 3% for ET compared to the control. The results suggest that the higher intensity and the difference in color composition between the control and the pretreated freshly fermented model wines were not related only to a higher content of residual native polyphenols in these wines. Other phenomena, such as copigmentation and the formation of derived pigments, may be favored by these pretreatments.

  6. The database for reaching experiments and models.

    PubMed

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.
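
    A standardized trial record of the kind DREAM promotes could be sketched as below; the field names and the derived quantity are hypothetical illustrations, not DREAM's actual schema:

    ```python
    from dataclasses import dataclass, field

    # Minimal sketch of a standardized reaching-trial record. Field names are
    # illustrative assumptions, not the actual DREAM data format.
    @dataclass
    class ReachingTrial:
        subject_id: str
        condition: str                  # e.g. "null_field", "force_field"
        sample_rate_hz: float
        hand_x_m: list = field(default_factory=list)   # hand path, metres
        hand_y_m: list = field(default_factory=list)

        def peak_speed(self):
            """Peak hand speed (m/s) from finite-difference velocities."""
            dt = 1.0 / self.sample_rate_hz
            pts = list(zip(self.hand_x_m, self.hand_y_m))
            speeds = [
                ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
                for (x0, y0), (x1, y1) in zip(pts, pts[1:])
            ]
            return max(speeds) if speeds else 0.0
    ```

    Once kinematics from different labs share one record type like this, the same model-evaluation code can run unchanged over every dataset, which is the point of the standardization the abstract describes.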

  7. Temperature-Salinity Oscillations, Sudden Transitions, and Hysteresis in Laboratory Experiments and Layered Models

    NASA Astrophysics Data System (ADS)

    Whitehead, J. A.; Te Raa, L.; Tozuka, T.

    2002-12-01

    Simplified box models of the cooling of a salt-stratified ocean have been constructed in the laboratory. A large isothermal basin of water has two layers with differing salinity. Beside this is a small basin connected to the large basin by horizontal tubes at the top, middle and bottom. Calculations indicate that there is a sudden transition and hysteresis between a shallow and a deep convection state if there is a relaxation temperature boundary condition and also if one tube has large flow resistance. Our laboratory studies to date do not clearly show hysteresis but have relatively sudden changes in properties for some parameters. The shallow state is frequently found as an oscillation, and the deep convection state is steady, although thermals produce small rapid fluctuations. Numerical models of the experiments produce qualitative agreement, but quantitative differences are large. In contrast, experiments with a cavity at the bottom of a fresh-water reservoir, subjected to steady heating from below and steady salt-water inflow, have two distinct states and exhibit a hysteresis range. Oscillations and transitions like those seen in these experiments may exist in natural bodies with a layer of fresh water cooled from above, such as fjords, polar bays, or larger polar regions. The oscillation periods are much greater than either the fresh-water or the thermal time scale, making the oscillation mechanism a candidate for climate oscillations. (Much of this work was done at the GFD summer program.)

  8. Influence of the natural microbial flora on the acid tolerance response of Listeria monocytogenes in a model system of fresh meat decontamination fluids.

    PubMed

    Samelis, J; Sofos, J N; Kendall, P A; Smith, G C

    2001-06-01

    Depending on its composition and metabolic activity, the natural flora that may be established in a meat plant environment can affect the survival, growth, and acid tolerance response (ATR) of bacterial pathogens present in the same niche. To investigate this hypothesis, changes in populations and ATR of inoculated (10⁵ CFU/ml) Listeria monocytogenes were evaluated at 35 degrees C in water (10 or 85 degrees C) or acidic (2% lactic or acetic acid) washings of beef with or without prior filter sterilization. The model experiments were performed at 35 degrees C rather than at lower temperatures; in unfiltered washings the natural flora grew rapidly, reaching maximum populations (8.0 log CFU/ml) by day 1. The pH of inoculated water washings decreased or increased depending on the absence or presence of natural flora, respectively. These microbial and pH changes modulated the ATR of L. monocytogenes at 35 degrees C. In filter-sterilized water washings, inoculated L. monocytogenes increased its ATR by at least 1.0 log CFU/ml from days 1 to 8, while in unfiltered water washings the pathogen was acid tolerant at day 1 (0.3 to 1.4 log CFU/ml reduction) and became acid sensitive (3.0 to >5.0 log CFU/ml reduction) at day 8. These results suggest that the predominant gram-negative flora of an aerobic fresh meat plant environment may sensitize bacterial pathogens to acid.

  9. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare the results of the calculation with experimental measurements to determine the validity of the CFD model.

  10. Modeling choice and valuation in decision experiments.

    PubMed

    Loomes, Graham

    2010-07-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for participants to perceive probabilities and payoffs in a particular way. This model organizes more of the data than any other extant model and generates a number of novel testable implications which are examined with new data.

  11. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a preconceived model.

  12. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  14. Solar models, neutrino experiments, and helioseismology

    NASA Technical Reports Server (NTRS)

    Bahcall, John N.; Ulrich, Roger K.

    1988-01-01

    The event rates and their recognized uncertainties are calculated for 11 solar neutrino experiments using accurate solar models. These models are also used to evaluate the frequency spectrum of the p- and g-mode oscillations of the sun. It is shown that the discrepancy between the predicted and observed event rates in the Cl-37 and Kamiokande II experiments cannot be explained by a 'likely' fluctuation in input parameters with the best estimates and uncertainties given in the present study. It is suggested that, whatever the correct solution to the solar neutrino problem, it is unlikely to be a 'trivial' error.

  15. Metal powder absorptivity: Modeling and experiment

    DOE PAGES

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.; ...

    2016-08-10

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  16. Fresh Veggies from Space

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Professor Marc Anderson of the University of Wisconsin-Madison developed a technology for use in plant-growth experiments aboard the Space Shuttle. Anderson's research and WCSAR's technology were funded by NASA and resulted in a joint technology licensed to KES Science and Technology, Inc. This transfer of space-age technology resulted in the creation of a new plant-saving product, an ethylene scrubber for plant growth chambers. This innovation presents commercial benefits for the food industry in the form of a new device, named Bio-KES. Bio-KES removes ethylene and helps to prevent spoilage. Ethylene accounts for up to 10 percent of produce losses and 5 percent of flower losses. Using Bio-KES in storage rooms and displays will increase the shelf life of perishable foods by more than one week, drastically reducing the costs associated with discarded rotten foods and flowers. The savings could potentially be passed on to consumers. For NASA, the device means that astronauts can conduct commercial agricultural research in space. Eventually, it may also help to grow food in space and keep it fresh longer. This could lead to less packaged food being taken aboard missions since it could be cultivated in an ethylene-free environment.

  17. Data production models for the CDF experiment

    SciTech Connect

    Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.; /Kosice, IEF /Duke U. /Taiwan, Inst. Phys. /University Coll. London /Fermilab /Rockefeller U. /Michigan U. /Pennsylvania U. /Glasgow U. /UCLA /Tsukuba U. /New Mexico U.

    2006-06-01

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

  18. Model-scale sound propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.

    1988-01-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but testing only the plywood and felt surfaces, was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties in the impedance values of the test surfaces.

  19. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    SciTech Connect

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (approximately a few Mbar) and high temperatures (approximately 20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure-dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.

  20. Fresh Water Life.

    ERIC Educational Resources Information Center

    Kestler, Carol Susan

    1991-01-01

    Describes methodology for a fresh water life study with elementary through college age students with suggestions for proper equipment, useful guides, and other materials. Proposes an activity for the collection and study of plankton. Includes background information. (MCO)

  1. Fresh water fish, Channa punctatus, as a model for pendimethalin genotoxicity testing: A new approach toward aquatic environmental contaminants.

    PubMed

    Ahmad, Irshad; Ahmad, Masood

    2016-11-01

    Pendimethalin (PND) is one of the common herbicides used worldwide. Fresh water fish, Channa punctatus, was exposed to PND in aquaria wherein its LC50 value was recorded to be 3.6 mg/L. Three sublethal (SL) concentrations, namely, 0.9, 1.8, and 2.7 mg/L, were selected for the evaluation of genotoxicity and oxidative stress generated in the fish. In vivo comet assay was carried out in the blood, liver, and gill cells after exposing the fish to the aforesaid SL concentrations of PND for 24, 48, 72, and 96 h. The results of the comet assay demonstrated the genotoxicity of PND in all three tissues. Induction of oxidative stress in the gill cells was affirmed by the increased lipid peroxidation (LPO) and decreased levels of reduced glutathione, superoxide dismutase, and catalase. Frequencies of erythrocytic nuclear abnormalities (ENA) and micronuclei (MN) were also used to assess the genotoxic potential of PND on C. punctatus. MN frequency did not show any enhancement after PND exposure, but the frequency of ENA such as kidney-shaped, segmented, and lobed nuclei showed a significant increase after 24-96 h. Thus, ENA seems to be a better biomarker than MN for PND-induced genotoxicity. © 2015 Wiley Periodicals, Inc. Environ Toxicol 31: 1520-1529, 2016.
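
    The three sublethal concentrations quoted above are simple fractions (25%, 50%, and 75%) of the reported LC50 of 3.6 mg/L, a common design for sublethal exposure series; a one-line check:

    ```python
    # Sublethal test concentrations as fixed fractions of a measured LC50.
    # LC50 and the resulting series (0.9, 1.8, 2.7 mg/L) are from the abstract;
    # the quarter/half/three-quarter fractions are our reading of that design.
    LC50_MG_PER_L = 3.6

    def sublethal_series(lc50, fractions=(0.25, 0.50, 0.75)):
        """Return sublethal concentrations as fixed fractions of an LC50."""
        return [round(f * lc50, 2) for f in fractions]

    print(sublethal_series(LC50_MG_PER_L))  # -> [0.9, 1.8, 2.7]
    ```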

  2. Field and modelling investigations of fresh-water plume behaviour in response to infrequent high-precipitation events, Sydney Estuary, Australia

    NASA Astrophysics Data System (ADS)

    Lee, Serena B.; Birch, Gavin F.; Lemckert, Charles J.

    2011-05-01

    Runoff from the urban environment is a major contributor of non-point source contamination for many estuaries, yet the ultimate fate of this stormwater within the estuary is frequently unknown in detail. The relationship between catchment rainfall and estuarine response within the Sydney Estuary (Australia) was investigated in the present study. A verified hydrodynamic model (Environmental Fluid Dynamics Computer Code) was utilised in concert with measured salinity data and rainfall measurements to determine the relationship between rainfall and discharge to the estuary, with particular attention being paid to a significant high-precipitation event. A simplified rational method for calculating runoff based upon daily rainfall, subcatchment area and runoff coefficients was found to replicate discharge into the estuary associated with the monitored event. Determining fresh-water supply based upon estuary conditions is a novel technique which may assist those researching systems where field-measured runoff data are not available and where only minimal field-measured information on catchment characteristics is obtainable. The study concluded that since the monitored fresh-water plume broke down within the estuary, contaminants associated with stormwater runoff due to high-precipitation events (daily rainfall > 50 mm) were retained within the system for a longer period than was previously recognised.
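
    The simplified rational method described above (runoff volume = runoff coefficient × rainfall depth × subcatchment area) can be sketched as follows; the subcatchment areas and coefficients are illustrative assumptions, not the study's values:

    ```python
    # Sketch of the simplified rational method: daily runoff volume per
    # subcatchment = runoff coefficient x rainfall depth x area, summed over
    # subcatchments. The areas and coefficients below are hypothetical.
    def daily_runoff_m3(rain_mm, subcatchments):
        """Total discharge volume (m^3) for one day of rainfall.

        subcatchments: iterable of (area_km2, runoff_coefficient) pairs.
        """
        rain_m = rain_mm / 1000.0
        return sum(area_km2 * 1e6 * rain_m * c for area_km2, c in subcatchments)

    # A 50 mm event (the high-precipitation threshold quoted above) over two
    # hypothetical subcatchments:
    event = daily_runoff_m3(50.0, [(12.0, 0.6), (8.0, 0.45)])
    ```

    With daily rainfall as the only forcing input, this kind of estimate can be checked against estuary salinity response where gauged runoff data are unavailable, which is the approach the abstract describes.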

  3. Cell motility: Combining experiments with modeling

    NASA Astrophysics Data System (ADS)

    Rappel, Wouter-Jan

    2013-03-01

    Cell migration and motility are pervasive processes in many biological systems. They involve intra-cellular signal transduction pathways that eventually lead to membrane extension and contraction. Here we describe our efforts to combine quantitative experiments with theoretical and computational modeling to gain fundamental insights into eukaryotic cell motion. In particular, we will focus on the amoeboid motion of Dictyostelium discoideum cells. This work is supported by the National Institutes of Health (P01 GM078586).

  4. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model, such as the expected background and its decomposition in the signal region. According to the model, the main background contributions around Q_ββ come from ²¹⁴Bi, ²²⁸Th, ⁴²K, ⁶⁰Co and α-emitting isotopes in the ²²⁶Ra decay chain, with a fraction depending on the assumed source positions.

  5. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include 1) ocean modeling and data assimilation studies, 2) diagnosis and theoretical studies, and 3) comparisons with locally detailed observations.

  7. Modelling growth of Escherichia coli O157:H7 in fresh-cut lettuce submitted to commercial process conditions: chlorine washing and modified atmosphere packaging.

    PubMed

    Posada-Izquierdo, Guiomar D; Pérez-Rodríguez, Fernando; López-Gálvez, Francisco; Allende, Ana; Selma, María V; Gil, María I; Zurera, Gonzalo

    2013-04-01

    Fresh-cut iceberg lettuce inoculated with Escherichia coli O157:H7 was submitted to chlorine washing (150 mg/L) and modified atmosphere packaging on a laboratory scale. Populations of E. coli O157:H7 were assessed in fresh-cut lettuce stored at 4, 8, 13 and 16 °C, using 6-8 replicates at each analysis point in order to capture experimental variability. The pathogen was able to grow at temperatures ≥8 °C, although at low temperatures growth data presented high variability between replicates. Indeed, at 8 °C after 15 days, some replicates did not show growth while other replicates did present an increase. A primary growth model was fitted to the raw growth data to estimate lag time and maximum growth rate. The prediction and confidence bands for the fitted growth models were estimated with the Monte Carlo method. The estimated maximum growth rates (log cfu/day) were 0.14 (95% CI: 0.06-0.31), 0.55 (95% CI: 0.17-1.20) and 1.43 (95% CI: 0.82-2.15) at 8, 13 and 16 °C, respectively. A square-root secondary model was satisfactorily derived from the estimated growth rates (R² > 0.80; Bf = 0.97; Af = 1.46). The predictive models and data obtained in this study are intended to improve quantitative risk assessment studies for E. coli O157:H7 in leafy green products.
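
    A square-root (Ratkowsky-type) secondary model of the kind derived above fits sqrt(mu_max) = b * (T - Tmin) as a straight line in temperature. The sketch below fits it to the three growth rates quoted in the abstract; the resulting b and Tmin are our illustrative fit, not the parameters reported in the paper:

    ```python
    import math

    # Growth rates (temp in C, mu_max in log cfu/day) quoted in the abstract.
    rates = [(8.0, 0.14), (13.0, 0.55), (16.0, 1.43)]

    def fit_sqrt_model(data):
        """Least-squares line through (T, sqrt(mu_max)); returns (b, Tmin)."""
        xs = [t for t, _ in data]
        ys = [math.sqrt(mu) for _, mu in data]
        n = len(data)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        t_min = mx - my / b  # temperature where the fitted line crosses zero
        return b, t_min

    def mu_max(temp_c, b, t_min):
        """Predicted maximum growth rate; zero at or below the notional Tmin."""
        return (b * (temp_c - t_min)) ** 2 if temp_c > t_min else 0.0

    b, t_min = fit_sqrt_model(rates)
    ```

    The fitted Tmin falls between 4 and 8 °C, consistent with the abstract's observation of growth at ≥8 °C but not at 4 °C.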

  8. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  9. Microbial Performance of Food Safety Control and Assurance Activities in a Fresh Produce Processing Sector Measured Using a Microbial Assessment Scheme and Statistical Modeling.

    PubMed

    Njage, Patrick Murigu Kamau; Sawe, Chemutai Tonui; Onyango, Cecilia Moraa; Habib, I; Njagi, Edmund Njeru; Aerts, Marc; Molenberghs, Geert

    2017-01-01

    Current approaches such as inspections, audits, and end product testing cannot detect the distribution and dynamics of microbial contamination. Despite the implementation of current food safety management systems, foodborne outbreaks linked to fresh produce continue to be reported. A microbial assessment scheme and statistical modeling were used to systematically assess the microbial performance of core control and assurance activities in five Kenyan fresh produce processing and export companies. Generalized linear mixed models and correlated random-effects joint models for multivariate clustered data followed by empirical Bayes estimates enabled the analysis of the probability of contamination across critical sampling locations (CSLs) and factories as a random effect. Salmonella spp. and Listeria monocytogenes were not detected in the final products. However, none of the processors attained the maximum safety level for environmental samples. Escherichia coli was detected in five of the six CSLs, including the final product. Among the processing-environment samples, the hand or glove swabs of personnel revealed a higher level of predicted contamination with E. coli, and 80% of the factories were E. coli positive at this CSL. End products showed higher predicted probabilities of having the lowest level of food safety compared with raw materials. The final products were E. coli positive despite the raw materials being E. coli negative for 60% of the processors. There was a higher probability of contamination with coliforms in water at the inlet than in the final rinse water. Four (80%) of the five assessed processors had poor to unacceptable counts of Enterobacteriaceae on processing surfaces. Personnel-, equipment-, and product-related hygiene measures to improve the performance of preventive and intervention measures are recommended.

  10. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model the force, F, applied to the projectile by the fabric is derived to be F = (8/9)E T* h^3/R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact, and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, p. 1231.
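    The quoted force law can be evaluated directly. A minimal sketch of the fabric force as stated in the abstract (parameter names are illustrative; consistent units are assumed):

    ```python
    def fabric_force(E, T_star, h, R):
        """Force applied to the projectile by the fabric (Walker 1999 model as
        quoted in the abstract): F = (8/9) * E * T* * h^3 / R^2.

        E      -- Young's modulus of the fabric
        T_star -- effective thickness (areal density / fiber density)
        h      -- fabric displacement on the axis of impact
        R      -- radius of the fabric deformation ("bulge")
        """
        return (8.0 / 9.0) * E * T_star * h ** 3 / R ** 2
    ```

    The cubic dependence on h means doubling the on-axis displacement increases the force eightfold, which is why the h(t) and R(t) measurements are the key experimental inputs.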

  11. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

  12. Argonne Bubble Experiment Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  13. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  14. Robust linear and non-linear models of NIR spectroscopy for detection and quantification of adulterants in fresh and frozen-thawed minced beef.

    PubMed

    Morsy, Noha; Sun, Da-Wen

    2013-02-01

    This study aimed to evaluate the potential of near infrared spectroscopy (NIRS) as a fast and non-destructive tool for detecting and quantifying different adulterants in fresh and frozen-thawed minced beef. Partial least squares regression (PLSR) models were built under cross validation and tested with different independent data sets, yielding prediction determination coefficients (R²_P) of 0.96, 0.94 and 0.95 with standard errors of prediction (SEP) of 5.39, 5.12 and 2.08% (w/w) for minced beef adulterated with pork, fat trimmings and offal, respectively. The performance of the developed models declined when the samples were in a frozen-thawed condition, yielding R²_P of 0.93, 0.82 and 0.95, with the SEP rising to 7.11, 9.10 and 2.38% (w/w), respectively. Linear discriminant analysis (LDA), partial least squares discriminant analysis (PLS-DA) and non-linear regression models (logistic, probit and exponential regression) were developed at the most relevant wavelengths to discriminate between pure (unadulterated) and adulterated minced beef. The classification accuracy of both types of models was high; the LDA, PLS-DA and exponential regression models in particular yielded 100% accuracy. The study demonstrated that VIS-NIR spectroscopy can be used reliably to detect and quantify the amount of adulterants added to minced beef with acceptable precision and accuracy.

  15. Forces between permanent magnets: experiments and model

    NASA Astrophysics Data System (ADS)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
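    The r^-4 behavior at large distances is what the standard far-field approximation predicts for two coaxial magnetic dipoles. A sketch under that assumption (this is the textbook dipole-dipole force, not the paper's full uniform-magnetisation model; the moments are illustrative inputs):

    ```python
    import math

    MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

    def coaxial_dipole_force(m1, m2, r):
        """Force between two coaxial magnetic dipoles with moments m1, m2 (A*m^2)
        separated by distance r (m): F = 3*mu0*m1*m2 / (2*pi*r^4)."""
        return 3.0 * MU_0 * m1 * m2 / (2.0 * math.pi * r ** 4)
    ```

    Doubling the separation reduces the force by a factor of 16, the r^-4 signature the experiment confirms.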

  16. Bucky gel actuator displacement: experiment and model

    NASA Astrophysics Data System (ADS)

    Ghamsari, A. K.; Jin, Y.; Zegeye, E.; Woldesenbet, E.

    2013-02-01

    Bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA’s remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence the BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict BGA maximum displacement based on the effect of these parameters. This model showed good agreement with reported results from the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated.

  17. Conceptualization of a fresh groundwater lens influenced by climate change: A modeling study of an arid-region island in the Persian Gulf, Iran

    NASA Astrophysics Data System (ADS)

    Mahmoodzadeh, Davood; Ketabchi, Hamed; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2014-11-01

    Understanding the fresh groundwater lens (FGL) behavior and potential threat of climatic-induced seawater intrusion (SWI) are significant for the future water resources management of many small islands. In this paper, the FGL of Kish Island, an arid-region case in the Persian Gulf, Iran, is modeled using two-dimensional (2D) and three-dimensional (3D) simulations. These simulations are based on the application of SUTRA, a density-dependent groundwater numerical model. Also, the numerical model parameters are calibrated using PEST, an automated parameter estimation code. Firstly a detailed conceptualization of the FGL model is completed to understand the sensitivity of the FGL to some particular aspects of the model prior to analysis of climate change simulations. For these investigations, the FGL system is defined based on Kish Island system to accomplish the integrated comparison of features of a conceptual model that are representative of real-world systems. This is the first study which adopts such an approach. The comparison of cross-sectional simulations suggests that the two-layer properties of the Kish Island aquifer have a significant influence on the FGL while the impacts of lateral-boundary irregularities are negligible. The impacts of sea-level rise (SLR), associated land-surface inundation (LSI), and variations in recharge rate on the FGL salinization of Kish Island are investigated numerically. Variations of SLR value (1-4 m) and net recharge rate (17-24 mm/year) are considered to cover a possible range of climatic scenarios in this arid-region island. The 2D and 3D simulation results demonstrate that LSI caused by SLR and recharge rate variation impacts are more important factors in the FGL in comparison to estimated SLR impacts without LSI. It is also shown that climate change impacts on the FGL are long-term to reach a new FGL equilibrium in the case of Kish Island's aquifer system. The comparative analysis of 2D and 3D results shows that three

  18. Microbial Successions Are Associated with Changes in Chemical Profiles of a Model Refrigerated Fresh Pork Sausage during an 80-Day Shelf Life Study

    PubMed Central

    David, Jairus R. D.; Gilbreth, Stefanie Evans; Smith, Gordon; Nietfeldt, Joseph; Legge, Ryan; Kim, Jaehyoung; Sinha, Rohita; Duncan, Christopher E.; Ma, Junjie; Singh, Indarpal

    2014-01-01

    Fresh pork sausage is produced without a microbial kill step and therefore chilled or frozen to control microbial growth. In this report, the microbiota in a chilled fresh pork sausage model produced with or without an antimicrobial combination of sodium lactate and sodium diacetate was studied using a combination of traditional microbiological methods and deep pyrosequencing of 16S rRNA gene amplicons. In the untreated system, microbial populations rose from 10² to 10⁶ CFU/g within 15 days of storage at 4°C, peaking at nearly 10⁸ CFU/g by day 30. Pyrosequencing revealed a complex community at day 0, with taxa belonging to the Bacilli, Gammaproteobacteria, Betaproteobacteria, Actinobacteria, Bacteroidetes, and Clostridia. During storage at 4°C, the untreated system displayed a complex succession, with species of Weissella and Leuconostoc that dominate the product at day 0 being displaced by species of Pseudomonas (P. lini and P. psychrophila) within 15 days. By day 30, a second wave of taxa (Lactobacillus graminis, Carnobacterium divergens, Buttiauxella brennerae, Yersinia mollaretti, and a taxon of Serratia) dominated the population, and this succession coincided with significant chemical changes in the matrix. Treatment with lactate-diacetate altered the dynamics dramatically, yielding a monophasic growth curve of a single species of Lactobacillus (L. graminis), followed by a uniform selective die-off of the majority of species in the population. Of the six species of Lactobacillus that were routinely detected, L. graminis became the dominant member in all samples, and its origins were traced to the spice blend used in the formulation. PMID:24928886

  19. Predictive modeling for growth of non- and cold-adapted Listeria Monocytogenes on fresh-cut cantaloupe at different storage temperatures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this study was to determine the growth kinetics of Listeria monocytogenes, with and without cold-adaption, on fresh-cut cantaloupe under different storage temperatures. Fresh-cut samples, spot inoculated with a four-strain cocktail of L. monocytogenes (about 3.2 log CFU/g), were exposed t...

  20. Fresh Frozen Plasma

    DTIC Science & Technology

    2009-03-01

    therapeutic means). FFP can be prepared either by separation from whole blood or collection via plasmapheresis. Fresh frozen plasma contains the...FFP can be further separated into cryoprecipitate and what is known as “cryo-poor plasma,” a product rarely used for therapeutic means. Plasma is the

  1. Long-Term Stored Hemoglobin-Vesicles, a Cellular Type of Hemoglobin-Based Oxygen Carrier, Has Resuscitative Effects Comparable to That for Fresh Red Blood Cells in a Rat Model with Massive Hemorrhage without Post-Transfusion Lung Injury

    PubMed Central

    Yamasaki, Keishi; Sakai, Hiromi; Otagiri, Masaki

    2016-01-01

    Hemoglobin-vesicles (HbV), encapsulating highly concentrated human hemoglobin in liposomes, were developed as a substitute for red blood cells (RBC), and their safety and efficacy in transfusion therapy have been confirmed in previous studies. Although HbV suspensions are structurally and physicochemically stable for at least 1 year at room temperature, based on in vitro experiments, the issue of whether the use of long-term stored HbV after a massive hemorrhage can be effective in resuscitations without adverse, post-transfusion effects remains to be clarified. We report herein on a comparison of the systemic response and the induction of organ injuries in hemorrhagic shock model rats resuscitated using 1-year-stored HbV, freshly packed RBC (PRBC-0) and 28-day-stored packed RBC (PRBC-28). The six-hour mortality after resuscitation was not significantly different among the groups. Arterial blood pressure and blood gas parameters revealed that, using HbV, recovery from the shock state was comparable to that when PRBC-0 was used. Although no significant change was observed in serum parameters reflecting liver and kidney injuries at 6 hours after resuscitation among the three resuscitation groups, results based on Evans Blue and protein leakage in bronchoalveolar lavage fluid, the lung wet/dry weight ratio and histopathological findings indicated that HbV as well as PRBC-0 was less predisposed to result in a post-transfusion lung injury than PRBC-28, as evidenced by low levels of myeloperoxidase accumulation and subsequent oxidative damage in the lung. The findings reported herein indicate that 1-year-stored HbV can effectively function as a resuscitative fluid without the induction of post-transfusion lung injury and that it is comparable to fresh PRBC, suggesting that HbV is a promising RBC substitute with a long shelf-life. PMID:27798697

  2. Fresh embryo donation for human embryonic stem cell (hESC) research: the experiences and values of IVF couples asked to be embryo donors

    PubMed Central

    Haimes, E.; Taylor, K.

    2009-01-01

    BACKGROUND This article reports on an investigation of the views of IVF couples asked to donate fresh embryos for research and contributes to the debates on: the acceptability of human embryonic stem cell (hESC) research, the moral status of the human embryo and embryo donation for research. METHODS A hypothesis-generating design was followed. All IVF couples in one UK clinic who were asked to donate embryos in 1 year were contacted 6 weeks after their pregnancy result. Forty-four in-depth interviews were conducted. RESULTS Interviewees were preoccupied with IVF treatment, and the request to donate was a secondary consideration. They used a complex and dynamic system of embryo classification. Initially, all embryos were important, but then their focus shifted to those that had most potential to produce a baby. At that point, ‘other’ embryos were less important, though interviewees later realised that they did not know what happened to them. Guessing that these embryos went to research, interviewees preferred not to contemplate what that might entail. The embryos that caused interviewees most concern were good quality embryos that might have produced a baby but went to research instead. ‘The’ embryo, the morally laden, but abstract, entity, did not play a central role in their decision-making. CONCLUSIONS This study, despite missing those who refused to donate embryos, suggests that debates on embryo donation for hESC research should include the views of embryo donors and should consider the social, as well as the moral, status of the human embryo. PMID:19502616

  3. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end-heated experiment and less than 30 percent recovered in the side-heated test.

  4. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D²/D₀² > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
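    The D-square law invoked above has a compact closed form. A minimal sketch, assuming the classical law D(t)² = D₀² − K·t with evaporation-rate constant K (symbols are illustrative; per the abstract, 1-3% nAl loading would correspond to reducing K by roughly 15%):

    ```python
    def d_squared(t, D0, K):
        """Squared drop diameter under the D-square law: D(t)^2 = D0^2 - K*t.
        D0 is the initial diameter (m); K is the evaporation-rate constant (m^2/s)."""
        return D0 ** 2 - K * t

    def drop_lifetime(D0, K):
        """Time at which the drop has fully evaporated (D^2 reaches zero)."""
        return D0 ** 2 / K
    ```

    Under this form, a 15% reduction in K extends the drop lifetime by the same factor of 1/0.85, about 18%.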

  5. Electrical Monitoring of Fresh Water Displacement in a Brackish Aquifer During Aquifer Storage and Recovery: Forward and Inverse Modeling Results

    NASA Astrophysics Data System (ADS)

    Levannier, A.; Delhomme, J.

    2003-12-01

    Aquifer storage and recovery (ASR) projects are now used to temporarily store water in the subsurface and to recover it when needed. When freshwater is injected into a brackish aquifer, a transition zone forms, due to mixing, diffusion and gravity. The front displacement and the width of the transition zone depend on the characteristics of the aquifer but, from repeated surveys conducted with an array of downhole electrodes placed against the borehole wall, the changes in the front position/shape can be continuously monitored. Synthetic data were created for a targeted ASR situation through hydrodynamic and hydrodispersive modeling (performed with a finite difference scheme) that gave the salt concentration distribution in the aquifer, as a function of space and time, during ASR inject/store/pump cycles. Concentrations were converted first into water resistivity values Rw, and then into formation resistivity values Rt through Archie's law (1), calibrated on logging data: Rt = (a/φ^m)Rw, where φ is the porosity, and a and m depend on the lithology. Based on this information, the response of downhole electrodes was computed by solving equation (2) (using a finite element modeling code) for electrical surveys conducted at repeated times during the planned ASR cycles, and in particular during the initial ASR testing phase.
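    Archie's law, equation (1) above, is simple to apply when converting water resistivity to formation resistivity. A sketch with placeholder constants (in the study, a and m were calibrated on logging data; the defaults below are common textbook values, not the calibrated ones):

    ```python
    def formation_resistivity(R_w, porosity, a=1.0, m=2.0):
        """Archie's law: R_t = (a / phi**m) * R_w.
        R_w      -- water (brine) resistivity, ohm*m
        porosity -- fractional porosity phi (0 < phi < 1)
        a, m     -- lithology-dependent constants (illustrative defaults)."""
        return (a / porosity ** m) * R_w
    ```

    Injecting fresh (more resistive) water raises R_w and hence R_t, which is what the downhole electrode array tracks as the front moves.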

  6. Vacuum membrane distillation: Experiments and modeling

    SciTech Connect

    Bandini, S.; Saavedra, A.; Sarti, G.C.

    1997-02-01

    Vacuum membrane distillation is a membrane-based separation process considered here to remove volatile organic compounds from aqueous streams. Microporous hydrophobic membranes are used to separate the aqueous stream from a gas phase kept under vacuum. The evaporation of the liquid stream takes place on one side of the membrane, and mass transfer occurs through the vapor phase inside the membrane. The role of operating conditions in the process performance is widely investigated in the case of dilute binary aqueous mixtures containing acetone, ethanol, isopropanol, ethylacetate, methylacetate, or methylterbutyl ether. Temperature, composition, flow rate of the liquid feed, and pressure downstream of the membrane are the main operating variables. Among these, the vacuum-side pressure is the major design factor since it greatly affects the separation efficiency. A mathematical model description of the process is developed, and the results are compared with the experiments. The model is finally used to predict the best operating conditions for the process in the case of benzene removal from waste waters.

  7. Experiment-Driven Modeling of Plasmonic Nanostructures

    NASA Astrophysics Data System (ADS)

    Hryn, Alexander John

    Plasmonic nanostructures can confine light at their surface in the form of surface plasmon polaritons (SPPs) or localized surface plasmons (LSPs) depending on their geometry. SPPs are excited on nano- and micropatterned surfaces, where the typical feature size is on the order of the wavelength of light. LSPs, on the other hand, can be excited on nanoparticles much smaller than the diffraction limit. In both cases, far-field optical measurements are used to infer the excited plasmonic modes, and theoretical models are used to verify those results. Typically, these theoretical models are tailored to match the experimental nanostructures in order to explain observed phenomena. In this thesis, I explore incorporating components of experimental procedures into the models to increase the accuracy of the simulated result, and to inform the design of future experiments. First, I examine SPPs on nanostructured metal films in the form of low-symmetry moiré plasmonic crystals. I created a general Bragg model to understand and predict the excited SPP modes in moiré plasmonic crystals based on the nanolithography masks used in their fabrication. This model makes use of experimental parameters such as periodicity, azimuthal rotation, and number of sequential exposures to predict the energies of excited SPP modes and the opening of plasmonic band gaps. The model is further expanded to apply to multiscale gratings, which have patterns that contain hierarchical periodicities: a sub-micron primary periodicity, and microscale superperiodicity. A new set of rules was established to determine how superlattice SPPs are excited, and informed development of a new fabrication technique to create superlattices with multiple primary periodicities that absorb light over a wider spectral range than other plasmonic structures. The second half of the thesis is based on development of finite-difference time-domain (FDTD) simulations of plasmonic nanoparticles. I created a new technique to model

  8. Modeling prevalence and counts from most probable number in a bayesian framework: an application to Salmonella typhimurium in fresh pork sausages.

    PubMed

    Gonzales-Barron, Ursula; Redmond, Grainne; Butler, Francis

    2010-08-01

    Prevalence and counts of Salmonella Typhimurium in fresh pork sausage packs at the point of retail were modeled by using Irish and United Kingdom retail surveys' data. A methodology for modeling a second-order distribution for the initial Salmonella concentration (λ0) in pork sausage at retail was presented, considering the uncertainty originated from the most-probable-number (MPN) serial dilutions. A conditional probability of observing the tube counts given true Salmonella concentration in a contaminated pack was built from the MPN triplets of every sausage tested. A posterior distribution was then modeled under the assumption that the counts from each of the portions of sausage mix stuffed into casings (and subsequently packed) are Poisson distributed. In order to model the variability of λ0 among contaminated sausage packs, MPN uncertainties were propagated to a predefined lognormal distribution. Because the sausage samples from the Irish survey were frozen prior to MPN analysis (which is expected to cause reduction in viable cells), the resulting distribution for λ0 appeared greatly underestimated (mean: 0.514 CFU/g; 95% confidence interval [CI]: 0.02 to 2.74 CFU/g). The λ0 distribution produced with the United Kingdom survey data (mean: 69.7 CFU/g; 95% CI: 15 to 200 CFU/g) was, however, more conservative, and is to be used along with the fitted distribution for prevalence of Salmonella Typhimurium in pork sausage packs in Ireland (gamma[37.997, 0.0013]; mean: 0.046; 95% CI: 0.032 to 0.064) as the main inputs of a stochastic consumer-phase exposure assessment model.
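    The core of the MPN construction described above — the probability of a tube outcome given a true concentration, under the Poisson assumption — can be sketched as a likelihood. This is a generic maximum-likelihood MPN estimator for illustration, not the authors' Bayesian posterior model; the volumes and tube counts used below are hypothetical:

    ```python
    import math

    def tube_positive_prob(lam, v):
        """Poisson assumption: a tube inoculated with volume v (g) of sample at
        true concentration lam (CFU/g) is positive with probability 1 - exp(-lam*v)."""
        return 1.0 - math.exp(-lam * v)

    def mpn_log_likelihood(lam, volumes, positives, tubes_per_dilution=3):
        """Log-likelihood of observing positives[i] positive tubes out of
        tubes_per_dilution at inoculum volume volumes[i], given concentration lam."""
        ll = 0.0
        for v, k in zip(volumes, positives):
            p = tube_positive_prob(lam, v)
            p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
            ll += k * math.log(p) + (tubes_per_dilution - k) * math.log(1.0 - p)
        return ll

    def mpn_estimate(volumes, positives):
        """Crude grid-search MLE for the concentration (the 'most probable number')."""
        grid = [10.0 ** (i / 100.0) for i in range(-300, 400)]  # ~1e-3 to ~1e4 CFU/g
        return max(grid, key=lambda lam: mpn_log_likelihood(lam, volumes, positives))
    ```

    For a hypothetical triplet series (0.1, 0.01 and 0.001 g inocula with 3, 1 and 0 positive tubes), the estimate lands near the classical table value of about 43 CFU/g.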

  9. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments Jason D. Sagers Applied Research Laboratories...statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model...experiments over a translationally invariant wedge, (2) to plan and conduct 3D propagation experiments over the Hudson Canyon scale-model bathymetry, and (3

  10. What Is the True Color of Fresh Meat? A Biophysical Undergraduate Laboratory Experiment Investigating the Effects of Ligand Binding on Myoglobin Using Optical, EPR, and NMR Spectroscopy

    ERIC Educational Resources Information Center

    Linenberger, Kimberly; Bretz, Stacey Lowery; Crowder, Michael W.; McCarrick, Robert; Lorigan, Gary A.; Tierney, David L.

    2011-01-01

    With an increased focus on integrated upper-level laboratories, we present an experiment integrating concepts from inorganic, biological, and physical chemistry content areas. Students investigate the effects of ligand strength on the spectroscopic properties of the heme center in myoglobin using UV-vis, ¹H NMR, and EPR…

  11. Developing a Model using High School Students for Restoring, Monitoring and Conducting Research in Fresh Water Wetlands

    NASA Astrophysics Data System (ADS)

    Blueford, J. R.

    2010-12-01

    Tule Ponds at Tyson Lagoon in eastern San Francisco Bay is one of the largest sag ponds created by the Hayward Fault that has not been destroyed by urbanization. In the 1990s, Alameda County Flood Control and Water Conservation District designed a constructed wetland to naturally filter stormwater before it entered Tyson Lagoon on its way to the San Francisco Bay. The Math Science Nucleus, a nonprofit organization, manages the facility and involves high school students through community service, service learning, and research. Students do a variety of tasks, from landscaping to scientific monitoring. Through contracts and grants, we create different levels of competency in which the students can participate. Engineers and scientists from the two agencies involved create tasks that need to be completed for successful restoration. Every year the students work on different components of restoration. A group of select student interns (usually juniors and seniors) collects and records the data during the year. Some of these students are part of a paid internship to ensure their regular attendance. Every year the students compile and discuss with scientists from the Math Science Nucleus what the data set might mean and how problems can be addressed. The data collected helps determine other longer-term projects. This presentation reviews the last 10 years of this very successful program and outlines the steps necessary to maintain a restoration project. It will also outline the different groups that do larger projects (scouts) and liaisons with schools that allow teachers to assign projects at our facility. The validity of the data obtained by students and how we standardize our data collection from soil analysis, water chemistry, monitoring faults, and biological observations will be discussed. This joint agency model of cooperation to provide high school students with a real research opportunity has benefits that allow the program to

  12. Estimation of Fresh Water and Salt Transports in the Northern Indian Ocean Using Aquarius and Model Simulations

    NASA Astrophysics Data System (ADS)

    D'Addezio, J. M.; Bulusu, S.; Murty, V. S. N.; Nyadjro, E. S.

    2014-12-01

    The Northern Indian Ocean presents a unique dipolar Sea Surface Salinity (SSS) structure, with the salty Arabian Sea (AS) on the west and the fresher Bay of Bengal (BoB) on the east. Using a combination of observational data, reanalyses, and model studies, the salinity structure of this dichotomous yet interconnected region is quantified. At the surface, the largest driver of interseasonal salinity variability is the monsoonal winds and their ability to transport volume between the two water masses. Time-depth profiles reveal a rich vertical salinity structure. The AS exhibits a mild salinity inversion, with salty waters above fresher ones for the majority of each annual cycle; this vertical gradient is approximately 1 psu between the surface and 200 m depth. In the BoB the opposite occurs: larger volumes of precipitation and river runoff create a lens of freshwater from the surface to approximately 50 m depth year-round. Salt and freshwater fluxes at the surface show a strong zonal component between the two basins along Sri Lanka twice a year. Within the basins, meridional fluxes dominate, especially along the coastal regions where the EICC and WICC flow. Meridional depth-integrated salt, freshwater, and volume transports across a section of each basin at 6°N reveal the approximate time it takes for each basin to return to equilibrium after strong transports during each monsoonal season advect salt and/or freshwater into or out of the respective region.
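    The depth-integrated salt transport referred to above is essentially the density-weighted vertical integral of the product of meridional velocity and salinity. A minimal sketch with synthetic profiles (all values hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical vertical profiles at a 6N section (per unit zonal width)
z = np.linspace(0.0, 200.0, 41)         # depth grid, m
rho = 1025.0                            # seawater density, kg/m^3
v = 0.1 * np.exp(-z / 100.0)            # meridional velocity, m/s
S = 35.0 - 1.0 * np.exp(-z / 50.0)      # salinity, psu (fresher lens at surface)

# Depth-integrated salt transport per unit width:
# rho * integral(v * S dz), with psu converted to kg salt per kg water
salt_transport = rho * np.trapz(v * S * 1e-3, z)   # kg of salt per second per m
print(round(float(salt_transport), 1))
```

    Freshwater transport follows the same pattern with S replaced by (S_ref - S)/S_ref for a chosen reference salinity.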

  13. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained using a reservoir model and history-matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and that extreme formation damage occurred, with an estimated 65% permeability reduction around the wellbore; the minifracture had been designed to extend 200 to 300 feet on each side of the wellbore. (2) Post-full-scale-stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, indicated by a 75% permeability reduction extending 20 feet on each side of the induced fracture; an induced fracture half-length of 100 feet was determined, compared with a designed half-length of 500 to 600 feet. (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests showing that extreme permeability anisotropy was not a factor for this zone; this lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  14. Braiding DNA: experiments, simulations, and models.

    PubMed

    Charvin, G; Vologodskii, A; Bensimon, D; Croquette, V

    2005-06-01

    DNA encounters topological problems in vivo because of its extended double-helical structure. As a consequence, the semiconservative mechanism of DNA replication leads to the formation of DNA braids or catenanes, which have to be removed for the completion of cell division. To get a better understanding of these structures, we have studied the elastic behavior of two braided nicked DNA molecules using a magnetic trap apparatus. The experimental data let us identify and characterize three regimes of braiding: a slightly twisted regime before the formation of the first crossing, followed by genuine braids which, at large braiding number, buckle to form plectonemes. Two different approaches support and quantify this characterization of the data. First, Monte Carlo (MC) simulations of braided DNAs yield a full description of the molecules' behavior and their buckling transition. Second, modeling the braids as a twisted swing provides a good approximation of the elastic response of the molecules as they are intertwined. Comparisons of the experiments and the MC simulations with this analytical model allow for a measurement of the diameter of the braids and its dependence upon entropic and electrostatic repulsive interactions. The MC simulations allow for an estimate of the effective torsional constant of the braids (at a stretching force F = 2 pN): C(b) approximately 48 nm (as compared with C approximately 100 nm for a single unnicked DNA). Finally, at low salt concentrations and for sufficiently large number of braids, the diameter of the braided molecules is observed to collapse to that of double-stranded DNA. We suggest that this collapse is due to the partial melting and fraying of the two nicked molecules and the subsequent right- or left-handed intertwining of the stretched single strands.

  15. Modeling active memory: Experiment, theory and simulation

    NASA Astrophysics Data System (ADS)

    Amit, Daniel J.

    2001-06-01

    Neurophysiological experiments on cognitively performing primates are described to argue that strong evidence exists for localized, non-ergodic (stimulus-specific) attractor dynamics in the cortex. The specific phenomena are delay activity distributions-enhanced spike-rate distributions resulting from training, which we associate with working memory. The anatomy of the relevant cortex region and the physiological characteristics of the participating elements (neural cells) are reviewed to provide a substrate for modeling the observed phenomena. Modeling is based on the properties of the integrate-and-fire neural element in the presence of an input current of Gaussian distribution. The theory of stochastic processes provides an expression for the spike emission rate as a function of the mean and the variance of the current distribution. Mean-field theory is then based on the assumption that spike emission processes in different neurons in the network are independent, and hence the input current to a neuron is Gaussian. Consequently, the dynamics of the interacting network is reduced to the computation of the mean and the variance of the current received by a cell of a given population, in terms of the constitutive parameters of the network and the emission rates of the neurons in the different populations. Within this logic we analyze the stationary states of an unstructured network, corresponding to spontaneous activity, and show that it can be stable only if locally the net input current of a neuron is inhibitory. This is then tested against simulations, and agreement is found to be excellent down to great detail, confirming the independence hypothesis. On top of stable spontaneous activity, keeping all parameters fixed, training is described by (Hebbian) modification of synapses between neurons responsive to a stimulus and other neurons in the module: synapses are potentiated between two excited neurons and depressed between an excited and a quiescent neuron
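    The integrate-and-fire picture described above can be illustrated with a toy simulation (all parameter values hypothetical): the firing rate of a leaky integrate-and-fire neuron driven by a Gaussian current depends on both the mean and the variance of that current, so a subthreshold mean drive still produces fluctuation-driven spikes when the variance is nonzero.

```python
import numpy as np

def lif_rate(mu, sigma, T=20.0, dt=1e-4, tau=0.020,
             v_thresh=1.0, v_reset=0.0, seed=0):
    """Firing rate of a leaky integrate-and-fire neuron driven by a
    Gaussian white-noise current with mean mu and amplitude sigma."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        v += dt * (mu - v) / tau + noise / tau   # Euler-Maruyama step
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes / T   # spikes per second

# Same subthreshold mean drive; spikes appear only with nonzero variance
print(lif_rate(mu=0.8, sigma=0.0), lif_rate(mu=0.8, sigma=0.02))
```

    The mean-field expression mentioned in the abstract plays the role of this simulation in closed form, giving the rate directly from the mean and variance of the current.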

  16. Development of FT-NIR models for the simultaneous estimation of chlorophyll and nitrogen content in fresh apple (Malus domestica) leaves.

    PubMed

    Tamburini, Elena; Ferrari, Giuseppe; Marchetti, Maria Gabriella; Pedrini, Paola; Ferro, Sergio

    2015-01-26

    Agricultural practices determine the level of food production and, to a great extent, the state of the global environment. During the last decades, the indiscriminate recourse to fertilizers as well as the nitrogen losses from land application have been recognized as serious issues of modern agriculture, globally contributing to nitrate pollution. The development of a reliable Near-Infra-Red Spectroscopy (NIRS)-based method, for the simultaneous monitoring of nitrogen and chlorophyll in fresh apple (Malus domestica) leaves, was investigated on a set of 133 samples, with the aim of estimating the nutritional and physiological status of trees, in real time, cheaply and non-destructively. By means of a FT (Fourier Transform)-NIR instrument, Partial Least Squares (PLS) regression models were developed, spanning a concentration range of 0.577%-0.817% for the total Kjeldahl nitrogen (TKN) content (R2 = 0.983; SEC = 0.012; SEP = 0.028), and of 1.534-2.372 mg/g for the total chlorophyll content (R2 = 0.941; SEC = 0.132; SEP = 0.162). Chlorophyll-a and chlorophyll-b contents were also evaluated (R2 = 0.913; SEC = 0.076; SEP = 0.101 and R2 = 0.899; SEC = 0.059; SEP = 0.101, respectively). All calibration models were validated by means of 47 independent samples. The NIR approach allows a rapid evaluation of the nitrogen and chlorophyll contents, and may represent a useful tool for determining the nutritional and physiological status of plants, in order to allow a correction of nutrition programs during the season.

  17. Development of FT-NIR Models for the Simultaneous Estimation of Chlorophyll and Nitrogen Content in Fresh Apple (Malus Domestica) Leaves

    PubMed Central

    Tamburini, Elena; Ferrari, Giuseppe; Marchetti, Maria Gabriella; Pedrini, Paola; Ferro, Sergio

    2015-01-01

    Agricultural practices determine the level of food production and, to great extent, the state of the global environment. During the last decades, the indiscriminate recourse to fertilizers as well as the nitrogen losses from land application have been recognized as serious issues of modern agriculture, globally contributing to nitrate pollution. The development of a reliable Near-Infra-Red Spectroscopy (NIRS)-based method, for the simultaneous monitoring of nitrogen and chlorophyll in fresh apple (Malus domestica) leaves, was investigated on a set of 133 samples, with the aim of estimating the nutritional and physiological status of trees, in real time, cheaply and non-destructively. By means of a FT (Fourier Transform)-NIR instrument, Partial Least Squares (PLS) regression models were developed, spanning a concentration range of 0.577%–0.817% for the total Kjeldahl nitrogen (TKN) content (R2 = 0.983; SEC = 0.012; SEP = 0.028), and of 1.534–2.372 mg/g for the total chlorophyll content (R2 = 0.941; SEC = 0.132; SEP = 0.162). Chlorophyll-a and chlorophyll-b contents were also evaluated (R2 = 0.913; SEC = 0.076; SEP = 0.101 and R2 = 0.899; SEC = 0.059; SEP = 0.101, respectively). All calibration models were validated by means of 47 independent samples. The NIR approach allows a rapid evaluation of the nitrogen and chlorophyll contents, and may represent a useful tool for determining nutritional and physiological status of plants, in order to allow a correction of nutrition programs during the season. PMID:25629703

  18. Collaborative Project: Understanding the Chemical Processes that Affect Growth Rates of Freshly Nucleated Particles

    SciTech Connect

    McMurry, Peter; Smith, James

    2015-11-12

    This final technical report describes our research activities that have, as the ultimate goal, the development of a model that explains growth rates of freshly nucleated particles. The research activities, which combine field observations with laboratory experiments, explore the relationship between concentrations of gas-phase species that contribute to growth and the rates at which those species are taken up. We also describe measurements of the chemical composition of freshly nucleated particles in a variety of locales, as well as properties (especially hygroscopicity) that influence their effects on climate.

  19. “What Fresh Hell Is This?” Victims of Intimate Partner Violence Describe Their Experiences of Abuse, Pain, and Depression

    PubMed Central

    Cerulli, Catherine; Poleshuck, Ellen; Raimondi, Christina; Veale, Stephanie; Chin, Nancy

    2012-01-01

    Traditionally, professionals working with intimate partner violence (IPV) survivors view a victim through a disciplinary lens, examining health and safety in isolation. Using focus groups with survivors, this study explored the need to address IPV consequences with an integrated model and to begin to understand the interconnectedness between violence, health, and safety. Focus group findings revealed that the inscription of pain on the body serves as a reminder of abuse, in turn triggering emotional and psychological pain and disrupting social relationships. In many cases, the physical abuse had stopped, but the abuser relentlessly reminded and retraumatized the victim through shared parenting, prolonged court cases, and the like. This increased participants' exhaustion and frustration, making the act of daily living overwhelming. PMID:23226694

  20. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  1. Nuclear reaction modeling, verification experiments, and applications

    SciTech Connect

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of these spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  2. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. 
Cross-site science questions are being developed that will have a

  3. Modeling the Classic Meselson and Stahl Experiment.

    ERIC Educational Resources Information Center

    D'Agostino, JoBeth

    2001-01-01

    Points out the importance of molecular models in biology and chemistry. Presents a laboratory activity on DNA. Uses different colored wax strips to represent "heavy" and "light" DNA, cesium chloride for identification of small density differences, and three different liquids with varying densities to model gradient…

  4. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  5. Fundamental efficiency of nanothermophones: modeling and experiments.

    PubMed

    Vesterinen, V; Niskanen, A O; Hassel, J; Helistö, P

    2010-12-08

    Scaling down the dimensions of thermoacoustic sound sources (thermophones) improves efficiency by means of reducing speaker heat capacity. Recent experiments with nanoscale thermophones have revealed properties which are not fully understood theoretically. We develop a Green's function formalism which quantitatively explains some observed discrepancies, e.g., the effect of a heat-absorbing substrate in the proximity of the sound source. We also find a generic ultimate limit for thermophone efficiency. We verify the theory with experiments and finite difference method simulations which deal with thermoacoustically operated suspended arrays of nanowires. The efficiency of our devices is measured to be 1 order of magnitude below the ultimate bound. At low frequencies this mainly results from the presence of a substrate. At high frequencies, on the other hand, the efficiency is limited by the heat capacity of the nanowires. Measured sound pressure level and efficiency are in good agreement with simulations. We discuss the feasibility of reaching the ultimate limit in practice.

  6. Annual modulation experiments, galactic models and WIMPs

    NASA Astrophysics Data System (ADS)

    Hudson, Robert G.

    Our task in this paper is to examine some recent experiments (in the period 1996-2002) bearing on the issue of whether there is dark matter in the universe in the form of neutralino WIMPs (weakly interacting massive particles). Our main focus is an experiment performed by the DAMA group that claims to have found an 'annual modulation signature' for the WIMP. DAMA's result has been hotly contested by two other groups, EDELWEISS and CDMS, and we study the details of the experiments performed by all three groups. Our goal is to investigate the philosophic and sociological implications of this controversy. In particular, using an innovative theoretical strategy suggested by Copi and Krauss (2003, 'Comparing interaction rate detectors for weakly interacting massive particles with annual modulation detectors', Physical Review D, 67, 103507), we suggest a new way of resolving discordant experimental data, extending a previous analysis by Franklin (2002, Selectivity and Discord, Pittsburgh: University of Pittsburgh Press). In addition, we are in a position to contribute substantively to the debate between realists and constructive empiricists. Finally, from a sociological standpoint, we remark that DAMA's work has been valuable in mobilizing other research teams and providing them with a critical focus.

  7. Vortex microscope: analytical model and experiment

    NASA Astrophysics Data System (ADS)

    Masajada, Jan; Popiołek-Masajada, Agnieszka; Szatkowski, Mateusz; Plociniczak, Łukasz

    2015-11-01

    We present an analytical model describing Gaussian beam propagation through an off-axis vortex lens and a set of axially positioned ideal lenses. The model is derived on the basis of the Fresnel diffraction integral and is extended to the case of a vortex lens with arbitrary topological charge m. We show that the propagating Gaussian beam can be represented by a function G that depends on four coefficients. When propagating from one lens to another, the function holds its form but the coefficients change.
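    As a numerical cross-check of such a model, Gaussian-beam propagation through an off-axis vortex phase can be simulated with a Fresnel transfer function in the Fourier domain. A sketch with hypothetical grid and beam parameters (not the paper's closed-form G):

```python
import numpy as np

# Grid
N, L = 256, 10e-3                        # points, window size (m)
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

# Gaussian beam passed through an off-axis vortex of topological charge m
w0, m, x_off = 1e-3, 1, 0.2e-3
phi_off = np.arctan2(Y, X - x_off)       # azimuth about the displaced vortex core
field = np.exp(-r**2 / w0**2) * np.exp(1j * m * phi_off)

# Fresnel (paraxial) propagation over distance z at wavelength lam,
# applied as a transfer function in the Fourier domain
lam, z = 633e-9, 0.1
fx = np.fft.fftfreq(N, d=L/N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
out = np.fft.ifft2(np.fft.fft2(field) * H)

# The unit-modulus transfer function conserves total power
P_in, P_out = np.abs(field)**2, np.abs(out)**2
print(np.allclose(P_in.sum(), P_out.sum()))
```

    An ideal lens of focal length f would be inserted between propagation steps as a quadratic phase factor exp(-1j*pi*r**2/(lam*f)).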

  8. Modeling of modification experiments involving neutral-gas release

    SciTech Connect

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools.

  9. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
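    The "instant experience" idea can be sketched in code: alongside the residual plot of the real fit, one generates residual sets simulated under the fitted model, so the analyst learns what such plots look like when the assumptions hold by construction. A minimal sketch with synthetic data (plotting omitted; names and values are illustrative, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a simple linear normal model y = a + b*x + eps to synthetic data
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)
b, a = np.polyfit(x, y, 1)          # slope, intercept
fitted = a + b * x
resid = y - fitted
sigma = resid.std(ddof=2)           # residual standard deviation

# "Instant experience": residual sets simulated under the fitted model,
# for side-by-side comparison with the real residual-vs-fitted plot
simulated = [rng.normal(0.0, sigma, n) for _ in range(8)]

print(round(float(resid.mean()), 6))
```

    If the real residual plot stands out from the simulated panels, one of the model assumptions is likely violated.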

  10. Silicon Carbide Derived Carbons: Experiments and Modeling

    SciTech Connect

    Kertesz, Miklos

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; in fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and the physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  11. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models, for use in mission planning tools, from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
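    The parameter-identification step can be illustrated with a least-squares fit of an assumed analytical model to a small set of measurements. The quadratic model form and all values below are hypothetical stand-ins, not the FRAME formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "measurements": harmonic noise level vs. advance ratio mu,
# with an assumed quadratic source model SPL(mu) = c0 + c1*mu + c2*mu^2
mu = np.linspace(0.05, 0.35, 12)               # measured operating conditions
c_true = np.array([95.0, -30.0, 200.0])        # underlying source parameters
spl = c_true[0] + c_true[1] * mu + c_true[2] * mu**2 \
    + rng.normal(0.0, 0.5, mu.size)            # measurement noise, dB

# Identify the model parameters from the small measurement set
A = np.column_stack([np.ones_like(mu), mu, mu**2])
c_fit, *_ = np.linalg.lstsq(A, spl, rcond=None)

# The fitted model then estimates noise at unmeasured operating conditions
print(np.round(c_fit, 1))
```

    Because the identified parameters are tied to non-dimensional governing quantities, the fitted model can extrapolate to operating conditions not directly measured, as the abstract describes.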

  12. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  13. Mode localization experiments on a ribbed antenna

    NASA Technical Reports Server (NTRS)

    Levine-West, M. B.; Salama, M. A.

    1992-01-01

    In this paper, the mode localization (ML) phenomenon is investigated experimentally and analytically to determine the influence of its parameters. For this purpose, a full-scale 12-rib loosely-coupled antenna testbed with small imperfections is dynamically tested for various levels of inter-rib coupling stiffness and excitation force. The experimental results are described herein. Using a simplified numerical model of the structure, a sensitivity analysis of the modal behavior is also performed. The numerical and experimental results are shown to agree remarkably well, thereby providing conclusive validation of the ML phenomenon on a testbed having the dynamic characteristics of space structures.

  14. Computational Model of Fluorine-20 Experiment

    NASA Astrophysics Data System (ADS)

    Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne

    2015-10-01

    The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses unconventional measurement techniques in that 20F will be implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur and the escape of the resultant photons will distort the measured spectrum. Thus, a Monte Carlo simulation has been constructed using EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model. Wittenberg University.
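    A toy version of such a Monte Carlo, assuming a simple allowed spectrum shape and a crude photon-escape model (all parameters illustrative, not the EGSnrc simulation), shows how escaping bremsstrahlung shifts the observed spectrum downward:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model of spectrum distortion from bremsstrahlung photon escape.
# Shape and parameters are illustrative only, not the EGSnrc model.
Q = 5.4  # approximate 20F beta endpoint energy, MeV

def sample_beta(n):
    """Rejection-sample a simple allowed-like shape p(E) ~ E^2 (Q-E)^2."""
    p_max = (Q / 2.0) ** 4          # maximum of E^2 (Q-E)^2, at E = Q/2
    out = []
    while len(out) < n:
        E = rng.uniform(0.0, Q)
        if rng.uniform(0.0, p_max) < (E ** 2) * (Q - E) ** 2:
            out.append(E)
    return np.array(out)

E_true = sample_beta(20000)

# With some probability a bremsstrahlung photon escapes the scintillator,
# removing a random fraction of the electron's energy from the measurement
escape = rng.uniform(0.0, 1.0, E_true.size) < 0.05
E_obs = E_true - escape * rng.uniform(0.0, 0.5, E_true.size) * E_true

print(E_obs.mean() < E_true.mean())
```

    Comparing the histograms of E_obs and E_true gives the distortion that the correction procedure must undo before the CVC-sensitive shape factor can be extracted.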

  15. Plasma gun pellet acceleration modeling and experiment

    SciTech Connect

    Kincaid, R.W.; Bourham, M.A.; Gilligan, J.G.

    1996-12-31

    Modifications to the electrothermal plasma gun SIRENS have been completed to allow acceleration experiments using plastic pellets. The 1-D, time-dependent code ODIN has been extended to include pellet friction, momentum, and kinetic energy, with options for variable barrel length. Results from the new version of the code, POSEIDON, compare favorably with experimental data and with results from ODIN. Predicted values show an increasing pellet velocity along the barrel length, reaching a 2 km/s exit velocity. Velocities measured at three locations along the barrel showed good correlation with predicted values. The code has also been used to investigate the effectiveness of longer pulse lengths on pellet velocity, using simulated ramp-up and ramp-down currents with flat top, and triangular current pulses with early and late peaking. 16 refs., 5 figs.
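    The pellet momentum balance such a code integrates (pressure drive minus bore friction) can be sketched as a simple explicit time integration. The pressure decay law and all parameter values below are hypothetical, not the SIRENS/POSEIDON inputs:

```python
# Hypothetical pellet and barrel parameters (not the SIRENS values)
m = 1.0e-3        # pellet mass, kg
A = 5.0e-5        # bore cross-section, m^2
L_barrel = 0.5    # barrel length, m
p0 = 50.0e6       # driving pressure at the breech, Pa
mu_f = 0.3        # friction coefficient at the bore wall
F_n = 10.0        # effective normal force from bore contact, N

# Explicit time integration of m*dv/dt = p(x)*A - mu_f*F_n
dt, x, v = 1.0e-7, 0.0, 0.0
while x < L_barrel:
    p = p0 / (1.0 + 10.0 * x / L_barrel)   # crude pressure decay with travel
    v += dt * (p * A - mu_f * F_n) / m
    x += dt * v
print(round(v, 1))   # exit velocity, m/s
```

    A full electrothermal code couples this momentum equation to the plasma energy balance that sets p(x, t); here the decay law simply stands in for that coupling.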

  16. Model experiments of superlubricity of graphite

    NASA Astrophysics Data System (ADS)

    Dienwiebel, Martin; Pradeep, Namboodiri; Verhoeven, Gertjan S.; Zandbergen, Henny W.; Frenken, Joost W. M.

    2005-02-01

    Graphite is known to be a good solid lubricant. Its low-friction behavior is traditionally ascribed to a low resistance to shear. We have recently observed that the ultra-low friction found in friction force microscopy experiments on graphite is due to an effect called superlubricity [M. Dienwiebel, G. S. Verhoeven, N. Pradeep, J.W.M. Frenken, J.A. Heimberg, H.W. Zandbergen, Phys. Rev. Lett. 92 (2004) 126101]. Here, we provide additional experimental evidence that superlubricity takes place between a small graphite flake attached to the scanning tip and the graphite surface. Finally, we speculate about the significance of this effect for the lubricating properties of graphite.

  17. High precision modeling for fundamental physics experiments

    NASA Astrophysics Data System (ADS)

    Rievers, Benny; Nesemann, Leo; Costea, Adrian; Andres, Michael; Stephan, Ernst P.; Laemmerzahl, Claus

    With growing experimental accuracies and high precision requirements for fundamental physics space missions, the need for accurate numerical modeling techniques is increasing. Motivated by the challenge of length stability in cavities and optical resonators, we propose the development of a high precision modeling tool for the simulation of thermomechanical effects up to a numerical precision of 10^-20. Exemplary calculations for simplified test cases demonstrate the general feasibility of high precision calculations and point out the high complexity of the task. A tool for high precision analysis of complex geometries will have to use new data types and advanced FE solver routines, and implement new methods for the evaluation of numerical precision.

  18. Modeling and analysis of pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, J. R.

    1986-01-01

    The objectives were: to improve the pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors, by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).

  19. Flexible robot control: Modeling and experiments

    NASA Technical Reports Server (NTRS)

    Oppenheim, Irving J.; Shimoyama, Isao

    1989-01-01

    Described here is a model and its use in experimental studies of flexible manipulators. The analytical model uses the equivalent of Rayleigh's method to approximate the displaced shape of a flexible link as the static elastic displacement which would occur under end rotations as applied at the joints. The generalized coordinates are thereby expressly compatible with joint motions and rotations in serial link manipulators, because the amplitude variables are simply the end rotations between the flexible link and the chord connecting the end points. The equations for the system dynamics are quite simple and can readily be formulated for the multi-link, three-dimensional case. When the flexible links possess mass and (polar moment of) inertia which are small compared to the concentrated mass and inertia at the joints, the analytical model is exact and displays the additional advantage of reduction in system dimension for the governing equations. Four series of pilot tests have been completed. Studies on a planar single-link system were conducted at Carnegie-Mellon University, and tests conducted at Toshiba Corporation on a planar two-link system were then incorporated into the study. A single link system under three-dimensional motion, displaying biaxial flexure, was then tested at Carnegie-Mellon.

  20. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular, for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: (1) development of a spectral code for moving boundary problems; and (2) diffusivity measurements on concentrated and supersaturated TGS solutions. Progress made during this seventh half-year period is reported.

  1. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1992-01-01

    The development is examined of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.

  2. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and diffusivity measurements on concentrated and supersaturated TGS solutions. During this eighth half-year period, good progress was made on these tasks.

  3. Characterizing nanoparticle interactions: Linking models to experiments

    SciTech Connect

    Ramakrishnan, S.; Zukoski, C. F.

    2000-07-15

    Self-assembly of nanoparticles involves manipulating particle interactions such that attractions are on the order of the average thermal energy in the system. If the self-assembly is to result in an ordered packing, an understanding of their phase behavior is necessary. Here we test the ability of simple pair potentials to characterize the interactions and phase behavior of silicotungstic acid (STA), a 1.2 nm particle. The strength of interaction is controlled by dispersing STA in different background salt concentrations. The experimental variables used in characterizing the interactions are the osmotic compressibility (dπ/dρ), the second virial coefficient (B2), the relative solution viscosity (η/η_c), and the solubility (ρσ³)_sat. Various techniques are then developed to extract the parameters of the square-well, adhesive hard sphere (AHS), and Yukawa pair potentials that best describe the experimental data. The AHS model describes the solution thermodynamic behavior only where the system is weakly attractive but, as would be expected, fails when long-range repulsions or nonmonotonic pair potentials become important. Model-free representations are presented which offer the opportunity to extract pair potential parameters. (c) 2000 American Institute of Physics.
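
    To illustrate how a pair potential maps onto one of the listed observables, the sketch below evaluates the second virial coefficient B2 of a square-well fluid both in closed form and by direct integration of B2 = -2π ∫ (exp(-u(r)/kT) - 1) r² dr. The parameter values are invented for illustration and are not fitted STA values:

```python
import math

def b2_square_well(sigma, lam, eps_over_kT):
    """Analytic B2 for a square-well fluid: hard core of diameter sigma,
    well of width (lam - 1)*sigma and depth eps (given in units of kT)."""
    b0 = 2.0 * math.pi * sigma**3 / 3.0
    return b0 * (1.0 - (lam**3 - 1.0) * (math.exp(eps_over_kT) - 1.0))

def b2_numeric(sigma, lam, eps_over_kT, n=200_000):
    """B2 = -2*pi * integral of (exp(-u/kT) - 1) r^2 dr, midpoint rule.
    Inside the hard core exp(-u/kT) = 0; inside the well it is exp(eps/kT)."""
    r_max = 3.0 * lam * sigma   # integrand vanishes beyond the well
    h = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        if r < sigma:
            boltz = 0.0
        elif r < lam * sigma:
            boltz = math.exp(eps_over_kT)
        else:
            boltz = 1.0
        total += (boltz - 1.0) * r * r * h
    return -2.0 * math.pi * total

sigma, lam, beta_eps = 1.2, 1.5, 1.0   # illustrative: nm, width ratio, eps/kT
print(b2_square_well(sigma, lam, beta_eps))  # ≈ -11.15: attraction-dominated
```

    A negative B2 signals net attraction, which is the regime the abstract connects to self-assembly; fitting consists of adjusting (sigma, lam, eps) until B2 and the other observables match the measurements.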

  4. Indian Consortia Models: FORSA Libraries' Experiences

    NASA Astrophysics Data System (ADS)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With increases in prices of journals, shrinking library budgets and cuts in subscriptions to journals over the years, there has been a big challenge facing Indian library professionals to cope with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries in forming consortia at different levels. The types of consortia identified are generally based on various models evolved in India in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under `Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation; thereby everyone benefits. As such, FORSA has realized four consortia, viz. Nature Online Consortium; Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; Consortium for Scientific American Online Archive (EBSCO); and Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  5. Experiments and Valve Modelling in Thermoacoustic Device

    NASA Astrophysics Data System (ADS)

    Duthil, P.; Baltean Carlès, D.; Bétrancourt, A.; François, M. X.; Yu, Z. B.; Thermeau, J. P.

    2006-04-01

    In a so-called heat driven thermoacoustic refrigerator, using either a pulse tube or a lumped boost configuration, heat pumping is induced by Stirling-type thermodynamic cycles within the regenerator. The time phase between acoustic pressure and flow rate throughout must then be close to that met for a purely progressive wave. The study presented here describes the experimental characterization of passive elements such as valves, tubes and tanks which are likely to act on this phase relationship when included in the propagation line of the wave resonator. In order to characterize these elements from the acoustic point of view, systematic measurements of the acoustic field are performed while varying several parameters: mean pressure, oscillation frequency, and supplied heat power. The acoustic waves are generated by a thermoacoustic prime mover driving a pulse tube refrigerator. The experimental results are then compared with the solutions obtained from various one-dimensional linear models including nonlinear correction factors. It turns out that when a nonsymmetrical valve is used, and for large dissipative effects, the measurements disagree with the linear modelling, and nonlinear behaviour of this particular element is shown.

  6. MHD Models and Laboratory Experiments of Jets

    NASA Astrophysics Data System (ADS)

    Gardiner, T. A.; Frank, A.; Blackman, E. G.; Lebedev, S. V.; Chittenden, J. P.; Ampleford, D.; Bland, S. N.; Ciardi, A.; Sherlock, M.; Haines, M. G.

    Jet research has long relied upon a combination of analytical, observational and numerical studies to elucidate the complex phenomena involved. One element missing from these studies (which other physical sciences utilize) is the controlled experimental investigation of such systems. With the advent of high-power lasers and fast Z-pinch machines it is now possible to experimentally study similar systems in a laboratory setting. Such investigations can contribute in two useful ways. They can be used for comparison with numerical simulations as a means to validate simulation codes. More importantly, however, such investigations can also be used to complement other jet research, leading to fundamentally new knowledge. In the first part of this article, we analyze the evolution of magnetized wide-angle winds in a collapsing environment. We track the ambient and wind mass separately and describe a physical mechanism by which an ionized central wind can entrain the ambient gas giving rise to internal shells of molecular material on short time scales. The formation of internal shells in molecular outflows has been found to be an important ingredient in describing the observations of convex spurs in P-V diagrams (Hubble wedges in M-V diagrams). In the second part, we present astrophysically relevant experiments in which supersonic jets are created using a conical wire array Z-pinch. The conically convergent flow generates a standing shock around the axis which collimates the flow into a Mach ~ 30 jet. The jet formation process is closely related to the work of Cantó et al. (1988) for hydrodynamic jet collimation. The influence of radiative cooling on collimation and stability is studied by varying the wire material (Al, Fe, and W).

  7. Process modelling for space station experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1988-01-01

    The work performed during the first year, 1 Oct. 1987 to 30 Sept. 1988, involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied. One involves growth from solution in a closed container; the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC, it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area. Work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.

  8. Differential equation modeling of HIV viral fitness experiments: model identification, model selection, and multimodel inference.

    PubMed

    Miao, Hongyu; Dykes, Carrie; Demeter, Lisa M; Wu, Hulin

    2009-03-01

    Many biological processes and systems can be described by a set of differential equation (DE) models. However, the literature on statistical inference for DE models is very sparse. We propose statistical estimation, model selection, and multimodel averaging methods for HIV viral fitness experiments in vitro that can be described by a set of nonlinear ordinary differential equations (ODE). The parameter identifiability of the ODE models is also addressed. We apply the proposed methods and techniques to experimental data of viral fitness for HIV-1 mutant 103N. We expect that the proposed modeling and inference approaches for the DE models can be widely used for a variety of biomedical studies.
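
    The estimation step the abstract describes can be sketched, under strong simplifications, as least-squares fitting of one parameter of a toy growth ODE dV/dt = r·V; the data and the model here are invented for illustration and are not the paper's HIV fitness model or 103N data set:

```python
import math

def rk4(f, y0, ts):
    """Classic 4th-order Runge-Kutta integration over time points ts."""
    ys = [y0]
    for t0, t1 in zip(ts, ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h / 2 * k1)
        k3 = f(t0 + h / 2, y + h / 2 * k2)
        k4 = f(t1, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ys

# Hypothetical viral-load observations, roughly exp(t), with true r = 1
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
data = [1.0, 2.7, 7.4, 20.0, 54.5]

def sse(r):
    """Sum of squared errors between the ODE solution and the data."""
    pred = rk4(lambda t, v: r * v, data[0], ts)
    return sum((p - d) ** 2 for p, d in zip(pred, data))

# Crude 1-D grid search for the growth-rate parameter r
r_hat = min((k * 0.001 for k in range(1, 3000)), key=sse)
print(round(r_hat, 2))  # ≈ 1.0
```

    Real DE inference replaces the grid search with gradient-based optimizers and adds identifiability analysis and information-criterion-based model selection, as the abstract outlines.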

  9. Integration of User Profiles: Models and Experiments in Information Retrieval.

    ERIC Educational Resources Information Center

    Myaeng, Sung H.; Korfhage, Robert R.

    1990-01-01

    Discussion of the interpretation of user queries in information retrieval highlights theoretical models that utilize user characteristics maintained in the form of a user profile. Various query/profile interaction models are identified, and an experiment is described that tested the relevance of retrieved documents based on various models. (29…

  10. FIELD EXPERIMENTS AND MODELING AT CDG AIRPORTS

    NASA Astrophysics Data System (ADS)

    Ramaroson, R.

    2009-12-01

    Richard Ramaroson1,4, Klaus Schaefer2, Stefan Emeis2, Carsten Jahn2, Gregor Schürmann2, Maria Hoffmann2, Mikhael Zatevakhin3, Alexandre Ignatyev3. 1ONERA, Châtillon, France; 4SEAS, Harvard University, Cambridge, USA; 2FZK, Garmisch, Germany; 3FSUE SPbAEP, St Petersburg, Russia. Two-month field campaigns were organized at CDG airports in autumn 2004 and summer 2005. Air quality and ground air traffic emissions were monitored continuously at terminals and taxi-runways, along with meteorological parameters onboard trucks and with a SODAR. This paper analyses the characteristics of commercial engine emissions at airports and their effects on gas pollutants and airborne particles, coupled to meteorology. LES model results for PM dispersion coupled to microphysics in the PBL are compared to measurements. Winds and temperature at the surface and their vertical profiles have been recorded together with turbulence. SODAR observations show the time development of the mixing layer depth and turbulent mixing in summer up to 800 m. Active low-level jets and their regional extent have been observed and analyzed. PM number and mass size distribution, morphology and chemical contents are investigated. Formation of new ultra-fine volatile (UFV) particles in the ambient plume downstream of running engines is observed. Soot particles are observed at significant levels at high power thrusts at take-off (TO) and on touch-down, whereas at the lower thrusts used at taxi and aprons the UFV PM emissions become higher. Ambient airborne PM1/2.5 is closely correlated to air traffic volume and shows a maximum beside runways. The PM number distribution at airports is composed mainly of volatile UFV PM, abundant at the aprons. Ambient PM mass in autumn is higher than in summer. The expected differences between TO and taxi emissions are confirmed for NO, NO2, speciated VOC and CO. NO/NO2 emissions are larger at runways due to higher power. Reactive VOC and CO are more produced at low powers during idling at

  11. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1994-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 °C between 10 and 60 °C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to diffusion direction; and increase of the optical path length in the solution cell to 12 mm.

  12. Laboratory modeling of ionospheric heating experiments

    NASA Astrophysics Data System (ADS)

    Starodubtsev, M. V.; Nazarov, V. V.; Gushchin, M. E.; Kostrov, A. V.

    2016-10-01

    Turbulent plasma processes, such as those which occur in the Earth's ionosphere during ionospheric heating by powerful radio waves, were studied under laboratory conditions, and new physical models of small-scale ionospheric turbulence are proposed as a result of these studies. It is shown here that the mechanism of small-scale plasma filamentation can be connected with the thermal self-channeling of Langmuir waves. During this process, Langmuir waves are guided by a plasma channel, which in turn is formed by the guided waves through a thermal plasma nonlinearity. The spectrum of the self-guided Langmuir waves exhibits sidebands whose features are similar to stimulated electromagnetic emission. We present two mechanisms of sideband generation. The first mechanism can be observed during the formation of the plasma channel and is connected with the parametric shift in the frequency of the self-channeling wave. The second mechanism is connected with the scattering of the self-channeling wave on the low-frequency eigenmodes of the plasma irregularity.

  13. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
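
    The design criterion can be illustrated on a simplified case. The paper's method uses a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between multivariate predictive densities; the sketch below computes the same divergence on binned (histogram) predictive densities instead, with invented bin probabilities, to show what the criterion rewards:

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (natural log) between two discrete
    distributions given as equal-length probability lists."""
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(ai + bi) / 2 for ai, bi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy predictive densities of two competing models, binned over an output grid
model_a = [0.7, 0.2, 0.1, 0.0]
model_b = [0.0, 0.1, 0.2, 0.7]

print(jsd(model_a, model_a))            # 0.0: identical predictions
print(jsd([1.0, 0.0], [0.0, 1.0]))      # ln 2: fully separated predictions
```

    An experiment whose predictive densities sit near the ln 2 bound is maximally informative for model selection, which is what the Optimal Experiment Design procedure searches for.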

  14. A Community Mentoring Model for STEM Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  15. Underwater Blast Experiments and Modeling for Shock Mitigation

    SciTech Connect

    Glascoe, L; McMichael, L; Vandersall, K; Margraf, J

    2010-03-07

    A simple but novel mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially-submerged structure is evaluated using scaled aquarium experiments and numerical modeling. Scaled, water-tamped explosive experiments were performed using three-gallon aquariums. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept were found to effectively dampen the explosive response of the aluminum plate and reduce the final displacement at plate center by approximately half. The finite element model used for the initial experimental design compares very well to the experimental DIC results both spatially and temporally. Details of the experiment and finite element aquarium models are described, including the boundary conditions, Eulerian and Lagrangian techniques, detonation models, experimental design and test diagnostics.

  16. Applying modeling Results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies which provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  17. Granular Medium Impacted by a Projectile: Experiment and Model

    NASA Astrophysics Data System (ADS)

    Crassous, J.; Valance, A.

    2009-06-01

    We present an experiment and a simple model of the impact of a granular projectile on a granular medium. The experiment consists of impacting a half-space of PVC beads with a single bead. Numerous beads are then ejected around the impact point. The loci of ejection and the velocities of the ejecta were measured. The experimental data were compared with the predictions of a simple discrete model in which energy is transferred from grain to grain in a frozen disordered medium following the laws of binary collisions. This theoretical description is in remarkable agreement with the experimental observations. Besides, the present model provides a clear picture of the mechanism of energy propagation.
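
    The grain-to-grain transfer rule can be sketched in one dimension; the actual model is three-dimensional and acts on a disordered packing, so the chain geometry and parameter values below are invented for illustration:

```python
def head_on_collision(v1, v2, e):
    """Post-collision velocities for two equal-mass spheres in a head-on
    binary collision with restitution coefficient e (e = 1 is elastic)."""
    vc = (v1 + v2) / 2        # centre-of-mass velocity is conserved
    rel = (v1 - v2) * e       # relative velocity is reversed and damped by e
    return vc - rel / 2, vc + rel / 2

def energy_down_chain(v0, e, n):
    """Kinetic energy reaching grain n when a unit-mass grain with speed v0
    strikes a line of n resting grains via successive binary collisions."""
    v = v0
    for _ in range(n):
        _, v = head_on_collision(v, 0.0, e)   # struck grain carries the energy on
    return 0.5 * v * v

print(energy_down_chain(1.0, 1.0, 10))        # 0.5: elastic chain transmits everything
print(energy_down_chain(1.0, 0.9, 10) < 0.5)  # dissipative chain decays geometrically
```

    Each inelastic collision multiplies the transmitted speed by (1 + e)/2, so the energy reaching the n-th grain decays geometrically with distance from the impact point, which is the qualitative picture of energy propagation the model provides.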

  18. Engineering teacher training models and experiences

    NASA Astrophysics Data System (ADS)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.

  19. Foods - fresh vs. frozen or canned

    MedlinePlus

    Frozen foods vs. fresh or canned; Fresh foods vs. frozen or canned; Frozen vegetables versus fresh ... a well-balanced diet. Many people wonder if frozen and canned vegetables are as healthy for you ...

  20. A Fresh Approach

    ERIC Educational Resources Information Center

    Violino, Bob

    2011-01-01

    Facilities and services are a huge drain on community college budgets. They are also vital to the student experience. As funding dries up across the country, many institutions are taking a team approach, working with partner colleges and private service providers to offset costs and generate revenue without sacrificing the services and amenities…

  1. Reaction of seawater with fresh mid-ocean ridge gabbro creates 'atypical' REE pattern and high REE fluid fluxes: Experiments at 425 and 475 °C, 400 and 1000 bar

    NASA Astrophysics Data System (ADS)

    Beermann, O.; Garbe-Schönberg, D.; Holzheid, A. D.

    2013-12-01

    High-temperature MOR hydrothermalism significantly affects ocean chemistry. The Sisters Peak (SP) hydrothermal field at 5°S on the slow-spreading Mid-Atlantic Ridge (MAR) emanates fluids >400°C [1] that have high concentrations of H2, transition metals, and rare earth elements (REE) exhibiting 'atypical' REE patterns characterized by depletions of LREE and HREE relative to MREE and no Eu anomaly [2]. This is in contrast to the 'typical' LREE enrichment and strong positive Eu anomaly known from many MOR vent fluids observed world-wide [e.g., 3]. Besides temperature, the seawater-to-rock ratio (w/r ratio) has significant control on the fluid chemistry [e.g., 4, 5]. To understand how vent fluid REE signatures are generated during water-rock interaction processes, we reacted unaltered gabbro with natural bottom seawater at 425 °C and 400 bar, and at 425 and 475 °C at 1000 bar, at variable w/r (mass) ratios ranging from 0.5-10, using cold seal pressure vessels (CSPV). The run durations varied from 3-72 h. Reacted fluids were analysed for major and trace elements by ICP-OES and ICP-MS. In our experiments, 'atypical' REE fluid patterns similar to those of SP fluids were obtained at high w/r ratios (5 and 10) that might be characteristic for focused fluid-flow along, e.g., detachment faults at slow-spreading MOR [6]. In contrast, more 'typical' REE patterns with elevated LREE and slightly positive Eu anomalies were reproduced at low w/r ratios (0.5-1). Results of numerical simulations imply that strong positive Eu anomalies of fluids and altered gabbro from high temperature MOR hydrothermal systems can be created by intense rock leaching processes at high w/r ratios (5-10). This suggests that hydrothermal circulation through the ocean crust creates 'typical' REE fluid patterns with strong positive Eu anomalies if seawater reacts with gabbroic host rock that has already been leached in REE at high fluid fluxes. Simulations of the temporal chemical evolution of

  2. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex because of the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome this natural complexity. Among them are the single continuum, the double continuum, and the discrete network of conduits coupled with the single continuum. Several mathematical and computational models are available in the literature for each approach. In this study, one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1, and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required by each model can be measured, and the experiment was repeated several times under different conditions. The model validation is carried out by comparing the results of the model simulations with the measured data. This validation allows us to compare the accuracy of each model and its applicability in karst, and to evaluate whether the results of the complex models improve substantially on the simple models, especially since some models require complex parameters that are difficult to measure in the real world.

  3. Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments.

    PubMed

    Aksenov, Yevgeny; Karcher, Michael; Proshutinsky, Andrey; Gerdes, Rüdiger; de Cuevas, Beverly; Golubeva, Elena; Kauker, Frank; Nguyen, An T; Platov, Gennady A; Wadley, Martin; Watanabe, Eiji; Coward, Andrew C; Nurser, A J George

    2016-01-01

    Pacific Water (PW) enters the Arctic Ocean through Bering Strait and brings in heat, fresh water, and nutrients from the northern Bering Sea. The circulation of PW in the central Arctic Ocean is only partially understood due to the lack of observations. In this paper, pathways of PW are investigated using simulations with six state-of-the-art regional and global Ocean General Circulation Models (OGCMs). In the simulations, PW is tracked by a passive tracer, released in Bering Strait. Simulated PW spreads from the Bering Strait region in three major branches. One of them starts in the Barrow Canyon, bringing PW along the continental slope of Alaska into the Canadian Straits and then into Baffin Bay. The second begins in the vicinity of the Herald Canyon and transports PW along the continental slope of the East Siberian Sea into the Transpolar Drift, and then through Fram Strait and the Greenland Sea. The third branch begins near the Herald Shoal and the central Chukchi shelf and brings PW into the Beaufort Gyre. In the models, the wind, acting via Ekman pumping, drives the seasonal and interannual variability of PW in the Canadian Basin of the Arctic Ocean. The wind affects the simulated PW pathways by changing the vertical shear of the relative vorticity of the ocean flow in the Canada Basin.
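
    The tracer-release technique can be sketched with a minimal one-dimensional stand-in: a passive tracer is released at a source cell and advected by the simulated flow. None of the six OGCMs works this way in detail; this is a first-order upwind scheme with invented parameters:

```python
def advect_tracer(c, u, dx, dt, steps):
    """First-order upwind advection of a passive tracer concentration c
    on a periodic 1-D grid with constant velocity u > 0."""
    nu = u * dt / dx              # Courant number, must satisfy 0 < nu <= 1
    assert 0 < nu <= 1
    for _ in range(steps):
        c = [c[i] - nu * (c[i] - c[i - 1]) for i in range(len(c))]
    return c

# Release a tracer pulse in one cell (the "Bering Strait" of this toy grid)
# and let the flow carry it downstream; the scheme conserves total tracer.
c0 = [1.0] + [0.0] * 19
c1 = advect_tracer(c0, u=1.0, dx=1.0, dt=0.5, steps=10)
print(sum(c1))              # 1.0: total tracer is conserved
print(c1.index(max(c1)))    # 5: the pulse centre has moved downstream
```

    Because the tracer is passive, it marks where the water mass goes without feeding back on the flow, which is what lets the models diagnose PW pathways.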

  4. Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments

    PubMed Central

    Karcher, Michael; Proshutinsky, Andrey; Gerdes, Rüdiger; de Cuevas, Beverly; Golubeva, Elena; Kauker, Frank; Nguyen, An T.; Platov, Gennady A.; Wadley, Martin; Watanabe, Eiji; Coward, Andrew C.; Nurser, A. J. George

    2016-01-01

    Pacific Water (PW) enters the Arctic Ocean through Bering Strait and brings in heat, fresh water, and nutrients from the northern Bering Sea. The circulation of PW in the central Arctic Ocean is only partially understood due to the lack of observations. In this paper, pathways of PW are investigated using simulations with six state-of-the-art regional and global Ocean General Circulation Models (OGCMs). In the simulations, PW is tracked by a passive tracer, released in Bering Strait. Simulated PW spreads from the Bering Strait region in three major branches. One of them starts in the Barrow Canyon, bringing PW along the continental slope of Alaska into the Canadian Straits and then into Baffin Bay. The second begins in the vicinity of the Herald Canyon and transports PW along the continental slope of the East Siberian Sea into the Transpolar Drift, and then through Fram Strait and the Greenland Sea. The third branch begins near the Herald Shoal and the central Chukchi shelf and brings PW into the Beaufort Gyre. In the models, the wind, acting via Ekman pumping, drives the seasonal and interannual variability of PW in the Canadian Basin of the Arctic Ocean. The wind affects the simulated PW pathways by changing the vertical shear of the relative vorticity of the ocean flow in the Canada Basin. PMID:27818853

  5. Arctic pathways of Pacific Water: Arctic Ocean Model Intercomparison experiments

    NASA Astrophysics Data System (ADS)

    Aksenov, Yevgeny; Karcher, Michael; Proshutinsky, Andrey; Gerdes, Rüdiger; de Cuevas, Beverly; Golubeva, Elena; Kauker, Frank; Nguyen, An T.; Platov, Gennady A.; Wadley, Martin; Watanabe, Eiji; Coward, Andrew C.; Nurser, A. J. George

    2016-01-01

    Pacific Water (PW) enters the Arctic Ocean through Bering Strait and brings in heat, fresh water, and nutrients from the northern Bering Sea. The circulation of PW in the central Arctic Ocean is only partially understood due to the lack of observations. In this paper, pathways of PW are investigated using simulations with six state-of-the-art regional and global Ocean General Circulation Models (OGCMs). In the simulations, PW is tracked by a passive tracer, released in Bering Strait. Simulated PW spreads from the Bering Strait region in three major branches. One of them starts in the Barrow Canyon, bringing PW along the continental slope of Alaska into the Canadian Straits and then into Baffin Bay. The second begins in the vicinity of the Herald Canyon and transports PW along the continental slope of the East Siberian Sea into the Transpolar Drift, and then through Fram Strait and the Greenland Sea. The third branch begins near the Herald Shoal and the central Chukchi shelf and brings PW into the Beaufort Gyre. In the models, the wind, acting via Ekman pumping, drives the seasonal and interannual variability of PW in the Canadian Basin of the Arctic Ocean. The wind affects the simulated PW pathways by changing the vertical shear of the relative vorticity of the ocean flow in the Canada Basin.

  6. Dynamic modelling and analysis of biochemical networks: mechanism-based models and model-based experiments.

    PubMed

    van Riel, Natal A W

    2006-12-01

    Systems biology applies quantitative, mechanistic modelling to study genetic networks, signal transduction pathways and metabolic networks. Mathematical models of biochemical networks can look very different. An important reason is that the purpose and application of a model are essential for the selection of the best mathematical framework. Fundamental aspects of selecting an appropriate modelling framework and a strategy for model building are discussed. Concepts and methods from system and control theory provide a sound basis for the further development of improved and dedicated computational tools for systems biology. Identification of the network components and rate constants that are most critical to the output behaviour of the system is one of the major problems raised in systems biology. Current approaches and methods of parameter sensitivity analysis and parameter estimation are reviewed. It is shown how these methods can be applied in the design of model-based experiments which iteratively yield models that are decreasingly wrong and increasingly gain predictive power.
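The local parameter sensitivity analysis reviewed here can be estimated with finite differences. A minimal sketch, in which the Michaelis-Menten output function and all parameter values are hypothetical stand-ins for a real network model:

```python
def sensitivity(model, params, i, rel=1e-4):
    # normalized local sensitivity d(ln y)/d(ln p_i), estimated with a
    # central finite difference around the nominal parameter vector
    p_hi = list(params); p_hi[i] *= 1 + rel
    p_lo = list(params); p_lo[i] *= 1 - rel
    dy = model(p_hi) - model(p_lo)
    dp = p_hi[i] - p_lo[i]
    return (dy / dp) * (params[i] / model(params))

# toy output: a Michaelis-Menten rate v = vmax*S/(Km + S) at S = 2
mm = lambda p: p[0] * 2.0 / (p[1] + 2.0)
s_vmax = sensitivity(mm, [1.0, 0.5], 0)   # sensitivity to vmax
s_km = sensitivity(mm, [1.0, 0.5], 1)     # sensitivity to Km
```

Parameters with sensitivities near zero are poorly constrained by the output, which is exactly the information used to rank which rate constants are critical to system behavior.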

  7. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

    Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  8. Numerical Simulation and Cold Modeling experiments on Centrifugal Casting

    NASA Astrophysics Data System (ADS)

    Keerthiprasad, Kestur Sadashivaiah; Murali, Mysore Seetharam; Mukunda, Pudukottah Gopaliengar; Majumdar, Sekhar

    2011-02-01

    In a centrifugal casting process, the fluid flow eventually determines the quality and characteristics of the final product. It is difficult to study the fluid behavior here because of the opaque nature of melt and mold. In the current investigation, numerical simulations of the flow field and visualization experiments on cold models have been carried out for a centrifugal casting system using horizontal molds and fluids of different viscosities to study the effect of different process variables on the flow pattern. The effects of the thickness of the cylindrical fluid annulus formed inside the mold and the effects of fluid viscosity, diameter, and rotational speed of the mold on the hollow fluid cylinder formation process have been investigated. The numerical simulation results are compared with corresponding data obtained from the cold modeling experiments. The influence of rotational speed in a real-life centrifugal casting system has also been studied using an aluminum-silicon alloy. Cylinders of different thicknesses are cast at different rotational speeds, and the flow patterns observed visually in the actual castings are found to be similar to those recorded in the corresponding cold modeling experiments. Reasonable agreement is observed between the results of numerical simulation and the results of cold modeling experiments with different fluids. The visualization study on the hollow cylinders produced in an actual centrifugal casting process also confirms the conclusions arrived at from the cold modeling experiments and numerical simulation in a qualitative sense.
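Once the mold spins fast enough to hold the fluid as a uniform hollow cylinder, the annulus thickness follows from volume conservation alone. A minimal sketch; the mold dimensions and fill volume below are hypothetical, not values from the study:

```python
import math

def annulus_thickness(R, L, volume):
    # a mold of inner radius R and length L spun fast enough to hold the
    # fluid as a uniform annulus: volume = pi * (R^2 - r_inner^2) * L
    r_inner = math.sqrt(R**2 - volume / (math.pi * L))
    return R - r_inner

t = annulus_thickness(R=0.05, L=0.3, volume=2e-4)   # metres
```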

  9. Postharvest treatments of fresh produce

    PubMed Central

    Mahajan, P. V.; Caleb, O. J.; Singh, Z.; Watkins, C. B.; Geyer, M.

    2014-01-01

    Postharvest technologies have allowed horticultural industries to meet the global demands of local and large-scale production and intercontinental distribution of fresh produce that have high nutritional and sensory quality. Harvested products are metabolically active, undergoing ripening and senescence processes that must be controlled to prolong postharvest quality. Inadequate management of these processes can result in major losses in nutritional and quality attributes, outbreaks of foodborne pathogens and financial loss for all players along the supply chain, from growers to consumers. Optimal postharvest treatments for fresh produce seek to slow down physiological processes of senescence and maturation, reduce/inhibit development of physiological disorders and minimize the risk of microbial growth and contamination. In addition to basic postharvest technologies of temperature management, an array of others have been developed including various physical (heat, irradiation and edible coatings), chemical (antimicrobials, antioxidants and anti-browning) and gaseous treatments. This article examines the current status on postharvest treatments of fresh produce and emerging technologies, such as plasma and ozone, that can be used to maintain quality, reduce losses and waste of fresh produce. It also highlights further research needed to increase our understanding of the dynamic response of fresh produce to various postharvest treatments. PMID:24797137

  10. Postharvest treatments of fresh produce.

    PubMed

    Mahajan, P V; Caleb, O J; Singh, Z; Watkins, C B; Geyer, M

    2014-06-13

    Postharvest technologies have allowed horticultural industries to meet the global demands of local and large-scale production and intercontinental distribution of fresh produce that have high nutritional and sensory quality. Harvested products are metabolically active, undergoing ripening and senescence processes that must be controlled to prolong postharvest quality. Inadequate management of these processes can result in major losses in nutritional and quality attributes, outbreaks of foodborne pathogens and financial loss for all players along the supply chain, from growers to consumers. Optimal postharvest treatments for fresh produce seek to slow down physiological processes of senescence and maturation, reduce/inhibit development of physiological disorders and minimize the risk of microbial growth and contamination. In addition to basic postharvest technologies of temperature management, an array of others have been developed including various physical (heat, irradiation and edible coatings), chemical (antimicrobials, antioxidants and anti-browning) and gaseous treatments. This article examines the current status on postharvest treatments of fresh produce and emerging technologies, such as plasma and ozone, that can be used to maintain quality, reduce losses and waste of fresh produce. It also highlights further research needed to increase our understanding of the dynamic response of fresh produce to various postharvest treatments.

  11. Designing Experiments to Discriminate Families of Logic Models

    PubMed Central

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G.; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subjected to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input–output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with minimal number of experiments. The methods are applied on signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration. PMID:26389116
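The core idea, that the informative experiments are the ones on which members of the model family disagree, can be shown on a toy example. The two candidate wirings below are hypothetical, not from the paper's liver-cell networks:

```python
from itertools import product

# two hypothetical candidate wirings learned for the same tiny pathway:
# model A: output = a AND b;  model B: output = (a AND b) OR c
model_a = lambda a, b, c: int(a and b)
model_b = lambda a, b, c: int((a and b) or c)

# input combinations on which the family disagrees are exactly the
# perturbation experiments that discriminate between the two wirings
disagree = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
            if model_a(a, b, c) != model_b(a, b, c)]
```

Running only the experiments in `disagree` (rather than all eight input combinations) suffices to identify which wiring is correct, which is the variability-narrowing effect the paper formalizes with Answer Set Programming.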

  12. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.
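Decision field theory belongs to the family of sequential-sampling models, which can be sketched as noisy evidence accumulation toward a choice threshold. This is a generic single-trial sketch, not the paper's model; all parameter values are hypothetical and not fitted to any game data:

```python
import numpy as np

def decision_trajectory(drift, noise, threshold, dt, rng, max_steps=100000):
    # accumulate noisy evidence until one of two option thresholds is
    # crossed; returns the chosen option and the response time
    x, steps = 0.0, 0
    while abs(x) < threshold and steps < max_steps:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        steps += 1
    return (1 if x > 0 else 0), steps * dt

rng = np.random.default_rng(2)
choice, rt = decision_trajectory(drift=0.5, noise=1.0, threshold=2.0,
                                 dt=0.01, rng=rng)
```

Fitting the drift, noise, and threshold to a player's choices and response times is what lets such models expose decision strategies as they unfold during play.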

  13. Neutral null models for diversity in serial transfer evolution experiments.

    PubMed

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements.
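The serial transfer assay the authors model can be simulated forward in time under neutrality. This is not their coalescent machinery, just an illustrative sketch of the assay itself, starting from a single clone with growth, mutation, and dilution phases (all parameter values hypothetical):

```python
import numpy as np

def serial_transfer(n_transfers, growth, bottleneck, mu, rng):
    # start from a single clone (all-zero allele labels); each transfer:
    # grow by resampling with replacement, mutate, then dilute back down
    pop = np.zeros(bottleneck, dtype=np.int64)
    next_allele = 1
    for _ in range(n_transfers):
        pop = rng.choice(pop, size=bottleneck * growth)        # growth
        muts = rng.random(pop.size) < mu                       # new alleles
        n_new = int(muts.sum())
        pop[muts] = np.arange(next_allele, next_allele + n_new)
        next_allele += n_new
        pop = rng.choice(pop, size=bottleneck, replace=False)  # dilution
    return pop

rng = np.random.default_rng(1)
final = serial_transfer(20, growth=10, bottleneck=500, mu=1e-3, rng=rng)
```

Counting distinct labels in `final` across replicate runs gives a neutral expectation for diversity, the kind of null against which selection is detected.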

  14. Harmonic Oscillator Model for Radin's Markov-Chain Experiments

    NASA Astrophysics Data System (ADS)

    Sheehan, D. P.; Wright, J. H.

    2006-10-01

    The conscious observer stands as a central figure in the measurement problem of quantum mechanics. Recent experiments by Radin involving linear Markov chains driven by random number generators illuminate the role and temporal dynamics of observers interacting with quantum mechanically labile systems. In this paper a Lagrangian interpretation of these experiments indicates that the evolution of Markov chain probabilities can be modeled as damped harmonic oscillators. The results are best interpreted in terms of symmetric equicausal determinism rather than strict retrocausation, as posited by Radin. Based on the present analysis, suggestions are made for more advanced experiments.
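The damped-harmonic-oscillator picture of the evolving chain probabilities can be integrated numerically. The equilibrium probability, damping ratio, and frequency below are illustrative assumptions, not Radin's fitted values:

```python
import numpy as np

def damped_oscillator(p0, v0, omega, zeta, p_eq, dt, steps):
    # semi-implicit Euler integration of
    #   p'' + 2*zeta*omega*p' + omega^2*(p - p_eq) = 0
    p, v = p0, v0
    traj = []
    for _ in range(steps):
        v += dt * (-2.0 * zeta * omega * v - omega**2 * (p - p_eq))
        p += dt * v
        traj.append(p)
    return np.array(traj)

traj = damped_oscillator(p0=0.6, v0=0.0, omega=1.0, zeta=0.2,
                         p_eq=0.5, dt=0.01, steps=5000)
```

For an underdamped choice of zeta the probability overshoots and rings before settling back to its equilibrium value, the qualitative behavior the paper attributes to the Markov-chain data.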

  15. Harmonic Oscillator Model for Radin's Markov-Chain Experiments

    SciTech Connect

    Sheehan, D. P.; Wright, J. H.

    2006-10-16

    The conscious observer stands as a central figure in the measurement problem of quantum mechanics. Recent experiments by Radin involving linear Markov chains driven by random number generators illuminate the role and temporal dynamics of observers interacting with quantum mechanically labile systems. In this paper a Lagrangian interpretation of these experiments indicates that the evolution of Markov chain probabilities can be modeled as damped harmonic oscillators. The results are best interpreted in terms of symmetric equicausal determinism rather than strict retrocausation, as posited by Radin. Based on the present analysis, suggestions are made for more advanced experiments.

  16. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface which in turn increases the pressure within the tank during the 5360 second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  17. Data Assimilation and Model Evaluation Experiments - North Atlantic Basin; Preliminary Experiment Plan.

    DTIC Science & Technology

    1994-12-01

    This preliminary plan describes the approach to implement a comparative environment in which to assess numerical ocean model nowcast/forecast capabilities and data assimilation methods and techniques. Goals are stated which provide direction for the long term, the next five years, and specifically for the next two years. The plan is preliminary and will be allowed to evolve as the experiment proceeds. A brief description by participants of the models and data assimilation methods is included.

  18. Cellular Shape Memory Alloy Structures: Experiments & Modeling (Part 1)

    DTIC Science & Technology

    2012-08-01

    AFOSR Grant #FA9550-08-1-0313, Cellular Shape Memory Alloy Structures: Experiments & Modeling, J. Shaw (UM), 2012. The work combines the benefits of light-weight cellular structures (0.37 g/cc) with the adaptive behavior of Shape Memory Alloys (SMAs).

  19. Modeling a set of heavy oil aqueous pyrolysis experiments

    SciTech Connect

    Thorsness, C.B.; Reynolds, J.G.

    1996-11-01

    Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during the heating of the autoclave to desired reaction temperatures. The models used included a means of estimating nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400°C or more using Fe(III) 2-ethylhexanoate could be reasonably well characterized by a simple first-order rate constant of 1.7×10⁸ exp(−20000/T) s⁻¹. Both gas production and API gravity increase were characterized by this rate constant. Models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibria models. However, a consistently lower-than-measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
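The reported first-order rate expression can be evaluated directly. A minimal sketch, assuming the Arrhenius-style temperature T is in kelvin (as the exp(-20000/T) form implies):

```python
import math

def upgrade_rate_constant(T_kelvin):
    # first-order rate constant from the abstract:
    #   k = 1.7e8 * exp(-20000/T)  [1/s], T in kelvin
    return 1.7e8 * math.exp(-20000.0 / T_kelvin)

k_400C = upgrade_rate_constant(400.0 + 273.15)            # at 400 degrees C
half_life_hours = math.log(2) / k_400C / 3600.0           # t_1/2 = ln(2)/k
```

At the 400°C threshold the abstract mentions, this rate constant corresponds to an upgrading half-life on the order of hours, consistent with autoclave experiment durations.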

  20. Community Climate System Model (CCSM) Experiments and Output Data

    DOE Data Explorer

    The National Center for Atmospheric Research (NCAR) created the first version of the Community Climate Model (CCM) in 1983 as a global atmosphere model. It was improved in 1994 when NCAR, with support from the National Science Foundation (NSF), developed and incorporated a Climate System Model (CSM) that included atmosphere, land surface, ocean, and sea ice. As the capabilities of the model grew, so did interest in its applications and changes in how it would be managed. A workshop in 1996 set the future management structure, marked the beginning of the second phase of the model, a phase that included full participation of the scientific community, and also saw additional financial support, including support from the Department of Energy. In recognition of these changes, the model was renamed to the Community Climate System Model (CCSM). It began to function as a model with the interactions of land, sea, and air fully coupled, providing computer simulations of Earth's past climate, its present climate, and its possible future climate. The CCSM website at http://www2.cesm.ucar.edu/ describes some of the research that has been done since then: A 300-year run has been performed using the CSM, and results from this experiment have appeared in a special issue of the Journal of Climate, 11, June, 1998. A 125-year experiment has been carried out in which carbon dioxide was prescribed to increase at 1% per year from its present concentration to approximately three times its present concentration. More recently, the Climate of the 20th Century experiment was run, with carbon dioxide and other greenhouse gases and sulfate aerosols prescribed to evolve according to our best knowledge from 1870 to the present. Three scenarios for the 21st century were developed: a "business as usual" experiment, in which greenhouse gases are assumed to increase with no economic constraints; an experiment using the Intergovernmental Panel on Climate Change (IPCC) Scenario A1; and a "policy

  1. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false “Fresh,” “freshly frozen,” “fresh frozen,” “frozen... frozen,” “fresh frozen,” “frozen fresh.” The terms defined in this section may be used on the label or in... state and has not been frozen or subjected to any form of thermal processing or any other form...

  2. Mathematical modeling of isotope labeling experiments for metabolic flux analysis.

    PubMed

    Nargund, Shilpa; Sriram, Ganesh

    2014-01-01

    Isotope labeling experiments (ILEs) offer a powerful methodology to perform metabolic flux analysis. However, the task of interpreting data from these experiments to evaluate flux values requires significant mathematical modeling skills. Toward this, this chapter provides background information and examples to enable the reader to (1) model metabolic networks, (2) simulate ILEs, and (3) understand the optimization and statistical methods commonly used for flux evaluation. A compartmentalized model of plant glycolysis and pentose phosphate pathway illustrates the reconstruction of a typical metabolic network, whereas a simpler example network illustrates the underlying metabolite and isotopomer balancing techniques. We also discuss the salient features of commonly used flux estimation software 13CFLUX2, Metran, NMR2Flux+, FiatFlux, and OpenFLUX. Furthermore, we briefly discuss methods to improve flux estimates. A graphical checklist at the end of the chapter provides a reader a quick reference to the mathematical modeling concepts and resources.
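The metabolite-balancing step underlying flux evaluation reduces to a linear least-squares problem. A sketch on a hypothetical three-flux toy network, not one of the cited tools or the chapter's plant network:

```python
import numpy as np

# hypothetical toy network: metabolite B at steady state is fed by flux
# v1 and drained by v2 and v3, so v1 - v2 - v3 = 0; v1 and v3 measured
A = np.array([[1.0, -1.0, -1.0],   # steady-state balance on B
              [1.0,  0.0,  0.0],   # measured uptake: v1 = 10
              [0.0,  0.0,  1.0]])  # measured secretion: v3 = 2.5
b = np.array([0.0, 10.0, 2.5])
v, *_ = np.linalg.lstsq(A, b, rcond=None)   # flux estimate [v1, v2, v3]
```

Real ILE analysis augments this stoichiometric system with isotopomer balances, which make otherwise indistinguishable internal fluxes identifiable.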

  3. Modeling of detachment experiments at DIII-D

    DOE PAGES

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...

    2014-11-26

    Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  4. Modeling of detachment experiments at DIII-D

    SciTech Connect

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; Leonard, A. W.; Lore, J. D.; McLean, A. G.; Watkins, J. G.

    2014-11-26

    Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  5. Freshly brewed continental crust

    NASA Astrophysics Data System (ADS)

    Gazel, E.; Hayes, J. L.; Caddick, M. J.; Madrigal, P.

    2015-12-01

    Earth's crust is the life-sustaining interface between our planet's deep interior and surface. Basaltic crusts similar to Earth's oceanic crust characterize terrestrial planets in the solar system while the continental masses, areas of buoyant, thick silicic crust, are a unique characteristic of Earth. Therefore, understanding the processes responsible for the formation of continents is fundamental to reconstructing the evolution of our planet. We use geochemical and geophysical data to reconstruct the evolution of the Central American Land Bridge (Costa Rica and Panama) over the last 70 Ma. We also include new preliminary data from a key turning point (~12-6 Ma) in the evolution from an oceanic arc depleted in incompatible elements to a juvenile continental mass in order to evaluate current models of continental crust formation. We also discovered that seismic P-waves (body waves) travel through the crust at velocities closer to the ones observed in continental crust worldwide. Based on global statistical analyses of all magmas produced today in oceanic arcs compared to the global average composition of continental crust, we developed a continental index. Our goal was to quantitatively correlate geochemical composition with the average P-wave velocity of arc crust. We suggest that although the formation and evolution of continents may involve many processes, melting enriched oceanic crust within a subduction zone, a process probably more common in the Archean where most continental landmasses formed, can produce the starting material necessary for juvenile continental crust formation.

  6. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effect of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  7. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  8. Demonstrating the Experimenting Society Model with Classwide Behavior Management Interventions.

    ERIC Educational Resources Information Center

    Johnson, Taya C.; Stoner, Gary; Green, Susan K.

    1996-01-01

    Demonstrates the experimenting society model using data-based decision making and collaborative consultation to evaluate behavior-management intervention strategies in 25 seventh graders. Each intervention results in improved behavior, but active teaching of classroom rules was determined to be most effective. (Author/JDM)

  9. ASTP fluid transfer measurement experiment. [using breadboard model

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.

    1974-01-01

    The ASTP fluid transfer measurement experiment flight system design concept was verified by the demonstration and test of a breadboard model. In addition to the breadboard effort, a conceptual design of the corresponding flight system was generated and a full scale mockup fabricated. A preliminary CEI specification for the flight system was also prepared.

  10. A Paired Compositions Model for Round-Robin Experiments

    ERIC Educational Resources Information Center

    Gleason, John R.; Halperin, Silas

    1975-01-01

    Investigation of the effects of a series of treatment conditions upon some social behaviors may require observation of subjects mutually paired, in round-robin fashion. Data arising from such experiments are difficult to analyze, partly because they do not fit neatly into standard designs. A model is presented. (Author/BJG)

  11. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
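A minimal Latin Hypercube sample of the kind used for space-filling designs can be generated in a few lines. This is the basic construction only, without the orthogonal-array skeleton or symmetry extensions the presentation mentions:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # n points in [0,1)^d with exactly one point per 1/n stratum in
    # every dimension; stratum order is randomized per dimension
    u = rng.random((n, d))                              # jitter in strata
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + u) / n

rng = np.random.default_rng(0)
X = latin_hypercube(8, 2, rng)   # 8 model runs over 2 inputs
```

Each one-dimensional projection of `X` covers all strata, which is what makes such designs efficient for building emulators from a limited budget of model runs.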

  12. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    NASA Astrophysics Data System (ADS)

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James P.; Long, Philip E.

    2009-10-01

    A reaction network integrating abiotic and microbially mediated reactions has been developed to simulate biostimulation field experiments at a former Uranium Mill Tailings Remedial Action (UMTRA) site in Rifle, Colorado. The reaction network was calibrated using data from the 2002 field experiment, after which it was applied without additional calibration to field experiments performed in 2003 and 2007. The robustness of the model specification is significant in that (1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and (2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four terminal electron-accepting processes (TEAPs), two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the UMTRA site that demonstrated that the bulk (˜90%) of initial Fe(III) bioreduction is associated with phyllosilicate rather than oxide forms of iron. The uranium reaction network includes a U(VI) surface complexation model based on laboratory studies with Rifle site sediments and aqueous complexation reactions that include ternary complexes (e.g., calcium-uranyl-carbonate). The bioreduced U(IV), Fe(II), and sulfide components produced during the experiments are strongly associated with the solid phases and may play an important role in long-term uranium immobilization.

  13. Evaluation of a Neuromechanical Walking Control Model Using Disturbance Experiments

    PubMed Central

    Song, Seungmoon; Geyer, Hartmut

    2017-01-01

Neuromechanical simulations have been used to study the spinal control of human locomotion, which involves complex mechanical dynamics. So far, most neuromechanical simulation studies have focused on demonstrating the capability of a proposed control model in generating normal walking. As many of these models with competing control hypotheses can generate human-like normal walking behaviors, a more in-depth evaluation is required. Here, we conduct such an in-depth evaluation of a spinal-reflex-based control model using five representative gait disturbances, ranging from electrical stimulation to mechanical perturbation at individual leg joints and at the whole body. The immediate changes in muscle activations of the model are compared to those of humans across different gait phases and disturbance magnitudes. Remarkably similar response trends for the majority of investigated muscles and experimental conditions reinforce the plausibility of the reflex circuits of the model. However, the model's responses lack amplitude for two experiments with whole-body disturbances, suggesting that in these cases the proposed reflex circuits need to be amplified by additional control structures such as location-specific cutaneous reflexes. A model that captures these selective amplifications would be able to explain both steady and reactive spinal control of human locomotion. Neuromechanical simulations that investigate hypothesized control models are complementary to gait experiments in better understanding the control of human locomotion. PMID:28381996

  14. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real-time computation as might be applied in model-based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.
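    The model is stated to be mathematically equivalent to Rayleigh's method. As a generic illustration of that method (not the paper's formulation), the sketch below estimates a uniform cantilever link's first bending frequency from the Rayleigh quotient with the assumed shape w(x) = x², which satisfies the clamped-end conditions w(0) = w'(0) = 0.

```python
def rayleigh_cantilever_freq(EI, rho_A, L, n=1000):
    """Rayleigh-quotient estimate of the first bending frequency:

        omega^2 = int EI*(w'')^2 dx / int rho_A*w^2 dx

    with assumed shape w(x) = x**2 (so w'' = 2), integrated numerically
    by the midpoint rule.
    """
    dx = L / n
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        num += EI * 2.0 ** 2 * dx       # curvature term, w'' = 2
        den += rho_A * (x ** 2) ** 2 * dx
    return (num / den) ** 0.5

w1 = rayleigh_cantilever_freq(EI=1.0, rho_A=1.0, L=1.0)
```

    For these unit parameters the estimate is sqrt(20) ≈ 4.47; the exact coefficient is 3.516·sqrt(EI/(ρA·L⁴)), and the crude x² shape overestimates it, as Rayleigh's method always bounds the true fundamental frequency from above.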

  15. Scattering Models and Basic Experiments in the Microwave Regime

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Blanchard, A. J. (Principal Investigator)

    1985-01-01

The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which account for near-field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) to compare theoretical models with measurements and numerical simulations.

  16. Physical mechanism of the Schwarzschild effect in film dosimetry—theoretical model and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Djouguela, A.; Kollhoff, R.; Rühmann, A.; Willborn, K. C.; Harder, D.; Poppe, B.

    2006-09-01

    In consideration of the importance of film dosimetry for the dosimetric verification of IMRT treatment plans, the Schwarzschild effect or failure of the reciprocity law, i.e. the reduction of the net optical density under 'protraction' or 'fractionation' conditions at constant dose, has been experimentally studied for Kodak XOMAT-V (Martens et al 2002 Phys. Med. Biol. 47 2221-34) and EDR 2 dosimetry films (Djouguela et al 2005 Phys. Med. Biol. 50 N317-N321). It is known that this effect results from the competition between two solid-state physics reactions involved in the latent-image formation of the AgBr crystals, the aggregation of two Ag atoms freshly formed from Ag+ ions near radiation-induced occupied electron traps and the spontaneous decomposition of the Ag atoms. In this paper, we are developing a mathematical model of this mechanism which shows that the interplay of the mean lifetime τ of the Ag atoms with the time pattern of the irradiation determines the magnitude of the observed effects of the temporal dose distribution on the net optical density. By comparing this theory with our previous protraction experiments and recent fractionation experiments in which the duration of the pause between fractions was varied, a value of the time constant τ of roughly 10 s at room temperature has been determined for EDR 2. The numerical magnitude of the Schwarzschild effect in dosimetry films under the conditions generally met in radiotherapy amounts to only a few per cent of the net optical density (net OD), so that it can frequently be neglected from the viewpoint of clinical applications. But knowledge of the solid-state physical mechanism and a description in terms of a mathematical model involving a typical time constant of about 10 s are now available to estimate the magnitude of the effect should the necessity arise, i.e. in cases of large fluctuations of the temporal pattern of film exposure.
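    The competition described above can be caricatured numerically. The sketch below is a toy Euler integration with hypothetical rate constants, not the authors' actual equations: single Ag atoms are created in proportion to the dose rate, decay with mean lifetime τ, and are stabilized by pairing with a freshly created second atom.

```python
def latent_image_yield(dose_rate, t_end, tau=10.0, dt=0.01):
    """Toy model of latent-image formation.

    n       : population of single (unstable) Ag atoms
    yield_  : accumulated stable two-atom centers
    Creation at dose_rate(t), spontaneous decay with lifetime tau,
    stabilization at a rate proportional to n * dose_rate(t).
    """
    n, yield_, t = 0.0, 0.0, 0.0
    while t < t_end:
        g = dose_rate(t)
        pair = n * g * dt            # pairing consumes one single atom
        n += g * dt - (n / tau) * dt - pair
        yield_ += pair
        t += dt
    return yield_

# Same total dose delivered continuously vs. in two fractions
# separated by a pause much longer than tau:
cont = latent_image_yield(lambda t: 1.0 if t < 20 else 0.0, 100.0)
frac = latent_image_yield(lambda t: 1.0 if t < 10 or 60 <= t < 70 else 0.0, 100.0)
```

    With a pause much longer than τ the single atoms decay between fractions, so the fractionated exposure yields fewer stable centers at the same dose, which is the qualitative content of the reciprocity-law failure.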

  17. Electromagnetic sunscreen model: design of experiments on particle specifications.

    PubMed

    Lécureux, Marie; Deumié, Carole; Enoch, Stefan; Sergent, Michelle

    2015-10-01

We report a numerical study on sunscreen design and optimization. Thanks to the combined use of electromagnetic modeling and design of experiments, we are able to screen the most relevant parameters of mineral filters and to optimize sunscreens. Several electromagnetic modeling methods are used depending on the type of particles, density of particles, etc. Both the sun protection factor (SPF) and the UVB/UVA ratio are considered. We show that the design-of-experiments model should include interactions between materials and other parameters. We conclude that the material of the particles is a key parameter for both the SPF and the UVB/UVA ratio. Among the materials considered, none is optimal for both. The SPF is also highly dependent on the size of the particles.

  18. Analysis of a DNA simulation model through hairpin melting experiments

    PubMed Central

    Linak, Margaret C.; Dorfman, Kevin D.

    2010-01-01

    We compare the predictions of a two-bead Brownian dynamics simulation model to melting experiments of DNA hairpins with complementary AT or GC stems and noninteracting loops in buffer A. This system emphasizes the role of stacking and hydrogen bonding energies, which are characteristics of DNA, rather than backbone bending, stiffness, and excluded volume interactions, which are generic characteristics of semiflexible polymers. By comparing high throughput data on the open-close transition of various DNA hairpins to the corresponding simulation data, we (1) establish a suitable metric to compare the simulations to experiments, (2) find a conversion between the simulation and experimental temperatures, and (3) point out several limitations of the model, including the lack of G-quartets and cross stacking effects. Our approach and experimental data can be used to validate similar coarse-grained simulation models. PMID:20886965

  19. Analysis of the Second Model Parameter Estimation Experiment Workshop Results

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.

    2002-05-01

The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for a priori parameter estimation for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goal/objectives and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for the 12 basins selected for the Tucson MOPEX Workshop.

  20. Experiences & Tools from Modeling Instruction Applied to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Cervenec, J.; Landis, C. E.

    2012-12-01

The Framework for K-12 Science Education calls for stronger curricular connections within the sciences, greater depth in understanding, and tasks higher on Bloom's Taxonomy. Understanding atmospheric sciences draws on core knowledge traditionally taught in physics, chemistry, and in some cases, biology. If this core knowledge is not conceptually sound, well retained, and transferable to new settings, understanding the causes and consequences of climate change becomes, for a student, a task of memorizing seemingly disparate facts. Fortunately, experiences and conceptual tools have been developed and refined in the nationwide network of Physics Modeling and Chemistry Modeling teachers to build the necessary understanding of conservation of mass, conservation of energy, the particulate nature of matter, kinetic molecular theory, and the particle model of light. Context-rich experiences are first introduced for students to construct an understanding of these principles, and then conceptual tools are deployed for students to resolve misconceptions and deepen their understanding. Using these experiences and conceptual tools takes an investment of instructional time, teacher training, and in some cases, a re-envisioning of the format of a science classroom. There are few financial barriers to implementation, and students gain a greater understanding of the nature of science by going through successive cycles of investigation and refinement of their thinking. This presentation shows how these experiences and tools could be used in an Earth Science course to support students in developing a conceptually rich understanding of the atmosphere and the connections within it.

  1. Design and modeling of small scale multiple fracturing experiments

    SciTech Connect

    Cuderman, J F

    1981-12-01

Recent experiments at the Nevada Test Site (NTS) have demonstrated the existence of three distinct fracture regimes. Depending on the pressure rise time in a borehole, one can obtain hydraulic, multiple, or explosive fracturing behavior. The use of propellants rather than explosives in tamped boreholes permits tailoring of the pressure rise time over a wide range, since propellants having a wide range of burn rates are available. This technique of using the combustion gases from a full bore propellant charge to produce controlled borehole pressurization is termed High Energy Gas Fracturing (HEGF). Several series of HEGF experiments, in 0.15 m and 0.2 m diameter boreholes at 12 m depths, have been completed in a tunnel complex at NTS where mineback permitted direct observation of the fracturing obtained. Because such large experiments are costly and time consuming, smaller scale experiments are desirable, provided results from small experiments can be used to predict fracture behavior in larger boreholes. In order to design small scale gas fracture experiments, the available data from previous HEGF experiments were carefully reviewed, analytical elastic wave modeling was initiated, and semi-empirical modeling was conducted which combined predictions for statically pressurized boreholes with experimental data. The results of these efforts include (1) the definition of what constitutes small scale experiments for emplacement in a tunnel complex at the Nevada Test Site, (2) prediction of average crack radius, in ash fall tuff, as a function of borehole size and energy input per unit length, (3) definition of multiple-hydraulic and multiple-explosive fracture boundaries as a function of borehole size and surface wave velocity, (4) semi-empirical criteria for estimating stress and acceleration, and (5) a proposal that multiple fracture orientations may be governed by in situ stresses.

  2. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  3. Computer simulation models are implementable as replacements for animal experiments.

    PubMed

    Badyal, Dinesh K; Modgill, Vikas; Kaur, Jasleen

    2009-04-01

It has become increasingly difficult to perform animal experiments, because of issues related to the procurement of animals and strict regulations and ethical issues related to their use. As a result, it is felt that the teaching of pharmacology should be more clinically oriented and that unnecessary animal experimentation should be avoided. Although a number of computer simulation models (CSMs) are available, they are not being widely used. Interactive demonstrations were conducted to encourage the departmental faculty to use CSMs. Four different animal experiments were selected that dealt with the actions of autonomic drugs. The students observed demonstrations of animal experiments involving conventional methods and the use of CSMs. This was followed by hands-on experience of the same experiments in small groups using CSMs, instead of hands-on experience with the animal procedures. Test scores and feedback showed that there was a better understanding of the mechanisms of action of the drugs, gained in a shorter time. The majority of the students rated the teaching programme as good to excellent. CSMs can be used repeatedly and independently by students, which avoids unnecessary experimentation and spares animals pain and trauma. The CSM programme can be implemented in existing pharmacology undergraduate teaching schedules with basic infrastructure support, and is readily adaptable for use by other institutes.

  4. Analysis of NIF experiments with the minimal energy implosion model

    SciTech Connect

    Cheng, B. Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Batha, S. H.; Cerjan, C. J.

    2015-08-15

We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces, which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules' performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.

  5. Analysis of NIF experiments with the minimal energy implosion model

    NASA Astrophysics Data System (ADS)

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Cerjan, C. J.; Batha, S. H.

    2015-08-01

We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces, which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules' performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.

  6. Development of a Fresh Osteochondral Allograft Program Outside North America

    PubMed Central

    Tírico, Luís Eduardo Passarelli; Demange, Marco Kawamura; Santos, Luiz Augusto Ubirajara; de Rezende, Márcia Uchoa; Helito, Camilo Partezani; Gobbi, Riccardo Gomes; Pécora, José Ricardo; Croci, Alberto Tesconi; Bugbee, William Dick

    2015-01-01

Objective To standardize and to develop a fresh osteochondral allograft protocol of procurement, processing and surgical utilization in Brazil. This study describes the steps recommended to make fresh osteochondral allografts a viable treatment option in a country without previous fresh allograft availability. Design The process involves regulatory process modification, developing and establishing procurement, and processing and surgical protocols. Results Legislation: Fresh osteochondral allografts were not feasible in Brazil until 2009 because the law prohibited preservation of fresh grafts at tissue banks. We approved an amendment that made it legal to preserve fresh grafts for 30 days from 2°C to 6°C in tissue banks. Procurement: We changed the protocol of procurement to decrease tissue contamination. All tissues were procured in an operating room. Processing: Processing of the grafts took place within 12 hours of tissue recovery. A serum-free culture medium with antibiotics was developed to store the grafts. Surgeries: We have performed 8 fresh osteochondral allografts on 8 knees, obtaining grafts from 5 donors. Mean preoperative International Knee Documentation Committee (IKDC) score was 31.99 ± 13.4, improving to 81.26 ± 14.7 at an average of 24 months’ follow-up. Preoperative Knee Injury and Osteoarthritis Outcome Score (KOOS) was 46.8 ± 20.9 and rose to 85.24 ± 13.9 after 24 months. Mean preoperative Merle D’Aubigne-Postel score was 8.75 ± 2.25, rising to 16.1 ± 2.59 at 24 months’ follow-up. Conclusion To our knowledge, this is the first report of fresh osteochondral allograft transplantation in South America. We believe that this experience may be of value for physicians in countries that are trying to establish an osteochondral allograft transplant program. PMID:27375837

  7. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides

  8. Theoretical and experimental studies on low-temperature adsorption drying of fresh ginger

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxi; Xu, Wei; Ding, Jing; Zhao, Yi

    2006-03-01

The working principle of low-temperature adsorption drying and the advantages of its application for drying biological materials were introduced in this paper. By using fresh ginger as the drying material, the effects of temperature and relative humidity on its drying characteristics were examined. The results show that the drying rate increases as the temperature increases or the humidity decreases. The drying time to equilibrium is almost the same under different humidity conditions, but a lower equilibrium moisture content can be reached under low humidity. The shrinkage characteristics of fresh ginger were also studied. The change of its surface appearance during the drying process was characterized using Charge-Coupled Device (CCD) imaging and the Environmental Scanning Electron Microscopy (ESEM) technique. A mathematical model of drying dynamics was set up according to the experiments.
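    The abstract does not state the form of the drying-dynamics model; a widely used thin-layer form is the Page model, sketched below with purely illustrative constants (not the authors' fitted parameters).

```python
import math

def moisture_ratio_page(t_min, k, n):
    """Page thin-layer drying model: MR = exp(-k * t^n),
    where MR = (M - Me) / (M0 - Me) is the dimensionless moisture ratio."""
    return math.exp(-k * t_min ** n)

def moisture_content(t_min, m0, me, k, n):
    """Dry-basis moisture content at time t (minutes) from the Page model.

    m0 : initial moisture content, me : equilibrium moisture content.
    """
    return me + (m0 - me) * moisture_ratio_page(t_min, k, n)

# Hypothetical parameters for illustration only: a higher drying
# temperature or lower humidity would correspond to a larger k and
# a lower me, reproducing the trends reported in the abstract.
m = [moisture_content(t, m0=6.0, me=0.15, k=0.01, n=1.2) for t in (0, 60, 240)]
```

    The curve decays monotonically from m0 toward the equilibrium value me, which is the role the humidity-dependent equilibrium moisture content plays in the experiments.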

  9. Final Report: "Collaborative Project. Understanding the Chemical Processes That Affect Growth Rates of Freshly Nucleated Particles"

    SciTech Connect

    Smith, James N.; McMurry, Peter H.

    2015-11-12

This final technical report describes our research activities that have, as the ultimate goal, the development of a model that explains growth rates of freshly nucleated particles. The research activities, which combine field observations with laboratory experiments, explore the relationship between concentrations of gas-phase species that contribute to growth and the rates at which those species are taken up. We also describe measurements of the chemical composition of freshly nucleated particles in a variety of locales, as well as properties (especially hygroscopicity) that influence their effects on climate. Our measurements include a self-organized, DOE-ARM funded project at the Southern Great Plains site, the New Particle Formation Study (NPFS), which took place during spring 2013. NPFS data are available to the research community on the ARM data archive, providing a unique suite of observations of trace gases and aerosols that are associated with the formation and growth of atmospheric aerosol particles.

  10. Modelling the effect of shear strength on isentropic compression experiments

    NASA Astrophysics Data System (ADS)

    Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary

    2017-01-01

Isentropic compression experiments (ICE) are a way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre thick metal samples are subjected to pressures on the order of 10-10² GPa, while the yield strength of the material can be as low as 10⁻² GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However, making this approximation belies the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. Numerical solutions of the governing equations will then be presented for problems relevant to ICEs in order to investigate the effects of shear strength compared with a model based purely on hydrodynamics.
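    The role of a small yield stress on top of a dominant hydrodynamic response can be sketched with a toy one-dimensional elastic-perfectly-plastic stress update. The moduli below are hypothetical and the update is a minimal caricature, not the authors' model.

```python
def axial_stress_history(strain_steps, K=50.0, G=30.0, Y=0.02):
    """Toy uniaxial-strain stress update (all quantities in GPa).

    The axial stress is split into a volumetric (hydrodynamic) part and
    an elastic trial deviatoric part that is capped at the yield surface
    (|axial deviator| <= 2*Y/3, a von Mises-like criterion).  Setting
    Y = 0 recovers the purely hydrodynamic model.
    """
    p, s, out = 0.0, 0.0, []
    for de in strain_steps:
        p += K * de                       # pressure-like bulk response
        s += (4.0 / 3.0) * G * de         # elastic trial deviator
        s = max(-2.0 * Y / 3.0, min(2.0 * Y / 3.0, s))  # return to yield
        out.append(p + s)                 # total axial stress
    return out

steps = [1.0e-3] * 50                     # monotonic ramp to 5% strain
with_strength = axial_stress_history(steps, Y=0.02)
hydro = axial_stress_history(steps, Y=0.0)
```

    For this ramp, shear strength shifts the final stress by exactly 2Y/3 ≈ 0.013 GPa on top of a 2.5 GPa hydrodynamic response, illustrating why the correction is small but measurable.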

  11. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

This paper presents a framework for calibrating computational models using data from several, possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
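    A one-dimensional analogue of the IPM calculation can make the "minimal spread containing all observations" idea concrete. This is illustrative code, not the authors' formulation: for a linear center line y(x) = a·x + b with half-width w, minimizing w subject to containment is a minimax fit, and since the spread is a convex piecewise-linear function of the slope, the optimal slope lies among slopes defined by pairs of data points.

```python
def interval_predictor_line(xs, ys):
    """Minimal-spread interval predictor with a linear center line:
    y(x) in [a*x + b - w, a*x + b + w], with the smallest w that still
    contains every observation (a 1-D analogue of an IPM)."""
    def spread(a):
        # For fixed slope a, the best intercept centers the residuals.
        r = [y - a * x for x, y in zip(xs, ys)]
        return (max(r) - min(r)) / 2.0, (max(r) + min(r)) / 2.0
    # The optimum of the convex, piecewise-linear spread sits at a kink,
    # i.e. at a slope where two residuals tie: a pairwise slope.
    candidates = {0.0}
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if xs[i] != xs[j]:
                candidates.add((ys[i] - ys[j]) / (xs[i] - xs[j]))
    best = min(candidates, key=lambda a: spread(a)[0])
    w, b = spread(best)
    return best, b, w

a, b, w = interval_predictor_line([0, 1, 2, 3], [0.1, 0.9, 2.1, 2.9])
```

    Every observation lies inside the resulting band [a·x + b − w, a·x + b + w], and no narrower band with a linear center line contains them all.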

  12. Fracture Mechanics Modelling of an In Situ Concrete Spalling Experiment

    NASA Astrophysics Data System (ADS)

    Siren, Topias; Uotinen, Lauri; Rinne, Mikael; Shen, Baotang

    2015-07-01

During the operation of nuclear waste disposal facilities, some sprayed concrete reinforced underground spaces will be in use for approximately 100 years. During this time of use, the local stress regime will be altered by the radioactive decay heat. The change in the stress state will impose high demands on sprayed concrete, as it may suffer stress damage or lose its adhesion to the rock surface. It is also unclear what kind of support pressure the sprayed concrete layer will apply to the rock. To investigate this, an in situ experiment is planned in the ONKALO underground rock characterization facility at Olkiluoto, Finland. A vertical experimental hole will be concreted, and the surrounding rock mass will be instrumented with heat sources, in order to simulate an increase in the surrounding stress field. The experiment is instrumented with an acoustic emission system for the observation of rock failure and temperature, as well as strain gauges to observe the thermo-mechanical interactive behaviour of the concrete and rock at several levels, in both rock and concrete. A thermo-mechanical fracture mechanics study is necessary for the prediction of the damage before the experiment, in order to plan the experiment and instrumentation and to generate a proper prediction/outcome study, given the special nature of the in situ experiment. The prediction of acoustic emission patterns is made with Fracod 2D, and the model is later compared to the observed acoustic emissions. The fracture mechanics model will be compared to a COMSOL Multiphysics 3D model to study the geometrical effects along the hole axis.

  13. Modeling Contaminants in AP-MS/MS Experiments

    PubMed Central

    Lavallée-Adam, Mathieu; Cloutier, Philippe; Coulombe, Benoit; Blanchette, Mathieu

    2015-01-01

    Identification of protein–protein interactions (PPI) by affinity purification (AP) coupled with tandem mass spectrometry (AP-MS/MS) produces large data sets with high rates of false positives. This is in part because of contamination at the AP level (due to gel contamination, nonspecific binding to the TAP columns in the context of tandem affinity purification, insufficient purification, etc.). In this paper, we introduce a Bayesian approach to identify false-positive PPIs involving contaminants in AP-MS/MS experiments. Specifically, we propose a confidence assessment algorithm (called Decontaminator) that builds a model of contaminants using a small number of representative control experiments. It then uses this model to determine whether the Mascot score of a putative prey is significantly larger than what was observed in control experiments and assigns it a p-value and a false discovery rate. We show that our method identifies contaminants better than previously used approaches and results in a set of PPIs with a larger overlap with databases of known PPIs. Our approach will thus allow improved accuracy in PPI identification while reducing the number of control experiments required. PMID:21117706
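    The score-calibration step can be illustrated with a generic empirical stand-in for the paper's Bayesian Decontaminator: compare each putative prey's score against the distribution of scores seen in control runs, then convert the resulting p-values into false discovery rates with the standard Benjamini-Hochberg adjustment. All scores below are synthetic.

```python
import numpy as np

def empirical_pvalues(case_scores, control_scores):
    """P(control >= score): fraction of control-run scores at least as large,
    with a +1 correction so no p-value is exactly zero."""
    control = np.sort(np.asarray(control_scores))
    n = control.size
    # number of controls >= s, via searchsorted on the sorted array
    ge = n - np.searchsorted(control, case_scores, side="left")
    return (ge + 1) / (n + 1)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rates)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(ranked, 0.0, 1.0)
    return out

rng = np.random.default_rng(1)
controls = rng.normal(30, 5, 200)                 # scores of known contaminants
preys = np.concatenate([rng.normal(30, 5, 20),    # contaminant-like preys
                        rng.normal(70, 5, 5)])    # genuinely high-scoring preys
p = empirical_pvalues(preys, controls)
fdr = benjamini_hochberg(p)
confident = fdr < 0.05     # preys whose scores exceed the contaminant model
```

    Preys whose scores sit inside the control distribution receive large p-values and are flagged as likely contaminants; only scores well beyond the control range survive the FDR cut.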

  14. Chemical and Flowfield Modeling for Enhanced Analysis of Contamination Experiments

    NASA Technical Reports Server (NTRS)

    Braunstein, Matthew; Finchum, Andy (Technical Monitor)

    2001-01-01

    This paper describes the application of a new Direct Simulation Monte Carlo (DSMC) code, the Molecular Beam Simulator (MBS), which is designed to analyze laboratory-scale molecular beam-surface (and crossed-beam) experiments. The MBS is primarily intended to model experiments associated with spacecraft contamination effects, but it can also be used to simulate a variety of surface chemistry and reactive flow measurements. The MBS code is fully three-dimensional, includes a wide range of chemical processes, and can model one or multiple pulsed (non-steady) sources. As an example application of the MBS code, a fast, pulsed, oxygen atom-surface experiment which examines the chemistry behind the erosion of graphite by oxygen atoms is analyzed. Unsteady DSMC simulations show that experimental observations of excited molecular states after the pulse has hit the surface are consistent with two distinct chemical mechanisms: a direct one, where the excited molecules are formed on the surface, and a two-step mechanism, where ground-state molecules formed on the surface are collisionally excited, after they leave the surface, by trailing oxygen atoms in the pulse. Further DSMC calculations suggest experiments which can distinguish between these mechanisms.

  15. Integrated modeling of LHCD experiment on Alcator C-Mod

    SciTech Connect

    Shiraiwa, S.; Bonoli, P.; Parker, R.; Wallace, G.

    2014-02-12

    Recent progress in integrating the latest LHCD model, based on ray tracing, into the Integrated Plasma Simulator (IPS) is reported. IPS, a Python-based framework for time-dependent tokamak simulation, was recently expanded to incorporate LHCD simulation using GENRAY/CQL3D (a ray-tracing/3D Fokker-Planck package). Using GENRAY/CQL3D in the IPS framework, it becomes possible to include parasitic LHCD power loss near the plasma edge, which was found to be important in experiments, particularly at high density, as expected on reactors. Moreover, it allows for evolving the velocity distribution function in 4D (v∥, v⊥, r/a, t) space self-consistently. In order to validate the code, IPS is applied to LHCD experiments on Alcator C-Mod. In this paper, an LHCD experiment performed at a density of n_e ≈ 0.5×10^20 m^-3, where good LHCD efficiency and the development of an internal transport barrier (ITB) were reported, is modelled in predictive mode and the result is compared with experiment.

  16. Danish heathland manipulation experiment data in Model-Data-Fusion

    NASA Astrophysics Data System (ADS)

    Thum, Tea; Peylin, Philippe; Ibrom, Andreas; Van Der Linden, Leon; Beier, Claus; Bacour, Cédric; Santaren, Diego; Ciais, Philippe

    2013-04-01

    In ecosystem manipulation experiments (EMEs) the ecosystem is artificially exposed to different environmental conditions that aim to simulate circumstances in a future climate. At the Danish EME site Brandbjerg, the responses of a heathland to drought, warming and increased atmospheric CO2 concentration are studied. The warming manipulation is realized by passive nighttime warming. The measurements include control plots as well as replicates for each of the three treatments, separately and in combination. The Brandbjerg heathland ecosystem is dominated by heather and wavy hair-grass. These experiments provide excellent data for the validation and development of ecosystem models. In this work we used the generic vegetation model ORCHIDEE with a Model-Data-Fusion (MDF) approach. ORCHIDEE is a process-based model that describes the exchanges of carbon, water and energy between the atmosphere and the vegetation. It can be run at different spatial scales, from global to site level. Different vegetation types are described in ORCHIDEE as plant functional types. In MDF we use observations from the site to optimize the model parameters. This enables us to assess the modelling errors and the performance of the model for the different manipulation treatments, and will inform us whether the different processes are adequately modelled or whether the model is missing some important processes. We used a genetic algorithm in the MDF. The data available from the site included measurements of aboveground biomass, heterotrophic soil respiration and total ecosystem respiration from the years 2006-2008. The biomass was measured six times during this period. The respiration measurements were made with manual chambers. For the soil respiration we used results from an empirical model that has been developed for the site, which gave us more data for the MDF. Before the MDF we performed a sensitivity analysis of the model parameters to the different data streams. Fifteen most influential

  17. Explanatory Models and Illness Experience of People Living with HIV

    PubMed Central

    2016-01-01

    Research into explanatory models of disease and illness typically explores people’s conceptual understanding, and emphasizes differences between patient and provider models. However, the explanatory models framework of etiology, time and mode of onset of symptoms, pathophysiology, course of sickness, and treatment is built on categories characteristic of biomedical understanding. It is unclear how well these map onto people’s lived experience of illness, and to the extent they do, how they translate. Scholars have previously studied the experience of people living with HIV through the lenses of stigma and identity theory. Here, through in-depth qualitative interviews with 32 people living with HIV in the northeast United States, we explored the experience and meanings of living with HIV more broadly using the explanatory models framework. We found that identity reformation is a major challenge for most people following the HIV diagnosis, and can be understood as a central component of the concept of course of illness. Salient etiological explanations are not biological, but rather social, such as betrayal, or living in a specific cultural milieu, and often self-evaluative. Given that symptoms can now largely be avoided through adherence to treatment, they are most frequently described in terms of observation of others who have not been adherent, or the resolution of symptoms following treatment. The category of pathophysiology is not ordinarily very relevant to the illness experience, as few respondents have any understanding of the mechanism of pathogenesis in HIV, nor much interest in it. Treatment has various personal meanings, both positive and negative, often profound. For people to engage successfully in treatment and live successfully with HIV, mechanistic explanation is of little significance. Rather, positive psychological integration of health promoting behaviors is of central importance. PMID:26971285

  18. Numerical modeling of injection experiments at The Geysers

    SciTech Connect

    Pruess, Karsten; Enedy, Steve

    1993-01-28

    Data from injection experiments in the southeast Geysers are presented that show strong interference (both negative and positive) with a neighboring production well. Conceptual and numerical models are developed that explain the negative interference (decline of production rate) in terms of heat transfer limitations and water-vapor relative permeability effects. Recovery and overrecovery following injection shut-in are attributed to boiling of injected fluid, with heat of vaporization provided by the reservoir rocks.

  19. Numerical modeling of injection experiments at The Geysers

    SciTech Connect

    Pruess, K.; Enedy, S.

    1993-01-01

    Data from injection experiments in the southeast Geysers are presented that show strong interference (both negative and positive) with a neighboring production well. Conceptual and numerical models are developed that explain the negative interference (decline of production rate) in terms of heat transfer limitations and water-vapor relative permeability effects. Recovery and over-recovery following injection shut-in are attributed to boiling of injected fluid, with heat of vaporization provided by the reservoir rocks.

  20. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    PubMed Central

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial step of decision making in real-world agent interaction, familiarity judgment drives the intelligent processes that follow. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties of recognition memory, they have focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows high-order relationships between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  1. Early experiences building a software quality prediction model

    NASA Technical Reports Server (NTRS)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.
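    The flavor of such a multivariate model can be sketched as an ordinary least-squares fit of error density against design metrics, reporting the fraction of variation explained. All data, coefficients and metric names below are synthetic stand-ins, not the study's actual measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 21   # one row per Ada subsystem, as in the study (values here are synthetic)

# Hypothetical design metrics per subsystem: coupling between compilation
# units, visibility, fraction of reused code, and number of changes.
X = rng.uniform(0.0, 1.0, size=(n, 4))
beta_true = np.array([2.0, 1.2, -1.5, 0.8])
error_density = X @ beta_true + 0.5 + rng.normal(0.0, 0.5, n)   # errors/KSLOC

A = np.column_stack([np.ones(n), X])             # add an intercept column
coef, *_ = np.linalg.lstsq(A, error_density, rcond=None)
resid = error_density - A @ coef
r2 = 1.0 - resid.var() / error_density.var()     # variation explained
print(f"R^2 = {r2:.2f}")
```

    The reported 60-80 percent figures correspond to R^2 values in the 0.6-0.8 range for models of this form.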

  2. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is based mainly on predicting the lava flow's advance and velocity. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade to predict lava flow paths and rates of advance in near real time. This type of model, crucial for mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity (around 5 Pa.s) varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot, and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed a theoretical model (Garel et al., JGR, 2012) confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide

  3. An energetic model for macromolecules unfolding in stretching experiments

    PubMed Central

    De Tommasi, D.; Millardi, N.; Puglisi, G.; Saccomandi, G.

    2013-01-01

    We propose a simple approach, based on the minimization of the total (entropic plus unfolding) energy of a two-state system, to describe the unfolding of multi-domain macromolecules (proteins, silks, polysaccharides, nanopolymers). The model is fully analytical and enlightens the role of the different energetic components regulating the unfolding evolution. As an explicit example, we compare the analytical results with a titin atomic force microscopy stretch-induced unfolding experiment showing the ability of the model to quantitatively reproduce the experimental behaviour. In the thermodynamic limit, the sawtooth force–elongation unfolding curve degenerates to a constant force unfolding plateau. PMID:24047874

  4. Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies

    PubMed Central

    Martin, Caitlin; Sun, Wei

    2017-01-01

    Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007

  5. Modeling of high power ICRF heating experiments on TFTR

    SciTech Connect

    Phillips, C.K.; Wilson, J.R.; Bell, M.; Fredrickson, E.; Hosea, J.C.; Majeski, R.; Ramsey, A.; Rogers, J.H.; Schilling, G.; Skinner, C.; Stevens, J.E.; Taylor, G.; Wong, K.L. (Plasma Physics Lab.); Khudaleev, A.; Petrov, M.P.; Murakami, M.

    1993-01-01

    Over the past two years, ICRF heating experiments have been performed on TFTR in the hydrogen minority heating regime with power levels reaching 11.2 MW in helium-4 majority plasmas and 8.4 MW in deuterium majority plasmas. For these power levels, the minority hydrogen ions, which comprise typically less than 10% of the total electron density, evolve into a very energetic, anisotropic non-Maxwellian distribution. Indeed, the excess perpendicular stored energy in these plasmas associated with the energetic minority tail ions is often as high as 25% of the total stored energy, as inferred from magnetic measurements. Enhanced losses of 0.5 MeV protons consistent with the presence of an energetic hydrogen component have also been observed. In ICRF heating experiments on JET at comparable and higher power levels and with similar parameters, it has been suggested that finite banana width effects have a noticeable effect on the ICRF power deposition. In particular, models indicate that finite orbit width effects lead to a reduction in the total stored energy and of the tail energy in the center of the plasma, relative to that predicted by the zero-banana-width models. In this paper, detailed comparisons between the calculated ICRF power deposition profiles and experimentally measured quantities will be presented which indicate that significant deviations from the zero-banana-width models occur even for modest power levels (P_rf ≈ 6 MW) in the TFTR experiments.

  6. Modeling of high power ICRF heating experiments on TFTR

    SciTech Connect

    Phillips, C.K.; Wilson, J.R.; Bell, M.; Fredrickson, E.; Hosea, J.C.; Majeski, R.; Ramsey, A.; Rogers, J.H.; Schilling, G.; Skinner, C.; Stevens, J.E.; Taylor, G.; Wong, K.L.; Khudaleev, A.; Petrov, M.P.; Murakami, M.

    1993-04-01

    Over the past two years, ICRF heating experiments have been performed on TFTR in the hydrogen minority heating regime with power levels reaching 11.2 MW in helium-4 majority plasmas and 8.4 MW in deuterium majority plasmas. For these power levels, the minority hydrogen ions, which comprise typically less than 10% of the total electron density, evolve into a very energetic, anisotropic non-Maxwellian distribution. Indeed, the excess perpendicular stored energy in these plasmas associated with the energetic minority tail ions is often as high as 25% of the total stored energy, as inferred from magnetic measurements. Enhanced losses of 0.5 MeV protons consistent with the presence of an energetic hydrogen component have also been observed. In ICRF heating experiments on JET at comparable and higher power levels and with similar parameters, it has been suggested that finite banana width effects have a noticeable effect on the ICRF power deposition. In particular, models indicate that finite orbit width effects lead to a reduction in the total stored energy and of the tail energy in the center of the plasma, relative to that predicted by the zero-banana-width models. In this paper, detailed comparisons between the calculated ICRF power deposition profiles and experimentally measured quantities will be presented which indicate that significant deviations from the zero-banana-width models occur even for modest power levels (P_rf ≈ 6 MW) in the TFTR experiments.

  7. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.” 101.95 Section 101.95 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION FOOD LABELING Specific Requirements...

  8. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science, as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in the analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between

  9. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
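    The core race mechanism, stripped of the paper's dependence structure, can be sketched with independent Poisson accumulators: each alternative accrues events at its own rate, and the first to reach a count threshold is chosen, so higher-rate (higher-utility) alternatives win more often. The rates and threshold below are arbitrary illustrative values.

```python
import numpy as np

def poisson_race(rates, threshold, n_trials, rng):
    """Independent Poisson race: each alternative accumulates events at its
    own rate; the first to collect `threshold` events is chosen.  (The
    paper's model adds dependence between racers; this sketch keeps them
    independent for brevity.)"""
    rates = np.asarray(rates, dtype=float)
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # Time to the k-th event of a Poisson process ~ Gamma(k, scale=1/rate)
        finish = rng.gamma(shape=threshold, scale=1.0 / rates)
        choices[t] = finish.argmin()        # first finisher wins
    return choices

rng = np.random.default_rng(4)
choices = poisson_race(rates=[3.0, 2.0, 1.0], threshold=5,
                       n_trials=10_000, rng=rng)
shares = np.bincount(choices, minlength=3) / choices.size
# shares[0] is largest: the highest-rate alternative is chosen most often.
```

    The gamma trick avoids simulating event-by-event arrivals: the finishing time of each racer is drawn in one shot from its known distribution.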

  10. Opinion Formation by Social Influence: From Experiments to Modeling

    PubMed Central

    Chacoma, Andrés; Zanette, Damián H.

    2015-01-01

    Predicting different forms of collective behavior in human populations, as the outcome of individual attitudes and their mutual influence, is a question of major interest in social sciences. In particular, processes of opinion formation have been theoretically modeled on the basis of a formal similarity with the dynamics of certain physical systems, giving rise to an extensive collection of mathematical models amenable to numerical simulation or even to exact solution. Empirical ground for these models is however largely missing, which confines them to the level of mere metaphors of the real phenomena they aim at explaining. In this paper we present results of an experiment which quantifies the change in the opinions given by a subject on a set of specific matters under the influence of others. The setup is a variant of a recently proposed experiment, where the subject’s confidence in his or her opinion was evaluated as well. In our realization, which records the quantitative answers of 85 subjects to 20 questions before and after an influence event, the focus is put on characterizing the change in answers and confidence induced by such influence. Similarities and differences with the previous version of the experiment are highlighted. We find that confidence changes are to a large extent independent of any other recorded quantity, while opinion changes are strongly modulated by the original confidence. On the other hand, opinion changes are not influenced by the initial difference with the reference opinion. The typical time scales on which opinion varies are moreover substantially longer than those of confidence change. Experimental results are then used to estimate parameters for a dynamical agent-based model of opinion formation in a large population. In the context of the model, we study the convergence to full consensus and the effect of opinion leaders on the collective distribution of opinions. PMID:26517825
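    A minimal agent-based sketch in the spirit of the model (not the authors' actual equations): each interaction moves a subject's opinion toward a reference opinion by a step whose size is modulated by the subject's confidence rather than by the opinion gap, echoing the experimental finding; confidence itself creeps upward on a slower scale. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps = 200, 400

opinions = rng.uniform(0.0, 10.0, N)    # quantitative answers on a 0-10 scale
confidence = rng.uniform(0.0, 1.0, N)   # 0 = easily swayed, 1 = immovable

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)   # agent j's answer is shown to i
    # Step size depends on i's confidence, not on the size of the opinion gap,
    # mirroring the paper's observation; only the direction follows the gap.
    step = 0.5 * (1.0 - confidence[i]) * np.sign(opinions[j] - opinions[i])
    opinions[i] = np.clip(opinions[i] + step, 0.0, 10.0)
    # Confidence drifts up slowly, on a longer time scale than opinion change.
    confidence[i] = min(1.0, confidence[i] + 0.01)

spread = opinions.std()   # dispersion of opinions after the influence events
```

    Varying the confidence-update rate against the opinion-step size is one way to explore the two distinct time scales reported in the experiment.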

  11. Opinion Formation by Social Influence: From Experiments to Modeling.

    PubMed

    Chacoma, Andrés; Zanette, Damián H

    2015-01-01

    Predicting different forms of collective behavior in human populations, as the outcome of individual attitudes and their mutual influence, is a question of major interest in social sciences. In particular, processes of opinion formation have been theoretically modeled on the basis of a formal similarity with the dynamics of certain physical systems, giving rise to an extensive collection of mathematical models amenable to numerical simulation or even to exact solution. Empirical ground for these models is however largely missing, which confines them to the level of mere metaphors of the real phenomena they aim at explaining. In this paper we present results of an experiment which quantifies the change in the opinions given by a subject on a set of specific matters under the influence of others. The setup is a variant of a recently proposed experiment, where the subject's confidence in his or her opinion was evaluated as well. In our realization, which records the quantitative answers of 85 subjects to 20 questions before and after an influence event, the focus is put on characterizing the change in answers and confidence induced by such influence. Similarities and differences with the previous version of the experiment are highlighted. We find that confidence changes are to a large extent independent of any other recorded quantity, while opinion changes are strongly modulated by the original confidence. On the other hand, opinion changes are not influenced by the initial difference with the reference opinion. The typical time scales on which opinion varies are moreover substantially longer than those of confidence change. Experimental results are then used to estimate parameters for a dynamical agent-based model of opinion formation in a large population. In the context of the model, we study the convergence to full consensus and the effect of opinion leaders on the collective distribution of opinions.

  12. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured-grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection having minimal effect.

  13. Modeling and experiments with a subsea laser radar system

    NASA Astrophysics Data System (ADS)

    Bjarnar, Morten L.; Klepsvik, John O.; Nilsen, Jan E.

    1991-12-01

    Subsea laser radar has a potential for accurate 3-D imaging in water. A prototype system has been developed at Seatex A/S in Norway as a prestudy for the design of an underwater laser radar scanning system. In parallel with the experimental studies, a numerical radiometric model has been developed as an aid in the system design. This model simulates a raster-scanning laser radar system for in-water use; the parametric model thus allows for analysis and prediction of the performance of such a sensor system. Experiments have been conducted to test a prototype laser radar system. The experimental system tested uses a Q-switched, frequency-doubled Nd:YAG solid-state laser operating at a wavelength of 532 nm, which is close to optimal for use in water due to the small light attenuation around this wavelength in seawater. The laser has an energy output of 6 μJ per pulse at a 1 kHz pulse repetition frequency (PRF), and the receiver aperture is approximately 17 cm². The laser radar prototype was mounted on an accurate pan-and-tilt unit in order to test the 3-D imaging capabilities. The ultimate goal of the development is to provide an optical 3-D imaging tool for distances comparable to high-frequency sonars, with a range capability of approximately 30-50 m. The results from these experiments are presented. The present implementation of the scanning laser radar model is described and some outputs from the simulation are shown.
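    A back-of-the-envelope radiometric check of the sort such a model performs is consistent with the 30-50 m goal. The sketch below applies a Lambertian-target lidar equation with two-way Beer-Lambert attenuation; the pulse energy and receiver aperture come from the record, while the pulse width, target reflectance, attenuation coefficient and detection threshold are assumptions, not the prototype's actual specification.

```python
import numpy as np

E_pulse = 6e-6           # J per pulse (6 uJ, from the record)
tau = 10e-9              # assumed pulse width, s
P_t = E_pulse / tau      # peak transmitted power, W
A_r = 17e-4              # receiver aperture, m^2 (17 cm^2, from the record)
rho = 0.2                # assumed Lambertian target reflectance
c_w = 0.15               # assumed beam attenuation coefficient, 1/m
P_min = 1e-9             # assumed detectable return power, W

R = np.linspace(1.0, 60.0, 60)   # range, m
# Lambertian-target lidar equation with two-way exponential attenuation:
P_r = P_t * (rho / np.pi) * (A_r / R**2) * np.exp(-2.0 * c_w * R)

detectable = R[P_r > P_min]
max_range = detectable.max() if detectable.size else 0.0
```

    With these assumed coefficients the detectable range lands in the tens of meters, the same regime as the stated 30-50 m target; the answer is, of course, dominated by the assumed attenuation coefficient of the water.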

  14. Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling

    NASA Technical Reports Server (NTRS)

    Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.

    1999-01-01

    Experiments were performed under terrestrial gravity (1 g) and during parabolic flights (10^-2 g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and electron microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transport of mass, energy, and momentum associated with the solidification phenomena for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first, the liquid containing freely moving equiaxed grains was described through the relative-viscosity concept. In the second stage, when a fixed dendritic network had formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during the experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.

  15. Reverse draw solute permeation in forward osmosis: modeling and experiments.

    PubMed

    Phillip, William A; Yong, Jui Shan; Elimelech, Menachem

    2010-07-01

    Osmotically driven membrane processes are an emerging set of technologies that show promise in water and wastewater treatment, desalination, and power generation. The effective operation of these systems requires that the reverse flux of draw solute from the draw solution into the feed solution be minimized. A model was developed that describes the reverse permeation of draw solution across an asymmetric membrane in forward osmosis operation. Experiments were carried out to validate the model predictions with a highly soluble salt (NaCl) as a draw solution and a cellulose acetate membrane designed for forward osmosis. Using independently determined membrane transport coefficients, strong agreement between the model predictions and experimental results was observed. Further analysis shows that the reverse flux selectivity, the ratio of the forward water flux to the reverse solute flux, is a key parameter in the design of osmotically driven membrane processes. The model predictions and experiments demonstrate that this parameter is independent of the draw solution concentration and the structure of the membrane support layer. The value of the reverse flux selectivity is determined solely by the selectivity of the membrane active layer.
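Under solution-diffusion transport in the active layer and van 't Hoff osmotic pressure (assumptions consistent with, though not guaranteed to match, the paper's derivation), the reverse flux selectivity reduces to a ratio of membrane coefficients, which makes its independence from draw concentration and support structure plausible. Parameter values below are illustrative.

```python
# Sketch: reverse flux selectivity Jw/Js for a solution-diffusion active layer
# with van 't Hoff osmotic pressure (pi = n*c*R*T). Under these assumptions the
# ratio reduces to A*n*R*T/B: independent of draw concentration and of the
# support layer. A and B values below are illustrative, not the paper's.
R_GAS = 8.314   # J/(mol K)
T_K = 298.15    # temperature, K
N_IONS = 2      # dissociated species per formula unit of NaCl

def reverse_flux_selectivity(A: float, B: float) -> float:
    """Jw/Js = A*n*R*T/B, with A the water permeability (m s^-1 Pa^-1) and
    B the solute permeability (m/s); the result is in m^3 of water per mol."""
    return A * N_IONS * R_GAS * T_K / B

sel = reverse_flux_selectivity(A=1e-12, B=1e-7)  # illustrative coefficients
```

Note that doubling the active-layer water permeability A doubles the selectivity, while the draw concentration never enters.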

  16. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE: integrating a number of codes which capture the detailed physics at each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  17. Beyond Performance: A Motivational Experiences Model of Stereotype Threat

    PubMed Central

    Thoman, Dustin B.; Smith, Jessi L.; Brown, Elizabeth R.; Chase, Justin; Lee, Joo Young K.

    2013-01-01

    The contributing role of stereotype threat (ST) to learning and performance decrements for stigmatized students in highly evaluative situations has been extensively documented and is now widely known by educators and policy makers. However, recent research illustrates that underrepresented and stigmatized students’ academic and career motivations are influenced by ST more broadly, particularly through influences on achievement orientations, sense of belonging, and intrinsic motivation. Such a focus moves conceptualizations of ST effects in education beyond the influence on a student’s performance, skill level, and feelings of self-efficacy per se to experiencing greater belonging uncertainty and lower interest in stereotyped tasks and domains. These negative experiences are associated with important outcomes such as decreased persistence and domain identification, even among students who are high in achievement motivation. In this vein, we present and review support for the Motivational Experiences Model of ST, a self-regulatory framework for integrating research on ST, achievement goals, sense of belonging, and intrinsic motivation to make predictions for how stigmatized students’ motivational experiences are maintained or disrupted, particularly over long periods of time. PMID:23894223

  18. A curriculum model for an integrated senior year clinical experience.

    PubMed

    Wukasch, R N; Blue, C L; Overbay, J

    2000-01-01

    Transformations in the delivery of health care from hospital to community have brought about many changes in nursing practice. These, in turn, have necessitated alterations in the education of nursing students, the curricula, and clinical experiences. Confident that nursing is an independent practice, exclusive of the health care setting, our faculty decided to direct our teaching efforts to reflect changes in health care delivery. We restructured our baccalaureate nursing program's senior level clinical education experience to prepare students to meet the needs of the clients we serve--the community--and the demands of professional nursing education. In doing so, we have supported Ryan's definition of community, which includes "all settings where consumers seek health care" (1, p. 140). In response to the recommendation by the Pew Health Professions Commission for new models of content integration "between education and the highly managed and integrated systems of care" (2, p. 51), a decision was made to merge three senior level clinical courses--pediatrics, public health, and nursing leadership and management--into one integrated experience. This process required an examination of collective values and beliefs with respect to course content and learning experiences. The challenge was to examine "sacred cows" and eliminate redundancies and replication of learning activities.

  19. Medical students' emotional development in early clinical experience: a model.

    PubMed

    Helmich, Esther; Bolhuis, Sanneke; Laan, Roland; Dornan, Tim; Koopmans, Raymond

    2014-08-01

    Dealing with emotions is a critical feature of professional behaviour. There are no comprehensive theoretical models, however, explaining how medical students learn about emotions. We aimed to explore factors affecting their emotions and how they learn to deal with emotions in themselves and others. During a first-year nursing attachment in hospitals and nursing homes, students wrote daily about their most impressive experiences, explicitly reporting what they felt, thought, and did. In a subsequent interview, they discussed those experiences in greater detail. Following a grounded theory approach, we conducted a constant comparative analysis, collecting and then interpreting data, and allowing the interpretation to inform subsequent data collection. Impressive experiences set up tensions, which gave rise to strong emotions. We identified four 'axes' along which tensions were experienced: 'idealism versus reality', 'critical distance versus adaptation', 'involvement versus detachment' and 'feeling versus displaying'. We found many factors that influenced how respondents relieved those tensions; their personal attributes and their social relationships both inside and outside the medical community were important ones. Respondents' positions along the different dimensions, as determined by the balance between attributes and tensions, shaped their learning outcomes. Medical students' emotional development occurs through active participation in medical practice and having impressive experiences within relationships with patients and others on wards. Tensions along four dimensions give rise to strong emotions. Gaining insight into the many conditions that influence students' learning about emotions might support educators and supervisors in fostering medical students' emotional and professional development.

  20. Modeling of the jack rabbit series of experiments with a temperature based reactive burn model

    NASA Astrophysics Data System (ADS)

    Desbiens, Nicolas

    2017-01-01

    The Jack Rabbit experiments, performed by Lawrence Livermore National Laboratory, focus on detonation wave corner turning and shock desensitization. Indeed, while important for safety or charge design, the behaviour of explosives in these regimes is poorly understood. In this paper, our temperature based reactive burn model is calibrated for LX-17 and compared to the Jack Rabbit data. It is shown that our model can reproduce the corner turning and shock desensitization behaviour in four of the five experiments.

  1. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than those of filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  2. Modeling, simulation, and experiments of coating growth on nanofibers

    NASA Astrophysics Data System (ADS)

    Clemons, C. B.; Hamrick, P.; Heminger, J.; Kreider, K. L.; Young, G. W.; Buldum, A.; Evans, E.; Zhang, G.

    2008-02-01

    This work is a comparison of modeling and simulation results with experiments for an integrated experimental/modeling investigation of a procedure to coat nanofibers and core-clad nanostructures with thin film materials using plasma enhanced physical vapor deposition. In the experimental effort, electrospun polymer nanofibers are coated with metallic materials under different operating conditions to observe changes in the coating morphology. The modeling effort focuses on linking simple models at the reactor level, nanofiber level and atomic level to form a comprehensive model. The comprehensive model leads to the definition of an evolution equation for the coating free surface around an isolated nanofiber. This evolution equation was previously derived and solved under conditions of a nearly circular coating, with a concentration field that was only radially dependent and that was independent of the location of the coating free surface. These assumptions permitted the development of analytical expressions for the concentration field. The present work does not impose the above-mentioned conditions and considers numerical simulations of the concentration field that couple with level set simulations of the evolution equation for the coating free surface. Further, the cases of coating an isolated fiber as well as a multiple fiber mat are considered. Simulation results are compared with experimental results as the reactor pressure and power, as well as the nanofiber mat porosity, are varied.
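A minimal illustration of the level-set machinery mentioned above: a 1-D front advancing at constant normal speed under a first-order upwind scheme. This is a toy sketch of the numerical technique, not the paper's coupled concentration/free-surface model; grid and speed values are arbitrary.

```python
import numpy as np

# Toy 1-D level set: the interface is the zero crossing of phi, evolved by
# phi_t + F * |phi_x| = 0 with constant normal speed F > 0 (upwind scheme).
def advance_front(phi: np.ndarray, F: float, dx: float, dt: float, steps: int) -> np.ndarray:
    phi = phi.copy()
    for _ in range(steps):
        dminus = np.diff(phi, prepend=phi[0]) / dx   # backward differences
        dplus = np.diff(phi, append=phi[-1]) / dx    # forward differences
        # Godunov-type upwind gradient magnitude for F > 0
        grad = np.sqrt(np.maximum(dminus, 0.0) ** 2 + np.minimum(dplus, 0.0) ** 2)
        phi -= dt * F * grad
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.2                    # signed distance: interface starts at x = 0.2
phi1 = advance_front(phi0, F=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
# After t = 0.2 the interface should sit near x = 0.2 + F*t = 0.4
```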

  3. Modeling ion-induced electrons in the High Current Experiment

    SciTech Connect

    Stoltz, P.H.; Verboncoeur, J.P.; Cohen, R.H.; Molvik, A.W.; Vay, J.-L.; Veitzer, S.A.

    2006-05-15

    A primary concern for high current ion accelerators is contaminant electrons. These electrons can interfere with the beam ions, causing emittance growth and beam loss. Numerical simulation is a main tool for understanding the interaction of the ion beam with the contaminant electrons, but these simulations then require accurate models of electron generation. These models include ion-induced electron emission from ions hitting the beam pipe walls or diagnostics. However, major codes for modeling ion beam transport are written in different programming languages and used on different computing platforms. For electron generation models to be maximally useful, researchers should be able to use them easily from many languages and platforms. A model of ion-induced electrons including the electron energy distribution is presented here, including a discussion of how to use the Babel software tool to make these models available in multiple languages and how to use the GNU Autotools to make them available on multiple platforms. An application to simulation of the end region of the High Current Experiment is shown. These simulations show formation of a virtual cathode with a potential energy well of amplitude 12.0 eV, approximately six times the most probable energy of the ion-induced electrons. Oscillations of the virtual cathode could lead to possible longitudinal and transverse modulation of the density of the electrons moving out of the virtual cathode.

  4. Dynamic crack initiation toughness : experiments and peridynamic modeling.

    SciTech Connect

    Foster, John T.

    2009-10-01

    This is a dissertation on research conducted studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integral-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input.

  5. Modeling of and experiments on electromagnetic levitation for materials processing

    NASA Astrophysics Data System (ADS)

    Hyers, Robert W.

    Electromagnetic levitation (EML) is an important experimental technique for research in materials processing. It has been applied for many years to a wide variety of research areas, including studies of nucleation and growth, phase selection, reaction kinetics, and thermophysical property measurements. The work presented here contributes to a more fundamental understanding of three aspects of levitation systems: modeling of electromagnetic effects, modeling of fluid flow characteristics, and experiments to measure surface tension and viscosity in microgravity. In this work, the interaction between the electromagnetic field and the sample was modeled, and experiments to measure the surface tension and viscosity of liquid metal droplets were performed. The models use a 2-D axisymmetric formulation and the method of mutual inductances to calculate the currents induced in the sample. The magnetic flux density was calculated from the Biot-Savart law, and the force distribution obtained. Parametric studies of the total force and induced heating on the sample were carried out, as well as a study of the influence of different parameters on the internal flows in a liquid droplet. The oscillating current frequency has an important effect on the feasible operating range of an EML system. Optimization of both heating and positioning are discussed, and the use of frequencies far from those in current use for levitation of small droplets provides improved results. The dependences of the force and induced power on current, frequency, sample conductivity, and sample size are given. A model coupling the magnetic force calculations to a commercial finite-element fluid dynamics program is used to characterize the flows in a liquid sample, including transitions in the flow pattern. The dependence of fluid flow velocity on positioning force, sample viscosity, and oscillating current frequency is presented. These models were applied to the design of thermophysical property

  6. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  7. FuGE: Functional Genomics Experiment Object Model.

    PubMed

    Jones, Andrew R; Pizarro, Angel; Spellman, Paul; Miller, Michael

    2006-01-01

    This is an interim report on the Functional Genomics Experiment (FuGE) Object Model. FuGE is a framework for creating data standards for high-throughput biological experiments, developed by a consortium of researchers from academia and industry. FuGE supports rich annotation of samples, protocols, instruments, and software, as well as providing extension points for technology specific details. It has been adopted by microarray and proteomics standards bodies as a basis for forthcoming standards. It is hoped that standards developers for other omics techniques will join this collaborative effort; widespread adoption will allow uniform annotation of common parts of functional genomics workflows, reduce standard development and learning times through the sharing of consistent practice, and ease the construction of software for accessing and integrating functional genomics data.

  8. Rapid Testing of Fresh Concrete

    DTIC Science & Technology

    1975-05-01

    Cementforening, Oslo, 1952. Orchard, D. F., "The Effect of the Vacuum Process on Concrete Mix Design," Symposium on Mix Design and Quality Control... ASTM, Vol 33, Part I (1933), pp 297-307. "...Designed for Use in Determining Constituents of Fresh Concrete," Public Roads, Vol 13, No. 9 (1932), p 151. Cook, G. C., "Effect of Time of Haul...

  9. Selection Experiments in the Penna Model for Biological Aging

    NASA Astrophysics Data System (ADS)

    Medeiros, G.; Idiart, M. A.; de Almeida, R. M. C.

    We consider the Penna model for biological aging to investigate correlations between early fertility and late life survival rates in populations at equilibrium. We consider inherited initial reproduction ages together with a reproduction cost, translated into a probability that mother and offspring die at birth, depending on the mother's age. For convenient sets of parameters, the equilibrated populations present genetic variability with regard to both the genetically programmed death age and the initial reproduction age. In the asexual Penna model, a negative correlation between early life fertility and late life survival rates naturally emerges in the stationary solutions. In the sexual Penna model, selection experiments are performed where individuals are sorted by initial reproduction age from the equilibrated populations and the separated populations are evolved independently. After a transient, a negative correlation between early fertility and late age survival rates also emerges, in the sense that populations that start reproducing earlier present a smaller average genetically programmed death age. These effects appear due to the age structure of populations in the steady state solution of the evolution equations. We claim that the same demographic effects may be playing an important role in selection experiments in the laboratory.
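For readers unfamiliar with the Penna model, the following is a minimal asexual bit-string sketch. The parameters are conventional textbook choices, not those of the study, which additionally makes the initial reproduction age heritable and adds a reproduction cost.

```python
import random

# Minimal asexual Penna bit-string model of aging (toy sketch; the study's
# variants add a heritable initial reproduction age and a reproduction cost).
GENOME_BITS = 32   # one bit per time step of life; a set bit = deleterious mutation
T_THRESHOLD = 3    # expressed mutations at or below current age that kill
R_MIN = 8          # minimum reproduction age (fixed here; heritable in the paper)
CAPACITY = 10_000  # Verhulst (crowding) limit

def step(pop, rng):
    survival = 1.0 - len(pop) / CAPACITY  # Verhulst survival probability
    nxt = []
    for age, genome in pop:
        expressed = bin(genome & ((1 << (age + 1)) - 1)).count("1")
        if expressed >= T_THRESHOLD or rng.random() >= survival:
            continue  # death by mutation load or by crowding
        if age >= R_MIN:  # one offspring carrying one extra random mutation
            nxt.append((0, genome | (1 << rng.randrange(GENOME_BITS))))
        if age + 1 < GENOME_BITS:
            nxt.append((age + 1, genome))
    return nxt

rng = random.Random(1)
pop = [(0, 0)] * 1000  # founders: age 0, mutation-free genomes
for _ in range(200):
    pop = step(pop, rng)
# The population equilibrates with an age structure shaped by mutation load.
```

It is exactly this emergent age structure that the abstract invokes to explain the early-fertility/late-survival trade-off.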

  10. Models from experiments: combinatorial drug perturbations of cancer cells

    PubMed Central

    Nelander, Sven; Wang, Weiqing; Nilsson, Björn; She, Qing-Bai; Pratilas, Christine; Rosen, Neal; Gennemark, Peter; Sander, Chris

    2008-01-01

    We present a novel method for deriving network models from molecular profiles of perturbed cellular systems. The network models aim to predict quantitative outcomes of combinatorial perturbations, such as drug pair treatments or multiple genetic alterations. Mathematically, we represent the system by a set of nodes, representing molecular concentrations or cellular processes, a perturbation vector and an interaction matrix. After perturbation, the system evolves in time according to differential equations with built-in nonlinearity, similar to Hopfield networks, capable of representing epistasis and saturation effects. For a particular set of experiments, we derive the interaction matrix by minimizing a composite error function, aiming at accuracy of prediction and simplicity of network structure. To evaluate the predictive potential of the method, we performed 21 drug pair treatment experiments in a human breast cancer cell line (MCF7) with observation of phospho-proteins and cell cycle markers. The best derived network model rediscovered known interactions and contained interesting predictions. Possible applications include the discovery of regulatory interactions, the design of targeted combination therapies and the engineering of molecular biological networks. PMID:18766176
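A toy version of the forward model described (nodes evolving under a Hopfield-like saturating ODE driven by a perturbation vector u and an interaction matrix W) illustrates why pairwise responses are not additive. All numbers here are illustrative, not from the paper.

```python
import numpy as np

# Toy forward model in the described spirit: dx/dt = tanh(W @ x + u) - x,
# with W the interaction matrix and u a (drug) perturbation vector.
def steady_state(W, u, dt=0.01, steps=5000):
    """Integrate to (an approximate) steady state from the unperturbed origin."""
    x = np.zeros(len(u))
    for _ in range(steps):
        x += dt * (np.tanh(W @ x + u) - x)
    return x

W = np.array([[0.0, 0.8],    # node 1 activates node 0
              [-0.5, 0.0]])  # node 0 inhibits node 1
u_a = np.array([1.0, 0.0])   # "drug A" perturbs node 0
u_b = np.array([0.0, 1.0])   # "drug B" perturbs node 1
x_a, x_b, x_ab = (steady_state(W, u) for u in (u_a, u_b, u_a + u_b))
# Because tanh saturates, the pair response x_ab differs from x_a + x_b,
# which is the kind of epistasis/saturation effect the model represents.
```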

  11. Using a high biomass plant Pennisetum hydridum to phyto-treat fresh municipal sewage sludge.

    PubMed

    Hei, Liang; Lee, Charles C C; Wang, Hui; Lin, Xiao-Yan; Chen, Xiao-Hong; Wu, Qi-Tang

    2016-10-01

    The study was carried out to investigate the use of a high biomass plant, Pennisetum hydridum, to treat municipal sewage sludge (MSS). An experiment composed of plots with four treatments (soil, fresh sludge, soil-sludge mixture, and phyto-treated sludge) was conducted. It showed that the plant could not survive directly in fresh MSS when cultivated from stem cuttings. A further experiment, in which cuttings of P. hydridum were first incubated in a soil nurse medium and then transplanted into fresh MSS, showed that the plants grew normally in fresh MSS. The pilot experiment comparing P. hydridum and Alocasia macrorrhiza showed that the total yield and nutrient amount of P. hydridum were 9.2 times and 3.6 times those of A. macrorrhiza, respectively. After plant treatment, the MSS was dried, stabilized, and suitable to be landfilled or incinerated, with a calorific value of about 5.6 MJ/kg (compared to the initial value of 1.9 MJ/kg for fresh sludge).

  12. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw

  13. Analysis of Fresh and Aged Aerosols Produced by Biomass Combustion

    NASA Astrophysics Data System (ADS)

    Holden, A. S.; Desyaterik, Y.; Laskin, A.; Laskin, J.; Schichtel, B. A.; Malm, W. C.; Kreidenweis, S. M.; Collett, J. L.

    2010-12-01

    Emissions from biomass combustion are known to influence human health, visibility, the global radiation budget, and cloud properties. Much research has been done looking at the primary emissions of wild and prescribed fires. As a result, primary smoke marker compounds, such as levoglucosan (a combustion product of cellulose), have been identified and used to determine the impact of fires on ambient air quality. However, little is known about the chemical processing occurring within smoke plumes and the resulting production of secondary organic aerosols (SOA). This likely leads to an underestimation of biomass burning impacts on particulate organic carbon (OC), often used in large-scale air quality model simulations. To better understand biomass smoke aging, high-volume PM2.5 filter samples from two studies are compared here. Data from the Fire Lab at Missoula Experiments (FLAME) represent fresh smoke, sampled at the source of the fire. Aged smoke was collected during the Yosemite Aerosol Characterization Study (YACS), where the sampling site was days downwind from forest fires. Additional samples of aged smoke were collected at Rocky Mountain National Park and the Colorado State University Atmospheric Science Department, which were both affected by transported smoke from wildfires in southern California. Aqueous extracts of these samples have been analyzed using Liquid Chromatography coupled with a Time-of-Flight Mass Spectrometer (LC-TOF-MS) with electrospray ionization, as well as with a Linear Trap Quadrupole-Orbitrap Mass Spectrometer (LTQ-Orbitrap MS). Samples of fresh and aged smoke will be compared to help identify processes occurring during biomass smoke aging and transport. Preliminary results have shown oxidation products of monoterpenes, such as limonene, in all samples. Analysis has also shown an abundance of nitrogen-containing compounds in samples affected by biomass smoke, as well as an increase in oxidation with aged smoke samples.

  14. Electroelastic optical fiber positioning with submicrometer accuracy: Model and experiment

    NASA Astrophysics Data System (ADS)

    Kofod, Guggi; Mc Carthy, Denis N.; Krissler, Jan; Lang, Günter; Jordan, Grace

    2009-05-01

    We present accurate electromechanical measurements on a balanced push-pull dielectric elastomer actuator, demonstrating submicrometer-accurate position control. An analytical model based on a simplified pure-shear dielectric elastomer film with prestretch is found to capture the voltage-displacement behavior, with reduced output due to the boundary conditions. Two complementary experiments show that actuation coefficients of 0.5-1 nm/V² are obtainable with the demonstrated device, enabling motion control with submicrometer accuracy in a voltage range below 200 V.
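The quoted actuation coefficient implies a quadratic voltage-displacement law x = kV². A quick worked example (the 10 mV voltage step below is an assumption, not from the abstract) shows how submicrometer resolution coexists with a multi-micrometer stroke:

```python
# Quadratic actuation law implied by a coefficient k in nm/V^2: x = k * V^2.
# k = 0.5-1 nm/V^2 and V < 200 V come from the abstract; the 10 mV voltage
# step used for the resolution estimate is an illustrative assumption.
def displacement_nm(k_nm_per_V2: float, V: float) -> float:
    return k_nm_per_V2 * V * V

def resolution_nm(k_nm_per_V2: float, V: float, dV: float) -> float:
    """Local position step dx ~ 2*k*V*dV for a small voltage increment dV."""
    return 2.0 * k_nm_per_V2 * V * dV

full_stroke = displacement_nm(1.0, 200.0)   # 40,000 nm = 40 micrometres
step = resolution_nm(1.0, 200.0, 0.01)      # ~4 nm per 10 mV step at 200 V
```

So even at the top of the voltage range, millivolt-scale drive resolution keeps the position step well below a micrometre.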

  15. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we will discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also, planned work to include dielectrophoretic forces, van der Waals forces, and advanced mechanical forces in the software will be discussed.
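A screened Coulombic pair force of the kind added to the DEM code is commonly written in Yukawa form. The sketch below uses that standard form with illustrative charges and screening length; it is an assumption for illustration, not code drawn from the software itself.

```python
import math

# Screened (Yukawa-form) Coulomb interaction between two point charges:
#   U(r) = q1*q2/(4*pi*eps0) * exp(-r/lambda)/r,  F = -dU/dr.
# Charges and screening length below are illustrative only.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def screened_coulomb_force(q1: float, q2: float, r: float, lam: float) -> float:
    """Signed radial force (positive = repulsive) at separation r (m)."""
    k = 1.0 / (4.0 * math.pi * EPS0)
    return k * q1 * q2 * math.exp(-r / lam) * (1.0 / r**2 + 1.0 / (r * lam))

f_screened = screened_coulomb_force(1e-12, 1e-12, 1e-3, 1e-2)
f_bare = screened_coulomb_force(1e-12, 1e-12, 1e-3, float("inf"))
# Screening always reduces the bare Coulomb force: f_screened < f_bare.
```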

  16. Experiments of reconstructing discrete atmospheric dynamic models from data (I)

    NASA Astrophysics Data System (ADS)

    Lin, Zhenshan; Zhu, Yanyu; Deng, Ziwang

    1995-03-01

    In this paper, we give some experimental results of our study in reconstructing discrete atmospheric dynamic models from data. After a great deal of numerical experiments, we found that the logistic map, x_{n+1} = 1 - μx_n^2, could be used in monthly mean temperature prediction when it was approaching the chaotic region, and its predictive results were in reverse states to the practical data. This means that the nonlinear developing behavior of the monthly mean temperature system is bifurcating back into the critical chaotic states from the chaotic ones.
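The quoted map is easy to reproduce. The sketch below iterates x_{n+1} = 1 - μx_n^2 at one sub-critical and one chaotic parameter value (the specific μ values are illustrative; the period-doubling accumulation point for this form lies near μ ≈ 1.401):

```python
# Iterate the quadratic logistic map x_{n+1} = 1 - mu * x_n^2 and keep the tail.
def iterate_map(mu: float, x0: float = 0.1, n: int = 2000, keep: int = 20):
    x = x0
    tail = []
    for i in range(n):
        x = 1.0 - mu * x * x
        if i >= n - keep:
            tail.append(x)
    return tail

tail_periodic = iterate_map(0.5)  # converges to the fixed point sqrt(3) - 1
tail_chaotic = iterate_map(1.9)   # aperiodic but bounded within [-1, 1]
```

At μ = 0.5 the orbit settles on the stable fixed point x* = sqrt(3) - 1 ≈ 0.732 (the positive root of x = 1 - 0.5x²), while at μ = 1.9 it wanders chaotically over the invariant interval.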

  17. FEM numerical model analysis of magnetic nanoparticle tumor heating experiments.

    PubMed

    Pearce, John A; Petyk, Alicia A; Hoopes, P Jack

    2014-01-01

    Iron oxide nanoparticles are currently under investigation as heating agents for hyperthermic treatment of tumors. Major determinants of effective heating include the biodistribution of magnetic materials, the minimum iron oxide loading required to achieve adequate heating, and practically achievable magnetic field strengths. These are inter-related criteria that ultimately determine the practicability of this approach to tumor treatment. Currently, we lack fundamental engineering design criteria that can be used in treatment planning and assessment. Coupling numerical models to experimental studies illuminates the underlying physical processes and can separate physical processes to determine their relative importance. Further, adding thermal damage and cell death processes to the models provides valuable perspective on the likelihood of successful treatment. FEM numerical models were applied to increase the understanding of a carefully calibrated series of experiments in mouse mammary carcinoma. The numerical model results indicate that tumor loadings equivalent to approximately 1 mg of Fe3O4 per gram of tumor tissue are required to achieve adequate heating in magnetic field strengths of 34 kA/m (rms) at 160 kHz. Further, the models indicate that direct intratumoral injection of the nanoparticles results in between 1 and 20% uptake in the tissues.
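Heating models like these are often coupled to cell death through the Arrhenius thermal-damage integral. The sketch below uses the classic Henriques skin-burn coefficients purely for illustration; tissue- and tumor-specific coefficients differ, and the temperature histories are idealized.

```python
import math

# Arrhenius thermal-damage integral commonly coupled to FEM heating models:
#   Omega(t) = integral of A * exp(-Ea / (R * T(t))) dt,
# with Omega = 1 often taken as the threshold for significant thermal damage.
# A and Ea below are the classic Henriques skin-burn values, illustrative only.
R_GAS = 8.314  # J/(mol K)

def damage_integral(temps_K, dt_s, A=3.1e98, Ea=6.28e5):
    """Accumulate Omega over a sampled temperature history (temps_K, step dt_s)."""
    return sum(A * math.exp(-Ea / (R_GAS * T)) * dt_s for T in temps_K)

# Ten minutes at a constant 43 C accumulates far less damage than at 50 C:
omega_43 = damage_integral([316.15] * 600, 1.0)
omega_50 = damage_integral([323.15] * 600, 1.0)
```

The steep exponential temperature dependence is why modest differences in predicted tumor temperature translate into large differences in predicted cell kill.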

  18. Experiments on fog prediction based on multi-models

    NASA Astrophysics Data System (ADS)

    Shi, C.; Wang, L.; Zhang, H.; Deng, X.

    2010-07-01

    Fog is a boundary-layer weather phenomenon with abundant water droplets or crystals that reduces visibility to less than 1 km. The low visibility on fog days usually endangers all kinds of transportation and causes huge economic losses. To forecast fog numerically, a series of numerical experiments were conducted utilizing a mesoscale meteorological model (MM5) and a one-dimensional (1D) fog model (PAFOG) with detailed microphysics. First, the two models were coupled: MM5 provided the initial and hourly top boundary conditions (IC/BC) for PAFOG, as well as other necessary input parameters, including the low, middle, and high cloud covers, land use, and geostrophic winds. Thus, PAFOG can be run for any area of interest. Then, PAFOG was run using two kinds of ICs/BCs for 9 fog events observed in Nanjing during the winters of 2006 and 2007. Detailed comparisons of model results from MM5 and from PAFOG with the two kinds of ICs/BCs against observations are presented for two cases. The results show that the coupling of the two models is successful. PAFOG outperformed MM5 in simulating radiation fog; however, MM5 performed better than PAFOG in simulating advection fog. This suggests that the coupling method still needs improvement: the impacts of advection on fog cannot be captured by the top boundary conditions alone.

  19. Constitutive modelling of brain tissue: experiment and theory.

    PubMed

    Miller, K; Chinzei, K

    1997-01-01

    Recent developments in computer-integrated and robot-aided surgery--in particular, the emergence of automatic surgical tools and robots--as well as advances in virtual reality techniques, call for closer examination of the mechanical properties of very soft tissues (such as brain, liver, kidney, etc.). The ultimate goal of our research into the biomechanics of these tissues is the development of corresponding, realistic mathematical models. This paper contains experimental results of in vitro, uniaxial, unconfined compression of swine brain tissue and discusses a single-phase, non-linear, viscoelastic tissue model. The experimental results obtained for three loading velocities, ranging over five orders of magnitude, are presented. The applied strain rates were much lower than those applied in previous studies, which focused on injury modelling. The stress-strain curves are concave upward for all compression rates, containing no linear portion from which a meaningful elastic modulus might be determined. The tissue response stiffened as the loading speed increased, indicating a strong stress-strain rate dependence. The use of the single-phase model is recommended for applications in registration, surgical operation planning and training systems, as well as in the control system of an image-guided surgical robot. The material constants for the brain tissue are evaluated. Agreement between the proposed theoretical model and experiment is good for compression levels reaching 30% and for loading velocities varying over five orders of magnitude.

  20. Innovative Fresh Water Production Process for Fossil Fuel Plants

    SciTech Connect

    James F. Klausner; Renwei Mei; Yi Li; Jessica Knight

    2006-09-29

    This project concerns a diffusion driven desalination (DDD) process where warm water is evaporated into a low humidity air stream, and the vapor is condensed out to produce distilled water. Although the process has a low fresh water to feed water conversion efficiency, it has been demonstrated that this process can potentially produce low cost distilled water when driven by low grade waste heat. This report summarizes the progress made in the development and analysis of a Diffusion Driven Desalination (DDD) system. Detailed heat and mass transfer analyses required to size and analyze the diffusion tower using a heated water input are described. The analyses agree quite well with the current data and the information available in the literature. The direct contact condenser has also been thoroughly analyzed and the system performance at optimal operating conditions has been considered using a heated water/ambient air input to the diffusion tower. The diffusion tower has also been analyzed using a heated air input. The DDD laboratory facility has successfully been modified to include an air heating section. Experiments have been conducted over a range of parameters for two different cases: heated air/heated water and heated air/ambient water. A theoretical heat and mass transfer model has been examined for both of these cases and agreement between the experimental and theoretical data is good. A parametric study reveals that for every liquid mass flux there is an air mass flux value where the diffusion tower energy consumption is minimal and an air mass flux where the fresh water production flux is maximized. A study was also performed to compare the DDD process with different inlet operating conditions as well as different packing. It is shown that the heated air/heated water case is more capable of greater fresh water production with the same energy consumption than the ambient air/heated water process at high liquid mass flux. It is also shown that there can be
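
    The distillate production of such a diffusion-driven system follows from a humidity-ratio balance on the dry-air stream between the diffusion tower outlet and the condenser outlet; a minimal sketch using the standard psychrometric relation (variable names and sample figures are illustrative, not the report's data):

```python
def humidity_ratio(p_vapor, p_total=101325.0):
    """Humidity ratio (kg water vapor per kg dry air) for a given
    vapor partial pressure [Pa] at total pressure p_total [Pa]."""
    return 0.622 * p_vapor / (p_total - p_vapor)

def fresh_water_rate(m_dot_air, w_tower_out, w_condenser_out):
    """Distilled-water production [kg/s]: vapor carried out of the
    diffusion tower minus vapor still in the air leaving the condenser."""
    return m_dot_air * (w_tower_out - w_condenser_out)

# e.g. air saturated at ~20 C (vapor pressure ~2339 Pa) carries a
# humidity ratio of roughly 0.015 kg water per kg dry air
w_sat_20C = humidity_ratio(2339.0)
```

The parametric trade-off described above shows up here directly: raising the air flux m_dot_air increases production, but it also increases the energy spent heating and moving the air, so an optimum air-to-liquid mass-flux ratio exists.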

  1. A fresh look at dense hydrogen under pressure. II. Chemical and physical models aiding our understanding of evolving H-H separations.

    PubMed

    Labet, Vanessa; Hoffmann, Roald; Ashcroft, N W

    2012-02-21

    In order to explain the intricate dance of intramolecular (intra-proton-pair) H-H separations observed in a numerical laboratory of calculationally preferred static hydrogen structures under pressure, we examine two effects through discrete molecular models. The first effect, we call it physical, is of simple confinement. We review a salient model already in the literature, that of LeSar and Herschbach, of a hydrogen molecule in a spheroidal cavity. As a complement, we also study a hydrogen molecule confined along a line between two helium atoms. As the size of the cavity/confining distance decreases (a surrogate for increasing pressure), in both models the equilibrium proton separation decreases and the force constant of the stretching vibration increases. The second effect, which is an orbital or chemical factor, emerges from the electronic structure of the known molecular transition metal complexes of dihydrogen. In these the H-H bond is significantly elongated (and the vibron much decreased in frequency) as a result of depopulation of the σ(g) bonding molecular orbital of H(2), and population of the antibonding σ(u)∗ MO. The general phenomenon, long known in chemistry, is analyzed through a specific molecular model of three hydrogen molecules interacting in a ring, a motif found in some candidate structures for dense hydrogen.

  2. Dissolution-precipitation processes in tank experiments for testing numerical models for reactive transport calculations: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Poonoosamy, Jenna; Kosakowski, Georg; Van Loon, Luc R.; Mäder, Urs

    2015-06-01

    In the context of testing reactive transport codes and their underlying conceptual models, a simple 2D reactive transport experiment was developed. The aim was to use simple chemistry and to design a reproducible experiment that is fast to conduct and flexible enough to include several process couplings: advective-diffusive transport of solutes, the effect of liquid phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. A small tank was filled with a reactive layer of strontium sulfate (SrSO4) of two different grain sizes, sandwiched between two layers of essentially non-reacting quartz sand (SiO2). A highly concentrated solution of barium chloride was injected to create an asymmetric flow field. Once the barium chloride reached the reactive layer, it forced the transformation of strontium sulfate into barium sulfate (BaSO4). Due to the higher molar volume of barium sulfate, its precipitation caused a decrease of porosity and lowered the permeability. Changes in the flow field were observed with the help of dye tracer tests. The experiments were modelled using the reactive transport code OpenGeosys-GEM. Tests with non-reactive tracers performed prior to barium chloride injection, as well as the density-driven flow (due to the high concentration of the barium chloride solution), could be well reproduced by the numerical model. To reproduce the mineral bulk transformation with time, two populations of strontium sulfate grains with different kinetic dissolution rates were applied. However, a default porosity-permeability relationship was unable to account for the measured pressure changes. Post mortem analysis of the strontium sulfate reactive medium provided useful information on the chemical and structural changes occurring at the pore scale at the interface, which were considered in our model to reproduce the pressure evolution with time.
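
    A typical default porosity-permeability relationship of the kind the study found insufficient is the Kozeny-Carman form; a sketch (the specific form is an assumption here, since the abstract does not name which default relation was used):

```python
def kozeny_carman_ratio(phi, phi0):
    """Kozeny-Carman estimate of the permeability ratio K/K0 when
    porosity changes from phi0 to phi (e.g. by BaSO4 precipitation)."""
    return (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# halving the porosity of a phi0 = 0.3 medium cuts permeability to ~8% of K0
```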

  3. Dissolution-precipitation processes in tank experiments for testing numerical models for reactive transport calculations: Experiments and modelling.

    PubMed

    Poonoosamy, Jenna; Kosakowski, Georg; Van Loon, Luc R; Mäder, Urs

    2015-01-01

    In the context of testing reactive transport codes and their underlying conceptual models, a simple 2D reactive transport experiment was developed. The aim was to use simple chemistry and to design a reproducible experiment that is fast to conduct and flexible enough to include several process couplings: advective-diffusive transport of solutes, the effect of liquid phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. A small tank was filled with a reactive layer of strontium sulfate (SrSO4) of two different grain sizes, sandwiched between two layers of essentially non-reacting quartz sand (SiO2). A highly concentrated solution of barium chloride was injected to create an asymmetric flow field. Once the barium chloride reached the reactive layer, it forced the transformation of strontium sulfate into barium sulfate (BaSO4). Due to the higher molar volume of barium sulfate, its precipitation caused a decrease of porosity and lowered the permeability. Changes in the flow field were observed with the help of dye tracer tests. The experiments were modelled using the reactive transport code OpenGeosys-GEM. Tests with non-reactive tracers performed prior to barium chloride injection, as well as the density-driven flow (due to the high concentration of the barium chloride solution), could be well reproduced by the numerical model. To reproduce the mineral bulk transformation with time, two populations of strontium sulfate grains with different kinetic dissolution rates were applied. However, a default porosity-permeability relationship was unable to account for the measured pressure changes. Post mortem analysis of the strontium sulfate reactive medium provided useful information on the chemical and structural changes occurring at the pore scale at the interface, which were considered in our model to reproduce the pressure evolution with time.

  4. Solute and heat transport model of the Henry and Hilleke laboratory experiment.

    PubMed

    Langevin, Christian D; Dausman, Alyssa M; Sukop, Michael C

    2010-01-01

    SEAWAT is a coupled version of MODFLOW and MT3DMS designed to simulate variable-density ground water flow and solute transport. The most recent version of SEAWAT, called SEAWAT Version 4, includes new capabilities to represent simultaneous multispecies solute and heat transport. To test the new features in SEAWAT, the laboratory experiment of Henry and Hilleke (1972) was simulated. Henry and Hilleke used warm fresh water to recharge a large sand-filled glass tank. A cold salt water boundary was represented on one side. Adjustable heating pads were used to heat the bottom and left sides of the tank. In the laboratory experiment, Henry and Hilleke observed both salt water and fresh water flow systems separated by a narrow transition zone. After minor tuning of several input parameters with a parameter estimation program, results from the SEAWAT simulation show good agreement with the experiment. SEAWAT results suggest that heat loss to the room was more than expected by Henry and Hilleke, and that multiple thermal convection cells are the likely cause of the widened transition zone near the hot end of the tank. Other computer programs with similar capabilities may benefit from benchmark testing with the Henry and Hilleke laboratory experiment.
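
    Variable-density codes of the SEAWAT type couple solute concentration and temperature to fluid density through a linear equation of state; a sketch (the slope values are representative literature defaults for seawater-like fluids near room temperature, not the parameters estimated in this study):

```python
def fluid_density(conc, temp, rho_ref=1000.0, c_ref=0.0, t_ref=25.0,
                  drho_dc=0.7143, drho_dt=-0.25):
    """Linear equation of state: fluid density [kg/m^3] as a function of
    solute concentration [kg/m^3] and temperature [deg C]."""
    return rho_ref + drho_dc * (conc - c_ref) + drho_dt * (temp - t_ref)

# cold salt water is densest and warm fresh water lightest -- the contrast
# that drives the separate flow systems observed in the tank
```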

  5. Solute and heat transport model of the Henry and Hilleke laboratory experiment

    USGS Publications Warehouse

    Langevin, C.D.; Dausman, A.M.; Sukop, M.C.

    2010-01-01

    SEAWAT is a coupled version of MODFLOW and MT3DMS designed to simulate variable-density ground water flow and solute transport. The most recent version of SEAWAT, called SEAWAT Version 4, includes new capabilities to represent simultaneous multispecies solute and heat transport. To test the new features in SEAWAT, the laboratory experiment of Henry and Hilleke (1972) was simulated. Henry and Hilleke used warm fresh water to recharge a large sand-filled glass tank. A cold salt water boundary was represented on one side. Adjustable heating pads were used to heat the bottom and left sides of the tank. In the laboratory experiment, Henry and Hilleke observed both salt water and fresh water flow systems separated by a narrow transition zone. After minor tuning of several input parameters with a parameter estimation program, results from the SEAWAT simulation show good agreement with the experiment. SEAWAT results suggest that heat loss to the room was more than expected by Henry and Hilleke, and that multiple thermal convection cells are the likely cause of the widened transition zone near the hot end of the tank. Other computer programs with similar capabilities may benefit from benchmark testing with the Henry and Hilleke laboratory experiment. Journal Compilation © 2009 National Ground Water Association.

  6. First experience of vectorizing electromagnetic physics models for detector simulation

    SciTech Connect

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-23

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  7. APL experience with space weather modeling and transition to operations

    NASA Astrophysics Data System (ADS)

    Zanetti, L. J.; Wing, S.

    2009-12-01

    In response to growing space weather needs, the Johns Hopkins University Applied Physics Laboratory (APL) developed and delivered twenty-two state-of-the-art space weather products under the auspices of the University Partnering in Operational Support program, initiated in 1998. These products offer nowcasts and forecasts for the region spanning from the Sun to the Earth. Some of these products have been transitioned to the Air Force Weather Agency and other space weather centers. The transition process is quite different from research modeling, requiring additional staff with different sets of expertise. Recently, APL developed a space weather web page to serve these products to the research and user community. For the initial stage, we have chosen ten of these products to be served from our website, which is presently still under construction. APL’s experience, lessons learned, and successes in developing space weather models, the transition-to-operations process, and the webpage access will be shared and discussed.

  8. Social Aggregation in Pea Aphids: Experiment and Random Walk Modeling

    PubMed Central

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J.; Topaz, Chad M.

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control. PMID:24376691
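
    The individual-level rules described above can be sketched as a two-state correlated random walk; in the paper the transition probabilities and walk parameters depend on nearest-neighbor distance, whereas this minimal single-walker sketch (all names illustrative) holds them constant:

```python
import math
import random

def simulate_walker(steps, p_stop, p_go, speed, turn_sigma, seed=0):
    """Two-state (moving/stationary) correlated random walk: a moving
    walker keeps its heading plus Gaussian turning noise each step;
    stop/go switches are Bernoulli trials with fixed probabilities."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    moving = True
    path = [(x, y)]
    for _ in range(steps):
        if moving:
            heading += rng.gauss(0.0, turn_sigma)  # correlation: small turns
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            if rng.random() < p_stop:
                moving = False
        elif rng.random() < p_go:
            moving = True
        path.append((x, y))
    return path
```

Making p_stop fall and turn_sigma shrink as nearest-neighbor distance grows would recover the ballistic far-from-neighbors versus tortuous near-neighbors behavior reported above.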

  9. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.

  10. Toward an Improved Understanding of the Global Fresh Water Budget

    NASA Technical Reports Server (NTRS)

    Hildebrand, Peter H.

    2005-01-01

    The major components of the global fresh water cycle include evaporation from the land and ocean surfaces, precipitation onto the ocean and land surfaces, the net atmospheric transport of water from oceanic areas over land, and the return flow of water from the land back into the ocean. The additional components of oceanic water transport are few: principally, the mixing of fresh water through the oceanic boundary layer, transport by ocean currents, and sea ice processes. On land the situation is considerably more complex, and includes the deposition of rain and snow on land; water flow in runoff; infiltration of water into the soil and groundwater; storage of water in soil, lakes, streams, and groundwater; polar and glacial ice; and the use of water in vegetation and human activities. Knowledge of the key terms in the fresh water flux budget is poor. Some components of the budget, e.g. precipitation, runoff, and storage, are measured with variable accuracy across the globe. We are just now obtaining precise measurements of the major components of global fresh water storage in global ice and ground water. The easily accessible fresh water sources in rivers, lakes, and snow runoff are only adequately measured in the more affluent portions of the world. Present proposals suggest methods of making global measurements of these quantities from space. At the same time, knowledge of the global fresh water resources under the effects of climate change is of increasing importance as the human population grows. This paper provides an overview of the state of knowledge of the global fresh water budget, evaluating the accuracy of various global water budget measuring and modeling techniques. We review the measurement capabilities of satellite instruments as compared with field validation studies and modeling approaches. Based on these analyses, and on the goal of improved knowledge of the global fresh water budget under the effects of climate change, we suggest

  11. Update on PHELIX Pulsed-Power Hydrodynamics Experiments and Modeling

    NASA Astrophysics Data System (ADS)

    Rousculp, Christopher; Reass, William; Oro, David; Griego, Jeffery; Turchi, Peter; Reinovsky, Robert; Devolder, Barbara

    2013-10-01

    The PHELIX pulsed-power driver is a 300 kJ, portable, transformer-coupled capacitor bank capable of delivering a 3-5 MA, 10 μs pulse into a low-inductance load. Here we describe further testing and hydrodynamics experiments. First, a 4 nH static inductive load has been constructed. This allows for repetitive high-voltage, high-current testing of the system. Results are used in the calibration of simple circuit models and numerical simulations across a range of bank charges (+/-20 < V0 < +/-40 kV). Furthermore, a dynamic liner-on-target load experiment has been conducted to explore the shock-launched transport of particulates (diam. ~ 1 μm) from a surface. The trajectories of the particulates are diagnosed with radiography. Results are compared to 2D hydro-code simulations. Finally, initial studies are underway to assess the feasibility of using the PHELIX driver as an electromagnetic launcher for planar shock-physics experiments. Work supported by the United States DOE under contract DE-AC52-06NA25396.
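
    The simple circuit models used for bank calibration are typically an underdamped series-RLC discharge; a hedged sketch (the sample parameters in the comments and test are back-of-envelope values consistent with a 300 kJ, 40 kV bank, not measured PHELIX quantities):

```python
import math

def rlc_current(t, V0, C, L, R):
    """Underdamped series-RLC capacitor-bank discharge current:
    I(t) = (V0 / (w * L)) * exp(-R * t / (2 * L)) * sin(w * t),
    where w = sqrt(1/(L*C) - (R/(2*L))**2) is the ringing frequency."""
    w = math.sqrt(1.0 / (L * C) - (R / (2.0 * L)) ** 2)
    return (V0 / (w * L)) * math.exp(-R * t / (2.0 * L)) * math.sin(w * t)

# A 300 kJ bank charged to 40 kV implies C = 2E/V**2 = 375 uF; with a total
# inductance of a few tens of nH and milliohm resistance, the quarter-period
# current peak lands in the MA range on a ~10 us timescale.
```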

  12. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    SciTech Connect

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; Schmidt, Hauke; Robock, Alan; Kravitz, Benjamin S.; Lamarque, J. F.; Pitari, G.; English, J. M.

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.

  13. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    DOE PAGES

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; ...

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.

  14. Thermography and machine learning techniques for tomato freshness prediction.

    PubMed

    Xie, Jing; Hsieh, Sheng-Jen; Wang, Hong-Jin; Tan, Zuojun

    2016-12-01

    The United States and China are the world's leading tomato producers. Tomatoes account for over $2 billion annually in farm sales in the U.S. Tomatoes also rank as the world's 8th most valuable agricultural product, valued at $58 billion annually, and quality is highly prized. Nondestructive technologies, such as optical inspection and near-infrared spectrum analysis, have been developed to estimate tomato freshness (also known as grades in USDA parlance). However, determining the freshness of tomatoes is still an open problem. This research (1) explains, in principle, why thermography might be able to reveal the internal state of tomatoes and (2) investigates the application of machine learning techniques--artificial neural networks (ANNs) and support vector machines (SVMs)--in combination with transient step heating and thermography for freshness prediction, which refers to how soon the tomatoes will decay. Infrared images were captured at a sampling frequency of 1 Hz during 40 s of heating followed by 160 s of cooling. The temperatures of the acquired images were plotted. Regions with higher temperature differences between fresh and less fresh (rotten within three days) tomatoes of approximately uniform size and shape were used as the input nodes for the ANN and SVM models. The ANN model built using both heating and cooling data performed best, with an overall regression coefficient of 0.99. These results suggest that a combination of infrared thermal imaging and ANN modeling can be used to predict tomato freshness with higher accuracy than SVM models.

  15. Modeling of Carbon Migration During JET Injection Experiments

    SciTech Connect

    Strachan, J. D.; Likonen, J.; Coad, P.; Rubel, M.; Widdowson, A.; Airila, M.; Andrew, P.; Brezinsek, S.; Corrigan, G.; Esser, H. G.; Jachmich, S.; Kallenbach, A.; Kirschner, A.; Kreter, A.; Matthews, G. F.; Philipps, V.; Pitts, R. A.; Spence, J.; Stamp, M.; Wiesen, S.

    2008-10-15

    JET has performed two dedicated carbon migration experiments on the final run day of separate campaigns (2001 and 2004) using ¹³CH₄ methane injected into repeated discharges. The EDGE2D/NIMBUS code modelled the carbon migration in both experiments. This paper describes this modelling and identifies a number of important migration pathways: (1) deposition and erosion near the injection location, (2) migration through the main chamber SOL, (3) migration through the private flux region aided by E x B drifts, and (4) neutral migration originating near the strike points. In H-mode, type I ELMs are calculated to influence the migration by enhancing erosion during the ELM peak and increasing the long-range migration immediately following the ELM. The erosion/re-deposition cycle along the outer target leads to a multistep migration of ¹³C towards the separatrix which is called 'walking'. This walking created carbon neutrals at the outer strike point and led to ¹³C deposition in the private flux region. Although several migration pathways have been identified, quantitative analyses are hindered by experimental uncertainty in divertor leakage, and the lack of measurements at locations such as gaps and shadowed regions.

  16. Evaporation of J13 water: laboratory experiments and geochemical modeling

    SciTech Connect

    Dibley, M.J.; Knauss, K.G.; Rosenberg, N.D.

    1999-08-11

    We report results from experiments on the evaporative chemical evolution of synthetic J13 water, representative of water from well J13, a common reference water in the Yucca Mountain Project. Data include anion and cation analyses and qualitative mineral identification for a series of open-system experiments, with and without crushed tuff present, conducted at sub-boiling temperatures. Ca and Mg precipitated readily as carbonates, while the anions Cl, F, NO₃, and SO₄ remained in solution in nearly identical ratios. The pH stabilized at about 10. After ≈1000x concentration, the minerals formed were amorphous silica, aragonite, and calcite. The presence of tuff appears to have very little effect on the relative distribution of the anions in solution, except possibly for F, which had a relatively lower concentration ratio. Si was lower in the solutions with tuff present, suggesting that the tuff enhances SiO₂ precipitation. Even though the tools for modeling highly concentrated salt solutions are limited, we compare our experimental results with the results of geochemical models, with surprisingly good agreement. In response to different assumed CO₂ levels, pH varied, but anion concentrations were not greatly affected.

  17. Lattice Boltzmann modeling of directional wetting: Comparing simulations to experiments

    NASA Astrophysics Data System (ADS)

    Jansen, H. Patrick; Sotthewes, Kai; van Swigchem, Jeroen; Zandvliet, Harold J. W.; Kooij, E. Stefan

    2013-07-01

Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results, which have shown that water droplets on such surfaces adopt an elongated shape due to anisotropic preferential spreading. Details of the contact line motion such as advancing of the contact line in the direction perpendicular to the stripes exhibit pronounced similarities in experiments and simulations. The opposite of spreading, i.e., evaporation of water droplets, leads to a characteristic receding motion first in the direction parallel to the stripes, while the contact line remains pinned perpendicular to the stripes. Only when the aspect ratio is close to unity does the contact line also start to recede in the perpendicular direction. Very similar behavior was observed in the LBM simulations. Finally, droplet movement can be induced by a gradient in surface wettability. LBM simulations show good semiquantitative agreement with experimental results of decanol droplets on a well-defined striped gradient, which move from high- to low-contact angle surfaces. Similarities and differences for all systems are described and discussed in terms of the predictive capabilities of LBM simulations to model directional wetting.
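The lattice Boltzmann approach referenced above can be illustrated with a minimal single-phase D2Q9 BGK kernel. This is not the authors' code: the actual wetting simulations require a multiphase model and heterogeneous wettability boundary conditions that are omitted here, and the relaxation time `tau` and grid size are arbitrary placeholders.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    feq = np.empty((9,) + rho.shape)
    usq = ux ** 2 + uy ** 2
    for i in range(9):
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)
    return feq

def step(f, tau):
    """One BGK collision + streaming step on a fully periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau            # collide
    for i in range(9):                                      # stream
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f
```

A droplet-on-stripes study would build on such a kernel by adding a second phase and spatially varying wall interaction strengths following the stripe pattern.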

  18. Lattice Boltzmann modeling of directional wetting: comparing simulations to experiments.

    PubMed

    Jansen, H Patrick; Sotthewes, Kai; van Swigchem, Jeroen; Zandvliet, Harold J W; Kooij, E Stefan

    2013-07-01

Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results, which have shown that water droplets on such surfaces adopt an elongated shape due to anisotropic preferential spreading. Details of the contact line motion such as advancing of the contact line in the direction perpendicular to the stripes exhibit pronounced similarities in experiments and simulations. The opposite of spreading, i.e., evaporation of water droplets, leads to a characteristic receding motion first in the direction parallel to the stripes, while the contact line remains pinned perpendicular to the stripes. Only when the aspect ratio is close to unity does the contact line also start to recede in the perpendicular direction. Very similar behavior was observed in the LBM simulations. Finally, droplet movement can be induced by a gradient in surface wettability. LBM simulations show good semiquantitative agreement with experimental results of decanol droplets on a well-defined striped gradient, which move from high- to low-contact angle surfaces. Similarities and differences for all systems are described and discussed in terms of the predictive capabilities of LBM simulations to model directional wetting.

  19. Computer modeling of a three-dimensional steam injection experiment

    SciTech Connect

    Joshi, S.; Castanier, L.M.

    1993-08-01

    The experimental results and CT scans obtained during a steam-flooding experiment with the SUPRI 3-D steam injection laboratory model are compared with the results obtained from a numerical simulator for the same experiment. Simulation studies were carried out using the STARS (Steam and Additives Reservoir Simulator) compositional simulator. The saturation and temperature distributions obtained and heat-loss rates measured in the experimental model at different stages of steam-flooding were compared with those calculated from the numerical simulator. There is a fairly good agreement between the experimental results and the simulator output. However, the experimental scans show a greater degree of gravity override than that obtained with the simulator for the same heat-loss rates. Symmetric sides of the experimental 5-spot show asymmetric heat-loss rates contrary to theory and simulator results. Some utility programs have been written for extracting, processing and outputting the required grid data from the STARS simulator. These are general in nature and can be useful for other STARS users.

  20. Space Weathering of Olivine: Samples, Experiments and Modeling

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Berger, E. L.; Christoffersen, R.

    2016-01-01

Olivine is a major constituent of chondritic bodies and its response to space weathering processes likely dominates the optical properties of asteroid regoliths (e.g. S- and many C-type asteroids). Analyses of olivine in returned samples and laboratory experiments provide details and insights regarding the mechanisms and rates of space weathering. Analyses of olivine grains from lunar soils and asteroid Itokawa reveal that they display solar wind damaged rims that are typically not amorphized despite long surface exposure ages, which are inferred from solar flare track densities (up to 10{sup 7} y). The olivine damaged rim width rapidly approaches approximately 120 nm in approximately 10{sup 6} y and then reaches steady-state with longer exposure times. The damaged rims are nanocrystalline with high dislocation densities, but crystalline order exists up to the outermost exposed surface. Sparse nanophase Fe metal inclusions occur in the damaged rims and are believed to be produced during irradiation through preferential sputtering of oxygen from the rims. The observed space weathering effects in lunar and Itokawa olivine grains are difficult to reconcile with laboratory irradiation studies and our numerical models, which indicate that olivine surfaces should readily blister and amorphize on relatively short time scales (less than 10{sup 3} y). These results suggest that it is not the ion fluence alone, but another variable, the ion flux, that controls the type and extent of irradiation damage that develops in olivine. This flux dependence argues for caution in extrapolating between high flux laboratory experiments and the natural case. Additional measurements, experiments, and modeling are required to resolve the discrepancies among the observations and calculations involving solar wind processing of olivine.
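The reported rim-growth behavior (rapid approach to ~120 nm by ~10{sup 6} y, then steady state) can be summarized with a simple saturating-growth curve. The exponential functional form and the timescale `tau` below are illustrative assumptions, not values fitted to the record.

```python
import numpy as np

def rim_width(t_yr, w_ss=120.0, tau=3.0e5):
    """Damaged-rim width (nm) versus exposure time (years), assuming
    first-order saturation toward the observed steady-state width of
    ~120 nm. The timescale tau is a placeholder chosen so growth is
    nearly complete by ~1e6 yr, consistent with the sample observations."""
    return w_ss * (1.0 - np.exp(-np.asarray(t_yr) / tau))
```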

  1. Finite Element Modelling of the Apollo Heat Flow Experiments

    NASA Astrophysics Data System (ADS)

    Platt, J.; Siegler, M. A.; Williams, J.

    2013-12-01

The heat flow experiments sent on Apollo missions 15 and 17 were designed to measure the temperature gradient of the lunar regolith in order to determine the heat flux of the moon. Major problems in these experiments arose from the fact that the astronauts were not able to insert the probes below the thermal skin depth. Compounding the problem, anomalies in the data have prevented scientists from conclusively determining the temperature dependent conductivity of the soil, which enters as a linear function into the heat flow calculation, thus stymieing them in their primary goal of constraining the global heat production of the Moon. Different methods of determining the thermal conductivity have yielded vastly different results, resulting in downward corrections of up to 50% in some cases from the original calculations. Along with problems determining the conductivity, the data were inconsistent with theoretical predictions of the temperature variation over time, leading some to suspect that the Apollo experiment itself changed the thermal properties of the localised area surrounding the probe. The average temperature of the regolith, according to the data, increased over time, a phenomenon that makes calculating the thermal conductivity of the soil and heat flux impossible without knowing the source of error and accounting for it. The changes, possibly resulting from sources as varied as the imprint of the astronauts' boots on the lunar surface, compacted soil around the bore stem of the probe, or even heat radiating down the inside of the tube, have convinced many people that the recorded data are unusable. In order to shed some light on the possible causes of this temperature rise, we implemented a finite element model of the probe using the program COMSOL Multiphysics as well as Matlab. Once the cause of the temperature rise is known, steps can be taken to account for the failings of the experiment and increase the data's utility.
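The core of any such thermal model is 1D conduction through regolith with temperature-dependent conductivity. The sketch below uses the commonly assumed radiative form k(T) = k_c (1 + chi (T/350)^3); the parameter values k_c, chi, and the volumetric heat capacity are placeholders, not the values used in the COMSOL model.

```python
import numpy as np

def k_regolith(T, kc=1.0e-3, chi=2.7):
    """Temperature-dependent regolith conductivity (W/m/K), using the
    commonly assumed radiative form; kc and chi are placeholder values."""
    return kc * (1.0 + chi * (T / 350.0) ** 3)

def step_temperature(T, dz, dt, rho_cp=1.2e6):
    """One explicit finite-difference step of 1D conduction with
    variable conductivity; both boundary temperatures held fixed."""
    k_half = 0.5 * (k_regolith(T[1:]) + k_regolith(T[:-1]))  # interface k
    flux = -k_half * np.diff(T) / dz                          # heat flux, W/m^2
    Tn = T.copy()
    Tn[1:-1] -= dt / (rho_cp * dz) * np.diff(flux)            # energy balance
    return Tn
```

A probe-induced disturbance could be studied by adding a heat source term at the node representing the bore stem and comparing the relaxation against the recorded drift.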

  2. Micromagnetics of Co-based media: Experiment and model (invited)

    NASA Astrophysics Data System (ADS)

    McFadyen, I. R.; Beardsley, I. A.

    1990-05-01

The micromagnetics of Co86Cr14 thin-film longitudinal recording media has been investigated using Lorentz transmission electron microscopy, in the Fresnel and differential phase contrast (DPC) modes. The media studied were 350-Å-thick Co86Cr14 on a 200-Å Cr layer, both of which were rf sputtered onto Si3N4 membranes for ease of observation in the TEM. Different magnetization states were investigated by imaging the sample at various points in the remanent hysteresis loop. This allowed direct comparison between experimental conditions and a micromagnetic computer model which assumes the media to be an array of single-domain particles, interacting via magnetostatics and moderate intergranular exchange [J.-G. Zhu and H. N. Bertram, J. Appl. Phys. 63, 3248 (1988)]. The DPC imaging technique allows vector maps and two-dimensional histograms of the integrated in-plane magnetic induction to be obtained at each magnetization state for comparison with the computer model. The scale and displacement of magnetization vortices and ripple between different states were also investigated. Both experiment and model show increasing dispersion with increasing reversal field and motion of magnetization vortices. Comparison, using the model, of individual magnetization states in an applied field and at remanence indicates a strong influence of stray fields from the media on the DPC image contrast.

  3. Produced water re-injection in a non-fresh water aquifer with geochemical reaction, hydrodynamic molecular dispersion and adsorption kinetics controlling: model development and numerical simulation

    NASA Astrophysics Data System (ADS)

    Obe, Ibidapo; Fashanu, T. A.; Idialu, Peter O.; Akintola, Tope O.; Abhulimen, Kingsley E.

    2016-12-01

An improved produced water reinjection (PWRI) model that incorporates filtration, geochemical reaction, molecular transport, and mass adsorption kinetics was developed to predict cake deposition and injectivity performance in hydrocarbon aquifers in Nigeria oil fields. The improved PWRI model considered the contributions of geochemical reaction, adsorption kinetics, and the hydrodynamic molecular dispersion mechanism in altering the injectivity and the deposition of suspended solids on the aquifer wall, resulting in cake formation in pores during PWRI and the transport of active constituents in hydrocarbon reservoirs. The injectivity decline and cake deposition for specific case studies of hydrocarbon aquifers in Nigeria oil fields were characterized with respect to their well geometry, lithology, and calibration data, and simulated in the COMSOL Multiphysics software environment. The PWRI model was validated by comparisons to assessments of previous field studies based on data and results supplied by the operator and regulator. The simulation results showed that PWRI performance was altered because of temporal variations and declines of permeability, injectivity, and cake precipitation, which were observed to be dependent on active adsorption and geochemical reaction kinetics coupled with the filtration scheme and molecular dispersion. From the observed results and findings, the transition time t{sub r} to cake nucleation and growth was dependent on aquifer constituents, well capacity, filtration coefficients, particle-to-grain size ratio, water quality, and, more importantly, particle-to-grain adsorption kinetics. Thus, the results showed that injectivity decline and permeability damage were direct contributions of geochemical reaction, hydrodynamic molecular diffusion, and adsorption kinetics to the internal filtration mechanism, which are largely dependent on the initial concentration of active constituents of the produced water and on aquifer capacity.
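The coupling of internal filtration to injectivity decline can be sketched with a classical exponential deposition profile and a linear permeability-damage law. This is a strong simplification of the paper's model (no geochemical reaction, dispersion, or adsorption kinetics), and every parameter below is an illustrative placeholder, not a field-calibrated value.

```python
import numpy as np

def injectivity_decline(t_s, c0=0.05, lam=2.0, u=1.0e-4, beta=50.0,
                        length=1.0, n=200):
    """Normalized injectivity index II(t)/II(0) under classical deep-bed
    filtration: particles deposit as sigma(x,t) = lam*u*c0*exp(-lam*x)*t,
    and local permeability is damaged as k/k0 = 1/(1 + beta*sigma).
    The injectivity ratio follows from summing local flow resistances."""
    x = np.linspace(0.0, length, n)
    sigma = lam * u * c0 * np.exp(-lam * x) * t_s     # deposited volume fraction
    inv_k = 1.0 + beta * sigma                        # local k0/k resistance factor
    dx = x[1] - x[0]
    integral = np.sum(0.5 * (inv_k[1:] + inv_k[:-1])) * dx  # trapezoid rule
    return length / integral
```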

  4. Real-data Calibration Experiments On A Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Brath, A.; Montanari, A.; Toth, E.

The increasing availability of extended information on the study watersheds does not generally overcome the need for the determination through calibration of at least a part of the parameters of distributed hydrologic models. The complexity of such models, making the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied on the real data of a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by calibrating the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream-gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using the data from several single flood events and from longer, continuous periods. Another analysis regards the influence of rainfall input and is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison of the hydrographs obtained for the flood events with the model parameterisations resulting when modifying the objective function to be minimised in the automatic calibration procedure is presented.

  5. Thermodynamic model of Mars Oxygen ISRU Experiment (MOXIE)

    NASA Astrophysics Data System (ADS)

    Meyen, Forrest E.; Hecht, Michael H.; Hoffman, Jeffrey A.

    2016-12-01

As humankind expands its footprint in the solar system, it is increasingly important to make use of the resources already in our solar system to make these missions economically feasible and sustainable. In-Situ Resource Utilization (ISRU), the science of using resources at a destination to support exploration missions, unlocks potential destinations by significantly reducing the amount of resources that need to be launched from Earth. Carbon dioxide is an example of an in-situ resource that comprises 96% of the Martian atmosphere and can be used as a source of oxygen for propellant and life support systems. The Mars Oxygen ISRU Experiment (MOXIE) is a payload being developed for NASA's upcoming Mars 2020 rover. MOXIE will produce oxygen from the Martian atmosphere using solid oxide electrolysis (SOXE). MOXIE is approximately a 1% scale model of an oxygen processing plant that might enable a human expedition to Mars in the 2030s through the production of the oxygen needed for the propellant of a Mars ascent vehicle. MOXIE is essentially an energy conversion system that draws energy from the Mars 2020 rover's radioisotope thermoelectric generator and ultimately converts it to stored energy in oxygen and carbon monoxide molecules. A thermodynamic model of this novel system is used to understand this process in order to derive operating parameters for the experiment. This paper specifically describes the model of the SOXE component. Assumptions and idealizations are addressed, including 1D and 2D simplifications. Operating points are discussed, as well as the impacts of flow rates on production.
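The energy-conversion view of the SOXE stack follows directly from Faraday's law: the electrolysis reaction CO{sub 2} -> CO + 1/2 O{sub 2} transfers four electrons per O{sub 2} molecule, so oxygen production scales linearly with stack current. The sketch below shows this relation; the current value used in the example is arbitrary, not a MOXIE operating point.

```python
F_CONST = 96485.33  # Faraday constant, C per mol of electrons
M_O2 = 31.998       # molar mass of O2, g/mol

def o2_production_rate(stack_current_a):
    """Oxygen mass production rate (g/hr) of a solid oxide electrolysis
    stack, from Faraday's law: 4 electrons per O2 molecule
    (2 per oxide ion, 2 oxide ions per O2)."""
    mol_o2_per_s = stack_current_a / (4.0 * F_CONST)
    return mol_o2_per_s * M_O2 * 3600.0
```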

  6. Influence of fresh date palm co-products on the ripening of a paprika added dry-cured sausage model system.

    PubMed

    Martín-Sánchez, Ana María; Ciro-Gómez, Gelmy; Vilella-Esplá, José; Ben-Abda, Jamel; Pérez-Álvarez, José Ángel; Sayas-Barberá, Estrella

    2014-06-01

    Date palm co-products are a source of bioactive compounds that could be used as a new ingredient for the meat industry. An intermediate food product (IFP) from date palm co-products (5%) was incorporated into a paprika added dry-cured sausage (PADS) model system and was analysed for physicochemical parameters, lipid oxidation and sensory attributes during ripening. Addition of 5% IFP yielded a product with physicochemical properties similar to the traditional one. Instrumental colour differences were found, but were not detected visually by panellists, who also evaluated positively the sensory properties of the PADS with IFP. Therefore, the IFP from date palm co-products could be used as a natural ingredient in the formulation of PADS.

  7. Hyperspectral imaging technique for determination of pork freshness attributes

    NASA Astrophysics Data System (ADS)

    Li, Yongyu; Zhang, Leilei; Peng, Yankun; Tang, Xiuying; Chao, Kuanglin; Dhakal, Sagar

    2011-06-01

Freshness of pork is an important quality attribute, which can vary greatly in storage and logistics. The specific objectives of this research were to develop a hyperspectral imaging system to predict pork freshness based on quality attributes such as total volatile basic-nitrogen (TVB-N), pH value and color parameters (L*, a*, b*). Pork samples were packed in sealed plastic bags and then stored at 4°C. Every 12 hours, hyperspectral scattering images were collected from the pork surface over the range of 400 nm to 1100 nm. Two different methods were used to extract scattering feature spectra from the hyperspectral scattering images. First, the spectral scattering profiles at individual wavelengths were fitted accurately by a three-parameter Lorentzian distribution (LD) function; second, reflectance spectra were extracted from the scattering images. The Partial Least Squares Regression (PLSR) method was used to establish models to predict pork freshness. The results showed that the PLSR models based on reflectance spectra were better than those based on combinations of LD "parameter spectra" in predicting TVB-N, with a correlation coefficient (r) = 0.90 and a standard error of prediction (SEP) = 7.80 mg/100g. Moreover, a prediction model for pork freshness was established by using a combination of TVB-N, pH and color parameters. It gave good prediction results, with r = 0.91 for pork freshness. The research demonstrated that the hyperspectral scattering technique is a valid tool for real-time and nondestructive detection of pork freshness.
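The first feature-extraction step described above, fitting each scattering profile with a three-parameter Lorentzian distribution function, can be sketched as follows. The specific functional form and the synthetic data are assumptions for illustration; the measured pork spectra and the subsequent PLSR modeling step are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, b, c):
    """Three-parameter Lorentzian profile: asymptotic value a,
    peak height b above it, and half-width parameter c."""
    return a + b / (1.0 + (x / c) ** 2)

# Fit a synthetic scattering profile (placeholder data, not pork spectra)
x = np.linspace(-10.0, 10.0, 201)
rng = np.random.default_rng(0)
y = lorentzian(x, 0.1, 0.8, 2.5) + 0.005 * rng.standard_normal(x.size)

popt, _ = curve_fit(lorentzian, x, y, p0=(0.0, 1.0, 1.0))
```

In the study's pipeline, the fitted (a, b, c) at each wavelength would form the "parameter spectra" fed into the PLSR regression against TVB-N, pH, and color.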

  8. The Mechanics of Neutrophils: Synthetic Modeling of Three Experiments

    PubMed Central

    Herant, Marc; Marganski, William A.; Dembo, Micah

    2003-01-01

    Much experimental data exist on the mechanical properties of neutrophils, but so far, they have mostly been approached within the framework of liquid droplet models. This has two main drawbacks: 1), It treats the cytoplasm as a single phase when in reality, it is a composite of cytosol and cytoskeleton; and 2), It does not address the problem of active neutrophil deformation and force generation. To fill these lacunae, we develop here a comprehensive continuum-mechanical paradigm of the neutrophil that includes proper treatment of the membrane, cytosol, and cytoskeleton components. We further introduce two models of active force production: a cytoskeletal swelling force and a polymerization force. Armed with these tools, we present computer simulations of three classic experiments: the passive aspiration of a neutrophil into a micropipette, the active extension of a pseudopod by a neutrophil exposed to a local stimulus, and the crawling of a neutrophil inside a micropipette toward a chemoattractant against a varying counterpressure. Principal results include: 1), Membrane cortical tension is a global property of the neutrophil that is affected by local area-increasing shape changes. We argue that there exists an area dilation viscosity caused by the work of unfurling membrane-storing wrinkles and that this viscosity is responsible for much of the regulation of neutrophil deformation. 2), If there is no swelling force of the cytoskeleton, then it must be endowed with a strong cohesive elasticity to prevent phase separation from the cytosol during vigorous suction into a capillary tube. 3), We find that both swelling and polymerization force models are able to provide a unifying fit to the experimental data for the three experiments. However, force production required in the polymerization model is beyond what is expected from a simple short-range Brownian ratchet model. 
4), It appears that, in the crawling of neutrophils or other amoeboid cells inside a micropipette

  9. Microcosm Experiments and Modeling of Microbial Movement Under Unsaturated Conditions

    SciTech Connect

    Brockman, F.J.; Kapadia, N.; Williams, G.; Rockhold, M.

    2006-04-05

Colonization of bacteria in porous media has been studied primarily in saturated systems. In this study we examine how microbial colonization in unsaturated porous media is controlled by water content and particle size. This is important for understanding the feasibility and success of bioremediation via nutrient delivery when contaminant degraders are at low densities and when total microbial populations are sparse and spatially discontinuous. The study design used 4 different sand sizes, each at 4 different water contents; experiments were run with and without acetate as the sole carbon source. All experiments were run in duplicate columns and used the motile organism Pseudomonas stutzeri strain KC, a carbon tetrachloride degrader. At a given sand size, bacteria traveled further with increasing volumetric water content. At a given volumetric water content, bacteria generally traveled further with increasing sand size. Water redistribution, solute transport, gas diffusion, and bacterial colonization dynamics were simulated using a numerical finite-difference model. Solute and bacterial transport were modeled using advection-dispersion equations, with reaction rate source/sink terms to account for bacterial growth and substrate utilization, represented using dual Monod-type kinetics. Oxygen transport and diffusion were modeled, accounting for equilibrium partitioning between the aqueous and gas phases. The movement of bacteria in the aqueous phase was modeled using a linear impedance model in which the term D{sub m}, as used by Barton and Ford (1995), is a coefficient representing random motility. The unsaturated random motility coefficients we obtained (1.4 x 10{sup -6} to 2.8 x 10{sup -5} cm{sup 2}/sec) are in the same range as those found by others for saturated systems (3.5 x 10{sup -6} to 3.5 x 10{sup -5} cm{sup 2}/sec). The results show that some bacteria can rapidly migrate in well sorted unsaturated sands (and perhaps in relatively high porosity, poorly
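Treating random motility as a diffusion process gives a quick feel for the coefficients quoted above: the characteristic spread distance scales as sqrt(2 D{sub m} t). The one-week horizon below is an arbitrary choice, and the scaling ignores growth, chemotaxis, and substrate coupling, all of which the full advection-dispersion model includes.

```python
import math

def migration_distance_cm(d_m_cm2_per_s, t_s):
    """Characteristic spread (cm) of a bacterial population by random
    motility alone, treating motility as diffusion: L ~ sqrt(2 * D_m * t)."""
    return math.sqrt(2.0 * d_m_cm2_per_s * t_s)

one_week_s = 7 * 86400.0
lo = migration_distance_cm(1.4e-6, one_week_s)  # lower reported D_m: ~1.3 cm/week
hi = migration_distance_cm(2.8e-5, one_week_s)  # upper reported D_m: ~5.8 cm/week
```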

  10. The mechanics of neutrophils: synthetic modeling of three experiments.

    PubMed

    Herant, Marc; Marganski, William A; Dembo, Micah

    2003-05-01

    Much experimental data exist on the mechanical properties of neutrophils, but so far, they have mostly been approached within the framework of liquid droplet models. This has two main drawbacks: 1), It treats the cytoplasm as a single phase when in reality, it is a composite of cytosol and cytoskeleton; and 2), It does not address the problem of active neutrophil deformation and force generation. To fill these lacunae, we develop here a comprehensive continuum-mechanical paradigm of the neutrophil that includes proper treatment of the membrane, cytosol, and cytoskeleton components. We further introduce two models of active force production: a cytoskeletal swelling force and a polymerization force. Armed with these tools, we present computer simulations of three classic experiments: the passive aspiration of a neutrophil into a micropipette, the active extension of a pseudopod by a neutrophil exposed to a local stimulus, and the crawling of a neutrophil inside a micropipette toward a chemoattractant against a varying counterpressure. Principal results include: 1), Membrane cortical tension is a global property of the neutrophil that is affected by local area-increasing shape changes. We argue that there exists an area dilation viscosity caused by the work of unfurling membrane-storing wrinkles and that this viscosity is responsible for much of the regulation of neutrophil deformation. 2), If there is no swelling force of the cytoskeleton, then it must be endowed with a strong cohesive elasticity to prevent phase separation from the cytosol during vigorous suction into a capillary tube. 3), We find that both swelling and polymerization force models are able to provide a unifying fit to the experimental data for the three experiments. However, force production required in the polymerization model is beyond what is expected from a simple short-range Brownian ratchet model. 
4), It appears that, in the crawling of neutrophils or other amoeboid cells inside a micropipette

  11. Edge effect modeling and experiments on active lap processing.

    PubMed

    Liu, Haitao; Wu, Fan; Zeng, Zhige; Fan, Bin; Wan, Yongjian

    2014-05-05

Edge effect is regarded as one of the most difficult technical issues for fabricating large primary mirrors, especially for large polishing tools. Computer controlled active lap (CCAL) uses a large pad (e.g., 1/3 to 1/5 of the workpiece diameter) to grind and polish the primary mirror. Edge effects were also observed in the CCAL process in our previous fabrication work. In this paper, the material removal behavior when edge effects occur (i.e., the edge tool influence functions (TIFs)) is obtained through experiments carried out on a Φ1090-mm circular flat mirror with a 375-mm-diameter lap. Two methods are proposed to model the edge TIFs for CCAL. One adopts a pressure distribution calculated with the finite element analysis method. The other builds a parametric equivalent pressure model to fit the removed-material curve directly. Experimental results show that both methods effectively model the edge TIF of CCAL.

  12. Atmospheric Climate Model Experiments Performed at Multiple Horizontal Resolutions

    SciTech Connect

    Phillips, T; Bala, G; Gleckler, P; Lobell, D; Mirin, A; Maxwell, R; Rotman, D

    2007-12-21

    This report documents salient features of version 3.3 of the Community Atmosphere Model (CAM3.3) and of three climate simulations in which the resolution of its latitude-longitude grid was systematically increased. For all these simulations of global atmospheric climate during the period 1980-1999, observed monthly ocean surface temperatures and sea ice extents were prescribed according to standard Atmospheric Model Intercomparison Project (AMIP) values. These CAM3.3 resolution experiments served as control runs for subsequent simulations of the climatic effects of agricultural irrigation, the focus of a Laboratory Directed Research and Development (LDRD) project. The CAM3.3 model was able to replicate basic features of the historical climate, although biases in a number of atmospheric variables were evident. Increasing horizontal resolution also generally failed to ameliorate the large-scale errors in most of the climate variables that could be compared with observations. A notable exception was the simulation of precipitation, which incrementally improved with increasing resolution, especially in regions where orography plays a central role in determining the local hydroclimate.

  13. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  14. Influence of resuscitation fluids, fresh frozen plasma and antifibrinolytics on fibrinolysis in a thrombelastography-based, in-vitro, whole-blood model.

    PubMed

    Kostousov, Vadim; Wang, Yao-Wei W; Cotton, Bryan A; Wade, Charles E; Holcomb, John B; Matijevic, Nena

    2013-07-01

Hyperfibrinolysis has been identified as a mechanism of trauma coagulopathy associated with poor outcome. The aim of the study was to create a trauma coagulopathy model (TCM) with a hyperfibrinolysis thrombelastography (TEG) pattern similar to injured patients and test the effects of different resuscitation fluids and antifibrinolytics on fibrinolysis. TCM was established from whole blood by either 15% dilution with isotonic saline, lactated Ringer's, Plasma-Lyte, 5% albumin, Voluven, Hextend, 6% dextran in isotonic saline or 30% dilution with lactated Ringer's plus Voluven and supplementation with tissue factor and tissue plasminogen activator (tPA). These combinations resulted in a TCM that could then be 'treated' with tranexamic acid (TXA) or 6-aminocaproic acid (ACA). Clot formation was evaluated by TEG. Whole-blood dilution by 15% with crystalloids and albumin in the presence of tissue factor plus tPA resulted in an abnormal TEG pattern and increased fibrinolysis, as did dilution with synthetic colloids. TXA 1 μg/ml or ACA 10 μg/ml were sufficient to suppress fibrinolysis when TCM was diluted 15% with lactated Ringer's, but 3 μg/ml of TXA or 30 μg/ml of ACA were needed for inhibition of the fibrinolysis induced by simultaneous euvolemic dilution with lactated Ringer's plus Voluven by 30%. A 15% dilution of whole blood in the presence of tissue factor plus tPA results in a hyperfibrinolysis TEG pattern similar to that observed in severely injured patients. Synthetic colloids worsen TEG variables with a further increase of fibrinolysis. Low concentrations of TXA or ACA reversed hyperfibrinolysis, but the efficient concentrations were dependent on the degree of fibrinolysis and whole-blood dilution.

  15. Development of Supersonic Combustion Experiments for CFD Modeling

    NASA Technical Reports Server (NTRS)

    Baurle, Robert; Bivolaru, Daniel; Tedder, Sarah; Danehy, Paul M.; Cutler, Andrew D.; Magnotti, Gaetano

    2007-01-01

    This paper describes the development of an experiment to acquire data for developing and validating computational fluid dynamics (CFD) models for turbulence in supersonic combusting flows. The intent is that the flow field be simple yet relevant to flows within hypersonic air-breathing engine combustors undergoing testing in vitiated-air ground-testing facilities. Specifically, it describes the development of laboratory-scale hardware to produce a supersonic combusting coaxial jet and discusses design calculations, operability, and the types of flames observed. These flames are studied using the combined dual-pump coherent anti-Stokes Raman spectroscopy (CARS) and interferometric Rayleigh scattering (IRS) technique. This technique simultaneously and instantaneously measures temperature, composition, and velocity in the flow, from which many of the important turbulence statistics can be found. Some preliminary CARS data are presented.

  16. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    NASA Technical Reports Server (NTRS)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
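
    The inverse-distance dependence noted above lends itself to a simple numerical sketch. The snippet below (illustrative parameters only, not values from the ISS experiment) integrates the radial equation of motion m r'' = -k/r - c r', treating the needle near its center as an infinite line charge whose attractive force scales as 1/r:

```python
import numpy as np

def simulate_radial_motion(r0, v0, k=1e-9, m=1e-6, drag=0.0, dt=1e-4, steps=5000):
    """Explicit-Euler integration of m*r'' = -k/r - drag*r'.
    k lumps the charge product and permittivity; all values are illustrative."""
    r, v = r0, v0
    traj = [r]
    for _ in range(steps):
        a = (-k / r - drag * v) / m   # attractive line-charge force ~ 1/r, plus drag
        v += a * dt
        r += v * dt
        if r <= 1e-4:                 # stop before the droplet reaches the needle
            break
        traj.append(r)
    return np.array(traj)

traj = simulate_radial_motion(r0=0.05, v0=0.0)
```

    Starting from rest, the droplet drifts inward toward the needle; adding a nonzero `drag` term damps the motion, qualitatively mimicking the atmospheric-drag effect described in the abstract.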

  17. Atomic detection in microwave cavity experiments: A dynamical model

    SciTech Connect

    Rossi, R. Jr.; Nemes, M. C.; Peixoto de Faria, J. G.

    2007-06-15

    We construct a model for atomic detection in the context of cavity quantum electrodynamics (QED) used to study coherence properties of superpositions of states of an electromagnetic mode. Analytic expressions for the atomic ionization are obtained, considering the imperfections of the measurement process due to the probabilistic nature of the interactions between the ionization field and the atoms. We provide dynamical content for the available expressions for the counting rates, considering the limited efficiency of detectors. Moreover, we include false counts. The influence of these imperfections on the information about the state of the cavity mode is obtained. To test the adequacy of our approach, we investigate a recent experiment reported by Maitre et al. [X. Maitre et al., Phys. Rev. Lett. 79, 769 (1997)] and obtain excellent agreement with the experimental results.

  18. Numerically Modeling Pulsed-Current, Kinked Wire Experiments

    NASA Astrophysics Data System (ADS)

    Filbey, Gordon; Kingman, Pat

    1999-06-01

    The U.S. Army Research Laboratory (ARL) has embarked on a program to provide far-term land fighting vehicles with electromagnetic armor protection. Part of this work seeks to establish robust simulations of magneto-solid-mechanics phenomena. Whether describing violent rupture of a fuse link resulting from a large current pulse or the complete disruption of a copper shaped-charge jet subjected to high current densities, the simulations must include effects of intense Lorentz body forces and rapid Ohmic heating. Material models are required that describe plasticity, flow and fracture, conductivity, and equation of state (EOS) parameters for media in solid, liquid, and vapor phases. An extended version of the Eulerian wave code CTH has been used to predict the apex motion of a V-shaped (``kinked'') copper wire 3mm in diameter during a 400 kilo-amp pulse. These predictions, utilizing available material, EOS, and conductivity data for copper and the known characteristics of an existing capacitor-bank pulsed power supply, were then used to configure an experiment. The experiments were in excellent agreement with the prior simulations. Both computational and experimental results (including electrical data and flash X-rays) will be presented.

  19. Experiments and modeling of iron-particle-filled magnetorheological elastomers

    NASA Astrophysics Data System (ADS)

    Danas, K.; Kankanala, S. V.; Triantafyllidis, N.

    2012-01-01

    Magnetorheological elastomers (MREs) are ferromagnetic particle impregnated rubbers whose mechanical properties are altered by the application of external magnetic fields. Due to their coupled magnetoelastic response, MREs are finding an increasing number of engineering applications. In this work, we present a combined experimental and theoretical study of the macroscopic response of a particular MRE consisting of a rubber matrix phase with spherical carbonyl iron particles. The MRE specimens used in this work are cured in the presence of strong magnetic fields leading to the formation of particle chain structures and thus to an overall transversely isotropic composite. The MRE samples are tested experimentally under uniaxial stresses as well as under simple shear in the absence or in the presence of magnetic fields and for different initial orientations of their particle chains with respect to the mechanical and magnetic loading direction. Using the theoretical framework for finitely strained MREs introduced by Kankanala and Triantafyllidis (2004), we propose a transversely isotropic energy density function that is able to reproduce the experimentally measured magnetization, magnetostriction and simple shear curves under different prestresses, initial particle chain orientations and magnetic fields. Microscopic mechanisms are also proposed to explain (i) the counterintuitive effect of dilation under zero or compressive applied mechanical loads for the magnetostriction experiments and (ii) the importance of a finite strain constitutive formulation even at small magnetostrictive strains. The model gives an excellent agreement with experiments for relatively moderate magnetic fields but has also been satisfactorily extended to include magnetic fields near saturation.

  20. How Magnus Bends the Flying Ball - Experimenting and Modeling

    NASA Astrophysics Data System (ADS)

    Timková, V.; Ješková, Z.

    2017-02-01

    Students are well aware of the deflection of sports balls that have been given spin. A volleyball, tennis ball, or table tennis ball served with topspin experiences an additional downward force that makes it difficult to catch and return. In soccer, sidespin causes the ball to curve unexpectedly sideways, resulting in the so-called banana kick that can confuse the goalkeeper. These surprising effects can be used to capture students' interest in the physics behind them. However, studying and analyzing the motion of a real ball kicked on a playing field is not an easy task. Instead of full-size sports ball motion, simpler experiments can be designed and carried out in the classroom. Moreover, digital technologies available at schools enable students to collect experimental data easily and in a reasonable time. A mathematical model based on an analysis of the forces acting on the ball in flight can then be used to simulate the motion, find the best correspondence with the data, and help students understand the basic physical principles involved.
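
    The force model mentioned above (gravity, quadratic drag, and a Magnus force perpendicular to the velocity) can be sketched in a few lines. The parameter values below are rough table-tennis-scale guesses, not fitted to any experiment:

```python
import numpy as np

# Illustrative 2-D flight of a spinning ball: gravity + quadratic drag + Magnus lift.
rho, m, r = 1.2, 0.0027, 0.02            # air density (kg/m^3), mass (kg), radius (m)
A = np.pi * r**2                         # cross-sectional area
Cd, Cl = 0.5, 0.25                       # assumed drag and lift coefficients
g = np.array([0.0, -9.81])

def step(pos, vel, spin_sign, dt=1e-3):
    """One explicit-Euler step; spin_sign=-1 models topspin, +1 backspin, 0 no spin."""
    speed = np.linalg.norm(vel)
    drag = -0.5 * rho * Cd * A * speed * vel / m
    perp = np.array([-vel[1], vel[0]]) / speed   # unit vector perpendicular to velocity
    magnus = spin_sign * 0.5 * rho * Cl * A * speed**2 * perp / m
    vel = vel + (g + drag + magnus) * dt
    pos = pos + vel * dt
    return pos, vel

def range_of_flight(spin_sign):
    """Horizontal distance travelled before the ball returns to the ground."""
    pos, vel = np.array([0.0, 1.0]), np.array([8.0, 1.0])
    while pos[1] > 0.0:
        pos, vel = step(pos, vel, spin_sign)
    return pos[0]
```

    With these assumed coefficients, topspin shortens the flight and backspin lengthens it, matching the qualitative behavior described in the abstract.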

  1. Numerical Modelling of the Deep Impact Mission Experiment

    NASA Technical Reports Server (NTRS)

    Wuennemann, K.; Collins, G. S.; Melosh, H. J.

    2005-01-01

    NASA's Deep Impact mission (launched January 2005) will provide, for the first time ever, insights into the interior of a comet (Tempel 1) by shooting an approximately 370 kg projectile onto the surface of the comet's nucleus. Although it is usually assumed that comets consist of a very porous mixture of water ice and rock, little is known about the internal structure and in particular the constitutive material properties of a comet. It is therefore difficult to predict the dimensions of the excavated crater. Estimates of the crater size are based on laboratory experiments of impacts into various target compositions of different densities and porosities using appropriate scaling laws; they range from tens of meters up to 250 m in diameter [1]. The size of the crater depends mainly on the physical process(es) that govern formation: smaller sizes are expected if (1) strength, rather than gravity, limits crater growth and, perhaps even more crucially, if (2) internal energy losses by pore-space collapse reduce the coupling efficiency (compaction craters). To investigate the effects of pore-space collapse and target strength, we conducted a suite of numerical experiments and implemented a novel approach for modeling porosity and the compaction of pores in hydrocode calculations.

  2. Ignition and Growth Modeling of LX-17 Hockey Puck Experiments

    SciTech Connect

    Tarver, C M

    2004-04-19

    Detonating solid plastic bonded explosives (PBX) formulated with the insensitive molecule triaminotrinitrobenzene (TATB) exhibit measurable reaction zone lengths, curved shock fronts, and regions of failing chemical reaction at abrupt changes in the charge geometry. A recent set of ''hockey puck'' experiments measured the breakout times of diverging detonation waves in ambient temperature LX-17 (92.5 % TATB plus 7.5% Kel-F binder) and the breakout times at the lower surfaces of 15 mm thick LX-17 discs placed below the detonator-booster plane. The LX-17 detonation waves in these discs grow outward from the initial wave leaving regions of unreacted or partially reacted TATB in the corners of these charges. This new experimental data is accurately simulated for the first time using the Ignition and Growth reactive flow model for LX-17, which is normalized to a great deal of detonation reaction zone, failure diameter and diverging detonation data. A pressure cubed dependence for the main growth of reaction rate yields excellent agreement with experiment, while a pressure squared rate diverges too quickly and a pressure quadrupled rate diverges too slowly in the LX-17 below the booster equatorial plane.
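
    The role of the pressure exponent can be illustrated with a toy integration of a single-term growth law dλ/dt = G(1-λ)p^n. The values of G, p, and the time span below are placeholders, and the full Ignition and Growth model has an ignition term and two calibrated growth terms; this sketch only shows how the exponent changes the integrated reaction at a fixed scaled pressure above unity:

```python
# Toy integration of dlam/dt = G * (1 - lam) * p**n for the reacted fraction lam,
# comparing pressure exponents n. All coefficients are illustrative placeholders.
def reacted_fraction(n, G=1.0, p=1.2, dt=1e-3, t_end=2.0):
    lam, t = 0.0, 0.0
    while t < t_end:
        lam += G * (1.0 - lam) * p**n * dt   # explicit Euler; lam stays below 1
        t += dt
    return lam
```

    For p > 1 in these scaled units, a higher exponent reacts faster; this illustrates only the rate's sensitivity to the exponent, not the geometric wave-divergence behavior discussed in the abstract.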

  3. Dynamical experiments on models of colliding disk galaxies

    NASA Technical Reports Server (NTRS)

    Gerber, Richard A.; Balsara, Dinshaw S.; Lamb, Susan A.

    1990-01-01

    Collisions between galaxies can induce large morphological changes in the participants and, in the case of colliding disk galaxies, bridges and tails are often formed. Observations of such systems indicate a wide variation in color (see Larson and Tinsley, 1978) and that some of the participants are experiencing enhanced rates of star formation, especially in their central regions (Bushouse 1986, 1987; Kennicutt et al., 1987; Bushouse, Lamb, and Werner, 1988). Here the authors describe progress made in understanding some of the dynamics of interacting galaxies using N-body stellar dynamical computer experiments, with the goal of extending these models to include a hydrodynamical treatment of the gas so that a better understanding of globally enhanced star formation will eventually be forthcoming. It was concluded that close interactions between galaxies can produce large perturbations in both density and velocity fields. The authors measured, via computational experiments that represent a galaxy's stars, average radial velocity flows as large as 100 km/sec and 400 percent density increases. These can occur in rings that move outwards through the disk of a galaxy, in roughly homologous inflows toward the nucleus, and in off-center, non-axisymmetric regions. Here the authors illustrate where the gas is likely to flow during the early stages of interaction; in future work they plan to investigate the fate of the gas more realistically by using an N-body/Smoothed Particle Hydrodynamics code to model both the stellar and gaseous components of a disk galaxy during a collision. Specifically, they will determine the locations of enhanced gas density and the strength and location of shock fronts that form during the interaction.

  4. Structure Modeling and Validation applied to Source Physics Experiments (SPEs)

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rowe, C. A.; Patton, H. J.

    2012-12-01

    The U.S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by the explosion source. Three tests, with yields of 100, 1000 and 1000 kg respectively, were detonated in the same emplacement hole and recorded on the same networks of ground motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, with offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations in P-wave arrival times and in the phase velocity, spreading, and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjust it to match time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. We address the shallowest part of the structure using the arrival times recorded by near-field accelerometers residing within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular the anomalously weak amplitude decay in some directions of this topographically complicated locality. We find that a near-surface, thin, weathered layer of varying thickness and low wave speeds plays a major role in the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the 5 radial receiver arrays.

  5. Systematic Study of the Content of Phytochemicals in Fresh and Fresh-Cut Vegetables

    PubMed Central

    Alarcón-Flores, María Isabel; Romero-González, Roberto; Martínez Vidal, José Luis; Garrido Frenich, Antonia

    2015-01-01

    Vegetables and fruits have beneficial properties for human health, because of the presence of phytochemicals, but their concentration can fluctuate throughout the year. A systematic study of the phytochemical content in tomato, eggplant, carrot, broccoli and grape (fresh and fresh-cut) has been performed at different seasons, using liquid chromatography coupled to triple quadrupole mass spectrometry. It was observed that phenolic acids (the predominant group in carrot, eggplant and tomato) were found at higher concentrations in fresh carrot than in fresh-cut carrot. However, in the case of eggplant, they were detected at a higher content in fresh-cut than in fresh samples. Regarding tomato, the differences in the content of phenolic acids between fresh and fresh-cut were lower than in other matrices, except in winter sampling, where this family was detected at the highest concentration in fresh tomato. In grape, the flavonols content (predominant group) was higher in fresh grape than in fresh-cut during all samplings. The content of glucosinolates was lower in fresh-cut broccoli than in fresh samples in winter and spring sampling, although this trend changes in summer and autumn. In summary, phytochemical concentration did show significant differences during one-year monitoring, and the families of phytochemicals presented different behaviors depending on the matrix studied. PMID:26783709

  6. Recent Experiences in Aftershock Hazard Modelling in New Zealand

    NASA Astrophysics Data System (ADS)

    Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.

    2014-12-01

    The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard modelling and forecasting. First was the Canterbury sequence, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. It has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock

  7. Anomalous transport in fracture networks: field scale experiments and modelling

    NASA Astrophysics Data System (ADS)

    Kang, P. K.; Le Borgne, T.; Bour, O.; Dentz, M.; Juanes, R.

    2012-12-01

    Anomalous transport is widely observed in different settings and scales of transport through porous and fractured geologic media. A common signature of anomalous transport is the late-time power law tailing in breakthrough curves (BTCs) during tracer tests. Various conceptual models of anomalous transport have been proposed, including multirate mass transfer, continuous time random walk, and stream tube models. Since different conceptual models can produce equally good fits to a single BTC, tracer test interpretation has been plagued with ambiguity. Here, we propose to resolve such ambiguity by analyzing BTCs obtained from both convergent and push-pull flow configurations at two different fracture planes. We conducted field tracer tests in a fractured granite formation close to Ploemeur, France. We observe that BTC tailing depends on the flow configuration and the injection fracture. Specifically the tailing disappears under push-pull geometry, and when we injected at a fracture with high flux (Figure 1). This indicates that for this fractured granite, BTC tailing is controlled by heterogeneous advection and not by matrix diffusion. To explain the change in tailing behavior for different flow configurations, we employ a simple lattice network model with heterogeneous conductivity distribution. The model assigns random conductivities to the fractures and solves the Darcy equation for an incompressible fluid, enforcing mass conservation at fracture intersections. The mass conservation constraint yields a correlated random flow through the fracture system. We investigate whether BTC tailing can be explained by the spatial distribution of preferential flow paths and stagnation zones, which is controlled by the conductivity variance and correlation length. By combining the results from the field tests and numerical modeling, we show that the reversibility of spreading is a key mechanism that needs to be captured. We also demonstrate the dominant role of the injection
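
    A minimal version of the lattice network model described above can be sketched as follows: random lognormal conductivities are assigned to the links of a regular grid, fixed heads are imposed on two opposite boundaries, and mass conservation at each node yields a linear system for the nodal heads. This is an illustrative sketch of the model class, not the authors' code, and the grid size and conductivity variance are placeholders:

```python
import numpy as np

def lattice_flow(n=10, sigma_lnK=1.0, seed=0):
    """Solve steady incompressible flow on an n x n lattice with random
    lognormal link conductivities; sigma_lnK controls the heterogeneity."""
    rng = np.random.default_rng(seed)
    def nid(i, j): return i * n + j          # node index on the grid
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    # build horizontal and vertical links
    links = []
    for i in range(n):
        for j in range(n):
            if j + 1 < n: links.append((nid(i, j), nid(i, j + 1)))
            if i + 1 < n: links.append((nid(i, j), nid(i + 1, j)))
    K = rng.lognormal(0.0, sigma_lnK, len(links))
    # Kirchhoff/mass-conservation assembly: sum of link fluxes is zero at each node
    for (p, q), k in zip(links, K):
        A[p, p] += k; A[q, q] += k
        A[p, q] -= k; A[q, p] -= k
    # Dirichlet boundaries: head = 1 on the left column, 0 on the right column
    for i in range(n):
        for node, h in [(nid(i, 0), 1.0), (nid(i, n - 1), 0.0)]:
            A[node, :] = 0.0
            A[node, node] = 1.0
            b[node] = h
    return np.linalg.solve(A, b).reshape(n, n)
```

    The resulting head field honors the maximum principle (all values lie between the boundary heads), and increasing `sigma_lnK` concentrates flow into preferential paths, the mechanism invoked above to explain tailing.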

  8. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model, all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, the ranges of the parameter values are reduced (and can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry and exit to the long-pause state.
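
    The "two independent pause" kinetic scheme (Active ↔ ShortPause and Active ↔ LongPause, with no direct path between the pause states) can be sketched as a continuous-time Markov simulation. The rates below are illustrative, not the fitted values from the experiment:

```python
import random

def simulate_pauses(k_as=0.5, k_sa=2.0, k_al=0.1, k_la=0.2, t_end=1000.0, seed=1):
    """Gillespie-style simulation of Active (A) <-> ShortPause (S) and
    A <-> LongPause (L); returns total dwell time per state. Rates are
    illustrative placeholders (1/time units)."""
    random.seed(seed)
    state, t = "A", 0.0
    dwell = {"A": 0.0, "S": 0.0, "L": 0.0}
    while t < t_end:
        if state == "A":
            total = k_as + k_al
            tau = random.expovariate(total)               # exponential dwell in A
            nxt = "S" if random.random() < k_as / total else "L"
        elif state == "S":
            tau = random.expovariate(k_sa); nxt = "A"     # short pauses exit quickly
        else:
            tau = random.expovariate(k_la); nxt = "A"     # long pauses exit slowly
        dwell[state] += tau
        t += tau
        state = nxt
    return dwell
```

    With these placeholder rates, long pauses are rarer but much longer than short pauses, so they dominate the total paused time, the kind of signature used to discriminate between the candidate models.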

  9. Solar model uncertainties, MSW analysis, and future solar neutrino experiments

    NASA Astrophysics Data System (ADS)

    Hata, Naoya; Langacker, Paul

    1994-07-01

    Various theoretical uncertainties in the standard solar model and in the Mikheyev-Smirnov-Wolfenstein (MSW) analysis are discussed. It is shown that two methods give consistent estimates of the solar neutrino flux uncertainties: (a) a simple parametrization of the uncertainties using the core temperature and the nuclear production cross sections; (b) the Monte Carlo method of Bahcall and Ulrich. In the MSW analysis, we emphasize proper treatment of the correlations of theoretical uncertainties between flux components and between different detectors, the Earth effect, and multiple solutions in a combined χ2 procedure. In particular, the large-angle solution of the combined observation is allowed at 95% C.L. only when the theoretical uncertainties are included; if their correlations were ignored, the region would be overestimated. The MSW solutions for various standard and nonstandard solar models are also shown. The MSW predictions of the global solutions for the future solar neutrino experiments are given, emphasizing the measurement of the energy spectrum and the day-night effect in the Sudbury Neutrino Observatory and Super-Kamiokande to distinguish the two solutions.
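
    The effect of including correlated theoretical uncertainties in a combined χ2 can be sketched generically as χ2 = rᵀ C⁻¹ r with C the sum of experimental and theory covariances. The residuals, errors, and correlation coefficient below are placeholders, not solar-model values:

```python
import numpy as np

def combined_chi2(data, model, exp_err, theory_err, corr):
    """chi2 = r^T C^-1 r with C = C_exp + C_theory, where the theory errors
    carry a common correlation coefficient `corr` between components."""
    r = np.asarray(data, float) - np.asarray(model, float)
    C_exp = np.diag(np.asarray(exp_err, float) ** 2)
    s = np.asarray(theory_err, float)
    # fully correlated part (outer product) blended with an uncorrelated diagonal
    C_th = corr * np.outer(s, s) + (1.0 - corr) * np.diag(s ** 2)
    C = C_exp + C_th
    return float(r @ np.linalg.solve(C, r))
```

    For residuals of the same sign, treating positively correlated theory errors as uncorrelated inflates the χ2, which is one way ignoring correlations distorts the allowed regions.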

  10. Eulerian hydrocode modeling of a dynamic tensile extrusion experiment (u)

    SciTech Connect

    Burkett, Michael W; Clancy, Sean P

    2009-01-01

    Eulerian hydrocode simulations utilizing the Mechanical Threshold Stress flow stress model were performed to provide insight into a dynamic extrusion experiment. The dynamic extrusion response of copper (three different grain sizes) and tantalum spheres was simulated with MESA, an explicit, 2-D Eulerian continuum mechanics hydrocode, and compared with experimental data. The experimental data consisted of high-speed images of the extrusion process, recovered extruded samples, and post-test metallography. The hydrocode was developed to predict large-strain and high-strain-rate loading problems. Features of MESA include a high-order advection algorithm, a material interface tracking scheme, and van Leer monotonic advection limiting. The Mechanical Threshold Stress (MTS) model was utilized to evolve the flow stress as a function of strain, strain rate, and temperature for copper and tantalum. Plastic strains exceeding 300% were predicted in the extrusion of copper at 400 m/s, while plastic strains exceeding 800% were predicted for tantalum. Quantitative comparisons between the predicted and measured deformation topologies and extrusion rates were made. Additionally, predictions of the texture evolution (based upon the deformation rate history and the rigid body rotations experienced by the copper during the extrusion process) were compared with orientation imaging microscopy measurements. Finally, comparisons between the calculated and measured influence of the initial texture on the dynamic extrusion response of tantalum were performed.

  11. Experiments on an offshore platform model by FBG sensors

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Li, Hongnan; Ren, Liang; Sun, Li; Zhou, Jing

    2004-07-01

    Optical fiber sensors show superior potential for structural health monitoring of civil structures to ensure their structural integrity, durability and reliability. Apparent advantages of applying fiber optic sensors to a marine structure include their immunity to electromagnetic interference and electrical hazards when used near metallic elements over long distances. The strains and accelerations of a newly proposed model of a single-post jacket offshore platform were monitored by fiber Bragg grating (FBG) sensors. These FBG sensors were attached to the legs and the top of the platform model in parallel with electric strain gauges and traditional piezoelectric accelerometers, respectively. Experiments were conducted under a variety of loading conditions, including underwater base earthquake simulation dynamic tests and static loading tests. An underwater seismic shaking table was utilized to provide the appropriate excitations. The natural frequencies measured by the FBG accelerometer agree well with those measured by the piezoelectric accelerometers. The monitoring network shows the feasibility of applying different fiber optic sensors to long-distance structural health monitoring with frequency multiplexing technology. Finally, the remaining problems of packaging, the strain transfer ratio between the bare fiber and the host structure in which the fiber is embedded, and the installation and protection of fiber optic sensors are emphasized.
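
    Recovering strain from an FBG measurement uses the standard relation Δλ/λ = (1 - p_e)ε, neglecting temperature effects. A minimal helper, assuming a typical effective photo-elastic coefficient for silica fiber (p_e ≈ 0.22; the paper does not state the value used):

```python
def strain_from_fbg_shift(lam_bragg_nm, dlam_nm, p_e=0.22):
    """Convert a Bragg wavelength shift (nm) at center wavelength lam_bragg_nm
    to axial strain via dlam/lam = (1 - p_e) * strain; temperature neglected."""
    return dlam_nm / (lam_bragg_nm * (1.0 - p_e))
```

    For a 1550 nm grating, a 1.2 nm shift corresponds to roughly 1000 microstrain, which is the scale of readings such sensors report in static loading tests.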

  12. Modelling of the Remote Fusion Cutting Process Based on Experiments

    NASA Astrophysics Data System (ADS)

    Kristiansen, Morten; Villumsen, Sigurd; Olsen, Flemming O.

    Remote fusion cutting (RFC) is an interesting industrial process compared to traditional laser cutting, because traditional laser cutting limits travel speed and accessibility: the cutting head must be positioned just above the workpiece to provide a cutting gas pressure. In RFC this pressure is created by the vapor formed when the laser beam evaporates the cut material. The drawbacks of RFC compared to traditional laser cutting are a worse cut quality, a wide cut kerf, and a slower travel speed. The contribution of this paper is an experimental investigation that determined the process window for RFC in stainless steel with a single-mode fiber laser. The process variables travel speed, focus position, power, and sheet thickness were investigated. Based on the experimental results and process knowledge, the aim of this work was to determine and describe the most important driving mechanisms of the RFC process, in order to deepen the understanding of the process and to find the factors that can improve its performance as well as its limitations. The validation results show that the developed model of the RFC process gives a process window similar to the experimental results for the tested parameters and for variation of travel speed and focus position.

  13. The Resolution Dependence of Model Physics: Illustrations from Nonhydrostatic Model Experiments.

    NASA Astrophysics Data System (ADS)

    Jung, Joon-Hee; Arakawa, Akio

    2004-01-01

    The goal of this paper is to gain insight into the resolution dependence of model physics, the parameterization of moist convection in particular, which is required for accurately predicting large-scale features of the atmosphere. To achieve this goal, experiments using a two-dimensional nonhydrostatic model with different resolutions are conducted under various idealized tropical conditions. For control experiments (CONTROL), the model is run as a cloud-system-resolving model (CSRM). Next, a “large-scale dynamics model” (LSDM) is introduced as a diagnostic tool; it is a coarser-resolution version of the same model but with only partial or no physics. Then, the LSDM is applied to an ensemble of realizations selected from CONTROL, and a “required parameterized source” (RPS) is identified that makes the results of the LSDM consistent with CONTROL as far as the resolvable scales are concerned. The analysis of RPS diagnosed in this way confirms that RPS is highly resolution dependent in the range of typical resolutions of mesoscale models, even in ensemble/space averages, while the “real source” (RS) is not. The time interval at which model physics is implemented also matters for RPS. It is emphasized that model physics in future prediction models should automatically produce these resolution dependencies so that the need for retuning parameterizations as resolution changes can be minimized.

  14. Induced polarization of clay-sand mixtures. Experiments and modelling.

    NASA Astrophysics Data System (ADS)

    Okay, G.; Leroy, P.

    2012-04-01

    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-montmorillonite), in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100% by volume of the sand-clay mixture) and salinities (distilled water and 0.1 g/L, 1 g/L, and 10 g/L NaCl solutions). Induced polarization measurements were performed with a cylindrical four-electrode sample holder connected to a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the cation exchange capacity (CEC) of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the frequency dependence of the quadrature conductivity is stronger for sand-kaolinite mixtures than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity is fairly independent of the pore fluid salinity except at very low clay contents. The experimental data show good agreement with the values predicted by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model also includes the effect of the grain size distribution upon the complex conductivity spectra.

  15. CFD Simulation of the distribution of ClO2 in fresh produce to improve safety

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The shelf life of fresh-cut produce may be prolonged by the injection of bactericidal gases such as chlorine dioxide (ClO2). A comparative study has been conducted by modeling the injection of three different gases, CO2, ClO2, and N2, inside PET clamshell containers commonly used to package fresh produ...

  16. A New Model for Climate Science Research Experiences for Teachers

    NASA Astrophysics Data System (ADS)

    Hatheway, B.

    2012-12-01

    After two years of running a climate science teacher professional development program for secondary teachers, science educators from UCAR and UNC-Greeley have learned the benefits of providing teachers with ample time to interact with scientists, informal educators, and their teaching peers. Many programs that expose teachers to scientific research do a great job of energizing those teachers and getting them excited about how research is done. We decided to try a twist on this model - instead of matching teachers with scientists and having them do science in the lab, we introduced the teachers to scientists who agreed to share their data and answer questions as the teachers developed their own activities, curricula, and classroom materials related to the research. Prior to their summer experience, the teachers took three online courses on climate science, which increased their background knowledge and gave them an opportunity to ask higher-level questions of the scientists. By spending time with a cohort of practicing teachers, each individual had much-needed time to interact with their peers, share ideas, collaborate on curriculum, and learn from each other. And because the goal of the program was to create classroom modules that could be implemented in the coming school year, the teachers were able both to learn about climate science research by interacting with scientists and visiting many different labs, and then to create materials using data from the scientists. Without dedicated time for creating these classroom materials, it would have been up to the teachers to carve out time during the school year to find ways to apply what they learned in the research experience. We feel this approach worked better for the teachers, had a bigger impact on their students than we originally thought, and gave us a new approach to teacher professional development.

  17. MACH2 modeling of LANL plasma-flow-switch experiments

    SciTech Connect

    Wysocki, F.J.

    1994-12-31

    The plasma-flow opening switch (PFS) is being developed at the Los Alamos National Laboratory as part of the Athena Program. The present goal is to switch 10--20 MA of current into a cylindrical-foil implosion load in 300--400 ns. Primary drivers currently in use include the Pegasus-II capacitor bank, which delivers 8--10 MA to the PFS in 3--4 μs, and the Procyon explosively driven flux-compression generator, which delivers 15--18 MA in 2--3 μs. A series of experiments using Pegasus-II and Procyon have characterized the PFS performance for a variety of experimental conditions. Issues examined with Pegasus-II include switch mass (50 mg vs. 100 mg), switch fabrication (wire array vs. graded-thickness foil), current level (7 MA vs. 10 MA), presence or absence of a plasma trap, and static load vs. implosion load. Procyon has been used to characterize a PFS with a 1/r areal-mass-density profile (as opposed to the Pegasus-II 1/r² profile). The MACH2 two-dimensional magnetohydrodynamic code has been used to model these experiments, and simulation data have been compared to the experimental data. This includes direct comparison of data from an array of B-dot probes present on all tests (19--23 probes), direct comparison of x-ray yield and power for those tests with implosion loads, and qualitative comparison to framing and streak data. The agreement between simulation data and experimental data is reasonably good.

  18. Pork loin quality is not indicative of fresh belly or fresh and cured ham quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective was to characterize the relationship between fresh loin quality and fresh belly or fresh and cured ham quality. Pigs raised in 8 barns representing two seasons [cold (n = 4,290) and hot (n = 3,394)] and two production focuses [lean (n = 3,627) and quality (n = 4,057)] were used....

  19. DISPERSIBILITY OF CRUDE OIL IN FRESH WATER

    EPA Science Inventory

    The effects of surfactant composition on the ability of chemical dispersants to disperse crude oil in fresh water were investigated. The objective of this research was to determine whether effective fresh water dispersants can be designed in case this technology is ever consider...

  20. Experiments in Chemistry: A Model Science Software Tool.

    ERIC Educational Resources Information Center

    Malone, Diana; Tinker, Robert

    1984-01-01

    Describes "Experiments in Chemistry," in which experiments are performed using software and hardware interfaced to the Apple microcomputer's game paddle port. Experiments include temperature, pH electrode, and EMF investigations (cell potential determinations, oxidation-reduction titrations, and precipitation titrations). (JN)

  1. Postharvest treatments for the reduction of mancozeb in fresh apples.

    PubMed

    Hwang, E S; Cash, J N; Zabik, M J

    2001-06-01

    The objective of this study was to determine the effectiveness of chlorine, chlorine dioxide, ozone, and hydrogen peroxyacetic acid (HPA) treatments on the degradation of mancozeb and ethylenethiourea (ETU) in apples. This study was based on model experiments at neutral pH and temperature. Fresh apples were treated with two different levels of mancozeb (1 and 10 microg/mL). Several of the treatments were effective in reducing or removing mancozeb and ETU residues on spiked apples. Mancozeb residues decreased 56-99% with chlorine and 36-87% with chlorine dioxide treatments. ETU was completely degraded by 500 ppm of calcium hypochlorite and 10 ppm of chlorine dioxide at the 1 ppm spike level. However, at the 10 ppm spike level, the effectiveness of ETU degradation was lower than that observed at the 1 ppm level. Mancozeb residues decreased 56-97% with ozone treatment. At 1 and 3 ppm of ozone, no ETU residue was detected at the 1 ppm mancozeb spike level after both 3 and 30 min. HPA was also effective in degrading mancozeb residues, with 44-99% reduction depending on treatment time and HPA concentration. ETU was completely degraded by 500 ppm of HPA after 30 min of reaction time. These treatments indicated good potential for the removal of pesticide residues on fruit and in processed products.

  2. Recovering fresh water stored in saline limestone aquifers.

    USGS Publications Warehouse

    Merritt, M.L.

    1986-01-01

    Numerical modeling techniques are used to examine the hydrogeologic, design, and management factors governing the recovery efficiency of subsurface fresh-water storage. The modeling approach permitted many combinations of conditions to be studied. A sensitivity analysis was used that consisted of varying certain parameters while keeping constant as many other parameters or processes as possible. The results show that a loss of recovery efficiency resulted from: 1) processes causing mixing of injected fresh water with native saline water (hydrodynamic dispersion); 2) processes or conditions causing the irreversible displacement of the injected fresh water with respect to the well (buoyancy stratification and background hydraulic gradients); or 3) processes or procedures causing injection and withdrawal flow patterns to be dissimilar (dissimilar injection and withdrawal schedules in multiple-well systems). Other results indicated that recovery efficiency improved considerably during the first several successive cycles, provided that each recovery phase ended when the chloride concentration of withdrawn water exceeded established criteria for potability (usually 250 milligrams per liter). Other findings were that fresh water injected into highly permeable or highly saline aquifers would buoy rapidly, with a deleterious effect on recovery efficiency. -Author
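The potability criterion above translates directly into a recovery-efficiency calculation from a withdrawal breakthrough curve. A minimal sketch with made-up numbers (not the report's simulations):

```python
def recovery_efficiency(volumes, chloride_mg_per_l, injected_volume,
                        limit_mg_per_l=250.0):
    """Fraction of the injected fresh water recovered before the withdrawn
    water exceeds the potability criterion (default 250 mg/L chloride).
    `volumes` are cumulative withdrawn volumes matched index-by-index with
    observed chloride concentrations; all data here are hypothetical."""
    recovered = 0.0
    for v, c in zip(volumes, chloride_mg_per_l):
        if c > limit_mg_per_l:
            break
        recovered = v
    return recovered / injected_volume

# Hypothetical cycle: chloride stays potable for the first 600 units withdrawn.
eff = recovery_efficiency(
    volumes=[100, 300, 600, 800, 1000],
    chloride_mg_per_l=[20, 60, 240, 400, 900],
    injected_volume=1000,
)  # -> 0.6
```

Cycle-over-cycle improvement in the report corresponds to this fraction growing on successive injection-withdrawal cycles as the mixed zone is progressively flushed.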

  3. COUNTERCURRENT FLOW LIMITATION EXPERIMENTS AND MODELING FOR IMPROVED REACTOR SAFETY

    SciTech Connect

    Vierow, Karen

    2008-09-26

    This project is investigating countercurrent flow and “flooding” phenomena in light water reactor systems to improve the safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large-diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the “surge line” and other techniques which would be difficult to verify in an opaque test section. Based on experience from the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller-diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted, but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for the air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to the surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008.

  4. Hydrogeochemical tool to identify salinization or freshening of coastal aquifers determined from combined field work, experiments, and modeling.

    PubMed

    Russak, Amos; Sivan, Orit

    2010-06-01

    This study proposes a hydrogeochemical tool to distinguish between salinization and freshening events of a coastal aquifer and quantifies their effect on groundwater characteristics. This is based on the chemical composition of the fresh-saline water interface (FSI) determined from combined field work, column experiments with the same sediments, and modeling. The experimental results were modeled using the PHREEQC code and were compared to field data from the coastal aquifer of Israel. The decrease in the isotopic composition of the dissolved inorganic carbon (δ13CDIC) of the saline water indicates that, during seawater intrusion and coastal salinization, oxidation of organic carbon occurs. However, the main process operating during salinization or freshening events in coastal aquifers is cation exchange. The relative changes in Ca2+, Sr2+, and K+ concentrations during salinization and freshening events are used as a reliable tool for characterizing the status of a coastal aquifer. The field data suggest that coastal aquifers may switch from freshening to salinization on a seasonal time scale.

  5. Modeling orbital gamma-ray spectroscopy experiments at carbonaceous asteroids

    NASA Astrophysics Data System (ADS)

    Lim, Lucy F.; Starr, Richard D.; Evans, Larry G.; Parsons, Ann M.; Zolensky, Michael E.; Boynton, William V.

    2017-01-01

    To evaluate the feasibility of measuring differences in bulk composition among carbonaceous meteorite parent bodies from an asteroid or comet orbiter, we present the results of a performance simulation of an orbital gamma-ray spectroscopy (GRS) experiment in a Dawn-like orbit around spherical model asteroids with a range of carbonaceous compositions. The orbital altitude was held equal to the asteroid radius for 4.5 months. Both the asteroid gamma-ray spectrum and the spacecraft background flux were calculated using the MCNPX Monte-Carlo code. GRS is sensitive to depths below the optical surface (to ≈20-50 cm depth depending on material density). This technique can therefore measure underlying compositions beneath a sulfur-depleted (e.g., Nittler et al.) or desiccated surface layer. We find that 3σ uncertainties of under 1 wt% are achievable for H, C, O, Si, S, Fe, and Cl for five carbonaceous meteorite compositions using the heritage Mars Odyssey GRS design in a spacecraft-deck-mounted configuration at the Odyssey end-of-mission energy resolution, FWHM = 5.7 keV at 1332 keV. The calculated compositional uncertainties are smaller than the compositional differences between carbonaceous chondrite subclasses.

  6. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, feedback on users' opinions of the resulting synthetic speech quality is essential. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied to evaluate different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that this GMM-based statistical evaluation can be combined with, or can replace, classical listening tests. The results obtained in this way were in good correlation with the results of the conventional listening test, confirming the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.

  7. Modeling of the 2007 JET ^13C migration experiments

    NASA Astrophysics Data System (ADS)

    Strachan, J. D.; Likonen, J.; Hakola, A.; Coad, J. P.; Widdowson, A.; Koivuranta, S.; Hole, D. E.; Rubel, M.

    2010-11-01

    On the last run day of the 2007 JET experimental campaign, ^13CH4 was introduced repeatedly from the vessel top into a single plasma type (H-mode, Ip = 1.6 MA, Bt = 1.6 T). Similar experiments were performed in 2001 (vessel top into L-mode) and 2004 (outer divertor into H-mode). Divertor and wall tiles were removed and analysed using secondary ion mass spectrometry (SIMS) and Rutherford backscattering (RBS) to determine the ^13C migration. ^13C was observed to migrate to the inner divertor (largest deposit), the outer divertor (less), and the floor tiles (least). This paper reports the EDGE2D/NIMBUS-based modelling of the carbon migration. The emphasis is on the comparison of the 2007 results with the 2001 results, where both injections were from the machine top but ELMs were present only in 2007. The ELMs seemed to cause more ^13C re-erosion near the inner strike point. Also of interest are the private flux region (PFR) deposits, where changes in divertor geometry between 2004 and 2007 caused differences in the deposits. In 2007, the tilting of the load-bearing tile caused regions of the PFR to be shadowed from the inner strike point which were not shadowed in 2004, indicating that the ^13C neutrals originated from the outer strike point (OSP).

  8. Computer models for designing hypertension experiments and studying concepts.

    PubMed

    Guyton, A C; Montani, J P; Hall, J E; Manning, R D

    1988-04-01

    This paper demonstrates how computer models along with animal experiments have been used to work out the conceptual bases of hypertensive mechanisms, especially the following: (1) The renal-fluid volume pressure control mechanism has a feed-back gain for pressure control of infinity. Therefore, the chronic level to which the arterial pressure is controlled can be changed only by altering this pressure control mechanism. (2) An increase in total peripheral resistance is not sufficient by itself to cause hypertension. The only resistances in the circulatory system that, when increased, will cause hypertension are those along a restricted axis from the root of the aorta to Bowman's capsule in the kidneys. (3) Autoregulation in the peripheral vascular beds does not increase the arterial pressure in hypertension. However, autoregulation can convert high cardiac output hypertension into high peripheral resistance hypertension. (4) In a computer simulation that cannot yet be performed in animals, a simulated hypertension caused by a combination of increased renal afferent and efferent arteriolar resistances has characteristics that match almost exactly those of essential hypertension.
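Point (1) above implies that chronic pressure sits where renal fluid output equals intake, regardless of non-renal resistances. A toy sketch assuming a linear renal function curve (hypothetical numbers, not Guyton's model code):

```python
def chronic_arterial_pressure(intake, renal_curve):
    """Toy illustration of the infinite-gain argument: at steady state,
    renal fluid output must equal intake, so chronic pressure is the root
    of renal_curve(P) == intake, independent of total peripheral
    resistance. Solved by bisection; renal_curve is assumed to be
    monotonically increasing on [0, 300] mmHg."""
    lo, hi = 0.0, 300.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if renal_curve(mid) < intake:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical linear renal function curves (output in L/day vs. P in mmHg).
normal = lambda p: 0.05 * (p - 60.0)    # baseline curve
shifted = lambda p: 0.05 * (p - 90.0)   # rightward-shifted (hypertensive) curve

p1 = chronic_arterial_pressure(intake=2.0, renal_curve=normal)   # ~100 mmHg
p2 = chronic_arterial_pressure(intake=2.0, renal_curve=shifted)  # ~130 mmHg
```

Only shifting the renal curve (or changing intake) moves the equilibrium pressure, which is the computational restatement of points (1) and (2) in the abstract.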

  9. Model experiments on platelet adhesion in stagnation point flow.

    PubMed

    Wurzinger, L J; Blasberg, P; van de Loecht, M; Suwelack, W; Schmid-Schönbein, H

    1984-01-01

    Experiments with glass models of arterial branchings and bends, perfused with bovine platelet-rich plasma (PRP), revealed that platelet deposition is strongly dependent on fluid dynamic factors. Predilection sites of platelet deposits are characterized by flow vectors directed against the wall, so-called stagnation point flow. Thus collision of suspended particles with the wall, an absolute prerequisite for adhesion of platelets to surfaces even as thrombogenic as glass, appears to be mediated by convective forces. The extent of platelet deposition is correlated to the magnitude of flow components normal to the surface as well as to the state of biological activation of the platelets. The latter may act through an increase in hydrodynamically effective volume, invariably associated with the platelet shape-change reaction to biochemical stimulants such as ADP. The effects of altered rheological properties of platelets upon their deposition, and of the mechanical properties of surfaces, were examined in a stagnation point flow chamber. Roughnesses on the order of 5 microns, probably by creating local flow disturbances, significantly enhance platelet adhesion compared to a smooth surface of identical chemical composition.

  10. Inactivation of Escherichia coli in fresh water with advanced oxidation processes based on the combination of O3, H2O2, and TiO2. Kinetic modeling.

    PubMed

    Rodríguez-Chueca, Jorge; Ormad Melero, M Peña; Mosteo Abad, Rosa; Esteban Finol, Javier; Ovelleiro Narvión, José Luis

    2015-07-01

    The purpose of this work was to study the efficiency of different treatments, based on combinations of O3, H2O2, and TiO2, on fresh surface water samples fortified with wild strains of Escherichia coli. Moreover, an exhaustive assessment of the influence of the different agents involved in the treatment has been carried out by kinetic modeling of the E. coli inactivation results. The treatments studied were (i) ozonation (O3), (ii) the peroxone system (O3/0.04 mM H2O2), (iii) catalytic ozonation (O3/1 g/L TiO2), and (iv) a combined treatment of O3/1 g/L TiO2/0.04 mM H2O2. The peroxone system achieved the highest level of inactivation of E. coli, around 6.80 log after 10 min of contact time. Catalytic ozonation also achieved high levels of inactivation in a short period of time, reaching 6.22 log in 10 min. Both the peroxone system (O3/H2O2) and catalytic ozonation (O3/TiO2) produced a higher inactivation rate of E. coli than ozonation alone (4.97 log after 10 min). While the combination of ozone with hydrogen peroxide or titanium dioxide thus increases the inactivation yield relative to ozonation, the O3/TiO2/H2O2 combination did not enhance the inactivation results. The fitting of experimental values to the corresponding equations through non-linear regression techniques was carried out with the Microsoft Excel GInaFiT add-in. The E. coli inactivation results did not follow log-linear kinetics, and it was necessary to use mathematical models able to describe deviations in bacterial inactivation processes. In this case, the inactivation results fit mathematical models based on the hypothesis that the bacterial population is divided into two subgroups with different degrees of resistance to treatment, namely the biphasic and biphasic-with-shoulder models.
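The biphasic model mentioned above treats the population as two subpopulations inactivated at different rates. A minimal sketch of its log-survival form (hypothetical parameters, not the paper's fitted values):

```python
import numpy as np

def biphasic_log10_reduction(t, f, kmax1, kmax2):
    """Biphasic inactivation: a sensitive fraction f of the population dies
    at rate kmax1 while the resistant fraction (1 - f) dies at the slower
    rate kmax2. Returns log10(N(t)/N0), i.e., 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    return np.log10(f * np.exp(-kmax1 * t) + (1.0 - f) * np.exp(-kmax2 * t))

# Hypothetical parameters: 99.99% sensitive cells; rates in 1/min.
t = np.linspace(0.0, 10.0, 11)
red = biphasic_log10_reduction(t, f=0.9999, kmax1=3.0, kmax2=0.2)
```

The curve drops steeply while the sensitive subpopulation dominates and then flattens to the slope of the resistant tail, which is the characteristic two-slope shape these fits capture.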

  11. Molecular modeling of layered double hydroxide intercalated with benzoate, modeling and experiment.

    PubMed

    Kovár, Petr; Pospísil, M; Nocchetti, M; Capková, P; Melánová, Klára

    2007-08-01

    The structure of Zn4Al2 layered double hydroxide (LDH) intercalated with benzenecarboxylate (C6H5COO-) was solved using molecular modeling combined with experiment (X-ray powder diffraction, IR spectroscopy, TG measurements). Molecular modeling revealed the arrangement of guest molecules, layer stacking, water content, and water location in the interlayer space of the host structure. Molecular modeling using an empirical force field was carried out in the Cerius2 modeling environment. Modeling results were confronted with experiment by comparing the calculated and measured diffraction patterns and the calculated water content with the thermogravimetric value. Good agreement has been achieved between the calculated and measured basal spacing: d(calc) = 15.3 Å and d(exp) = 15.5 Å. The number of water molecules per formula unit (6H2O per Zn4Al2(OH)12) obtained by modeling (i.e., corresponding to the energy minimum) agrees with the water content estimated by thermogravimetry. The long axes of the guest molecules are almost perpendicular to the LDH layers, anchored to the host layers via COO- groups. The benzoate ring planes keep a parquet-like mutual arrangement in the interlayer space. Water molecules are roughly arranged in planes adjacent to the host layers together with the COO- groups.

  12. The mirror neuron system: a fresh view.

    PubMed

    Casile, Antonino; Caggiano, Vittorio; Ferrari, Pier Francesco

    2011-10-01

    Mirror neurons are a class of visuomotor neurons in the monkey premotor and parietal cortices that discharge during the execution and observation of goal-directed motor acts. They are deemed to be at the basis of primates' social abilities. In this review, the authors provide a fresh view of two still-open questions about mirror neurons. The first question is their possible functional role. By reviewing recent neurophysiological data, the authors suggest that mirror neurons might represent a flexible system that encodes observed actions in terms of several behaviorally relevant features. The second question concerns the possible developmental mechanisms responsible for their initial emergence. To provide a possible answer to this question, the authors review two different aspects of sensorimotor development: facial movements and hand movements. The authors suggest that two different "mirror" systems might underlie the development of action understanding and imitative abilities in the two cases. More specifically, a possibly prewired system already present at birth but shaped by the social environment might underlie the early development of facial imitative abilities. By contrast, an experience-dependent system might subserve perception-action couplings in the case of hand movements. The development of this latter system might be critically dependent on the observation of one's own movements.

  13. Model slope infiltration experiments for shallow landslides early warning

    NASA Astrophysics Data System (ADS)

    Damiano, E.; Greco, R.; Guida, A.; Olivares, L.; Picarelli, L.

    2009-04-01

    simple empirical models [Versace et al., 2003] based on correlations between some features of rainfall records (cumulative height, duration, season, etc.) and the corresponding observed landslides. Laboratory experiments on instrumented small-scale slope models represent an effective way to provide data sets [Eckersley, 1990; Wang and Sassa, 2001] useful for building more complex models of landslide-triggering prediction. At the Geotechnical Laboratory of C.I.R.I.AM. an instrumented flume is available to investigate the mechanics of landslides in unsaturated deposits of granular soils [Olivares et al. 2003; Damiano, 2004; Olivares et al., 2007]. In the flume a model slope is reconstituted by a moist-tamping technique and subjected to an artificial uniform rainfall until failure occurs. The state of stress and strain of the slope is monitored during the entire test, from the infiltration process through the early post-failure stage: the monitoring system consists of several mini-tensiometers placed at different locations and depths to measure suction, mini-transducers to measure positive pore pressures, laser sensors to measure settlements of the ground surface, and high-definition video cameras that, through dedicated particle image velocimetry (PIV) software, provide the overall horizontal displacement field. In addition, TDR sensors, used with an innovative technique [Greco, 2006], allow the water content profile of the soil to be reconstructed along the entire thickness of the investigated deposit and its continuous changes during infiltration to be monitored. In this paper a series of laboratory tests carried out on model slopes in granular pyroclastic soils, taken from the mountainous area north-east of Napoli, is presented. The experimental results demonstrate the completeness of the information provided by the various sensors installed. In particular, very useful information is given by the coupled measurements of soil water content by TDR and suction by tensiometers. 
Knowledge of

  14. Physiology of fresh-cut fruits and vegetables

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The idea to pre-process fruits and vegetables in the fresh state started with fresh-cut salads and now has expanded to fresh-cut fruits and other vegetables. The fresh-cut portion of the fresh produce industry includes fruits, vegetables, sprouts, mushrooms and even herbs that are cut, cored, sliced...

  15. Modeling of coherent ultrafast magneto-optical experiments: Light-induced molecular mean-field model

    SciTech Connect

    Hinschberger, Y.; Hervieux, P.-A.

    2015-12-28

    We present calculations which aim to describe coherent ultrafast magneto-optical effects observed in time-resolved pump-probe experiments. Our approach is based on a nonlinear semi-classical Drude-Voigt model and is used to interpret experiments performed on a nickel ferromagnetic thin film. Within this framework, a phenomenological light-induced coherent molecular mean-field depending on the polarizations of the pump and probe pulses is proposed, whose microscopic origin is related to a spin-orbit coupling involving the electron spins of the material sample and the electric field of the laser pulses. Theoretical predictions are compared to available experimental data. The model successfully reproduces the observed experimental trends and gives meaningful insight into magneto-optical rotation behavior in the ultrafast regime. Theoretical predictions for further experimental studies are also proposed.

  16. Mesoscale Modeling During Mixed-Phase Arctic Cloud Experiment

    SciTech Connect

    Avramov, A.; Harrington, J.Y.; Verlinde, J.

    2005-03-18

    Mixed-phase arctic stratus clouds are the predominant cloud type in the Arctic (Curry et al. 2000) and, through various feedback mechanisms, exert a strong influence on the Arctic climate. Perhaps one of their most intriguing features is that they tend to have liquid tops that precipitate ice. Despite the fact that this situation is colloidally unstable, these cloud systems are quite long lived - from a few days to over a couple of weeks. It has been hypothesized that mixed-phase clouds are maintained through a balance between liquid water condensation resulting from cloud-top radiative cooling and ice removal by precipitation (Pinto 1998; Harrington et al. 1999). In their modeling study, Harrington et al. (1999) found that the maintenance of this balance depends strongly on the ambient concentration of ice-forming nuclei (IFN). In a follow-up study, Jiang et al. (2002), using only 30% of the IFN concentration predicted by the Meyers et al. (1992) parameterization, were able to obtain results similar to the observations reported by Pinto (1998). The IFN concentration measurements collected during the Mixed-Phase Arctic Cloud Experiment (M-PACE), conducted in October 2004 over the North Slope of Alaska and the Beaufort Sea (Verlinde et al. 2005), also showed much lower values than those predicted (Prenne, pers. comm.) by currently accepted ice nucleation parameterizations (e.g. Meyers et al. 1992). The goal of this study is to use the extensive IFN data taken during M-PACE to examine what effects low IFN concentrations have on mesoscale cloud structure and coastal dynamics.

  17. Cohesive behavior of soft biological adhesives: experiments and modeling.

    PubMed

    Dastjerdi, A Khayer; Pagano, M; Kaartinen, M T; McKee, M D; Barthelat, F

    2012-09-01

    Extracellular proteins play a key role in generating and maintaining cohesion and adhesion in biological tissues. These "natural glues" are involved in vital biological processes such as blood clotting, wound healing and maintaining the structural integrity of tissues. Macromolecular assemblies of proteins can be functionally stabilized in a variety of ways in situ that include ionic interactions as well as covalent crosslinking to form protein networks that can extend both within and between tissues. Within tissues, myriad cohesive forces are required to preserve tissue integrity and function, as are additional appropriate adhesive forces at interfaces both within and between tissues of differing composition. While the mechanics of some key structural adhesive proteins have been characterized in tensile experiments at both the macroscopic and single-protein levels, the fracture toughness of thin proteinaceous interfaces has never been directly measured. Here, we describe a novel and simple approach to measure the cohesive behavior and toughness of thin layers of proteinaceous adhesives. The test is based on the standard double-cantilever beam test used for engineering adhesives, adapted to take into account the high compliance of the interface compared with the beams. This new "rigid double-cantilever beam" method enables stable crack propagation through an interfacial protein layer, and provides a direct way to measure its full traction-separation curve. The method does not require any assumption about the shape of the cohesive law, and the results provide abundant information contributing to understanding the structural, chemical and molecular mechanisms acting in biological adhesion. As an example, results are presented using this method for thin films of fibrin, a protein involved in blood clotting and used clinically as a tissue bio-adhesive after surgery, with the effects of calcium and crosslinking by Factor XIII being examined. 
Finally, a simple model
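The rigid double-cantilever beam method described above yields a full traction-separation curve, whose area is the interface toughness (work of separation). A minimal numerical sketch with a hypothetical triangular cohesive law (not the paper's fibrin data):

```python
import numpy as np

def interface_toughness(separation_m, traction_pa):
    """Work of separation (J/m^2) as the area under a measured
    traction-separation curve, by trapezoidal integration. The data arrays
    here are hypothetical, not the paper's fibrin measurements."""
    d = np.diff(np.asarray(separation_m, dtype=float))
    avg = 0.5 * (np.asarray(traction_pa, dtype=float)[1:]
                 + np.asarray(traction_pa, dtype=float)[:-1])
    return float(np.sum(avg * d))

# Toy triangular cohesive law: peak 1 kPa at 10 um, fully separated at 50 um.
delta = np.array([0.0, 10e-6, 50e-6])   # separation (m)
sigma = np.array([0.0, 1.0e3, 0.0])     # traction (Pa)
G_c = interface_toughness(delta, sigma)  # 0.5 * 1e3 * 50e-6 = 0.025 J/m^2
```

Because the method measures the traction-separation curve directly, no cohesive-law shape needs to be assumed before integrating.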

  18. Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations

    NASA Astrophysics Data System (ADS)

    Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.

    2010-12-01

    The project has two main goals: (a) improve the understanding of how earthquakes are nucleated, with specific focus on seismic response to periodic stresses (such as tidal or seasonal variations); (b) use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity and the period and amplitude of cyclic loadings, and whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones, and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials, are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). 
A unifying phase diagram for the dynamics of sheared solids

  19. A Model for Designing Adaptive Laboratory Evolution Experiments.

    PubMed

    LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M

    2017-04-15

    The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim, to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^-6.9 to 10^-8.4 mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required and will lower the barriers to entry for this experimental technique. IMPORTANCE: ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete. 
With the availability of automation and computer simulations, we can now perform these experiments in an optimized
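The passage-size effect described above can be illustrated with a deterministic back-of-the-envelope sketch (this is not ALEsim; the dilution factor and mutation rate are illustrative assumptions):

```python
def surviving_mutations_per_cycle(passage_size, dilution=100, mu=1e-7):
    """Expected number of new beneficial mutations per growth/passage cycle
    that survive the bottleneck, in a simple serial-passage picture:
    the culture grows from passage_size to passage_size * dilution cells,
    each new cell carries a beneficial mutation with probability mu, and a
    single-cell mutant lineage survives the bottleneck with probability
    ~1/dilution.
    """
    new_cells = passage_size * (dilution - 1)
    expected_new_mutants = mu * new_cells
    return expected_new_mutants / dilution

# Larger passage sizes supply (and retain) proportionally more beneficial
# mutations per cycle, which is why very small passages fix mutations
# inefficiently.
```

Under these assumptions, a 10^6-cell passage retains 100 times as many surviving beneficial mutations per cycle as a 10^4-cell passage.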

  20. Classification of organic beef freshness using VNIR hyperspectral imaging.

    PubMed

    Crichton, Stuart O J; Kirchner, Sascha M; Porley, Victoria; Retz, Stefanie; von Gersdorff, Gardis; Hensel, Oliver; Weygandt, Martin; Sturm, Barbara

    2017-02-08

    Consumer trust in the food industry relies heavily upon accurate labelling of meat products. Methods that can verify whether meat is correctly labelled are therefore of great value to producers, retailers, and consumers. This paper illustrates two approaches to classifying beef as fresh versus frozen-thawed and, in a novel manner, matured versus matured frozen-thawed, as well as fresh versus matured, using the 500-1010 nm waveband captured by hyperspectral imaging, together with CIELAB measurements. The results show successful classification based upon CIELAB between (1) fresh and frozen-thawed (CCR = 0.93) and (2) fresh and matured (CCR = 0.92), with successful classification between matured and matured frozen-thawed beef using the entire spectral range (CCR = 1.00). The performance of reduced spectral models is also investigated. Overall, it was found that CIELAB co-ordinates can be used for successful classification in all comparisons except between matured and matured frozen-thawed. Biochemical and physical changes of the meat are thoroughly discussed for each condition.
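A minimal sketch of the CIELAB-based classification idea, using synthetic colour measurements and a nearest-centroid rule (the class means, spreads, and classifier are illustrative assumptions, not the paper's data or models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CIELAB (L*, a*, b*) measurements for two hypothetical classes;
# the means and spread below are made up for illustration.
fresh = rng.normal([40.0, 18.0, 9.0], 2.0, size=(50, 3))
thawed = rng.normal([35.0, 14.0, 7.0], 2.0, size=(50, 3))

X = np.vstack([fresh, thawed])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid rule: assign each sample to the closest class mean.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)

ccr = float((pred == y).mean())  # correct classification rate
```

With well-separated colour distributions, even this simple rule reaches a high CCR; the paper's spectral models refine the same idea with far richer features.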

  1. Response Surface Model Building Using Orthogonal Arrays for Computer Experiments

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Braun, Robert D.; Moore, Arlene A.; Lepsch, Roger A.

    1997-01-01

    This study investigates response surface methods for computer experiments and discusses some of the approaches available. Orthogonal arrays constructed for computer experiments are studied and an example application to a technology selection and optimization study for a reusable launch vehicle is presented.
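The basic mechanics can be sketched as follows: evaluate a (here made-up) computer experiment at a small 3-level factorial design, the two-factor projection of an L9-style orthogonal array, and fit a quadratic response surface by least squares. All function and variable names are illustrative:

```python
import numpy as np

def simulator(x1, x2):
    """Stand-in for an expensive computer experiment (noise-free)."""
    return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x1 * x2 + x1 ** 2

# 3-level full factorial in two coded factors (-1, 0, +1).
levels = (-1.0, 0.0, 1.0)
X = np.array([(a, b) for a in levels for b in levels])
y = np.array([simulator(a, b) for a, b in X])

# Design matrix for a full quadratic response surface model.
A = np.column_stack([
    np.ones(len(X)),             # intercept
    X[:, 0], X[:, 1],            # linear terms
    X[:, 0] * X[:, 1],           # interaction
    X[:, 0] ** 2, X[:, 1] ** 2,  # pure quadratic terms
])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# For this noise-free simulator, coef recovers [3.0, 2.0, -1.5, 0.5, 1.0, 0.0].
```

The fitted surface then stands in for the expensive simulator during optimization and technology-selection trade studies.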

  2. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kruanz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Radiative shocks experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheet, and interacting jets experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52- 08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  3. Model-experiment interaction to improve representation of phosphorus limitation in land models

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Yang, X.; Cabugao, K. G. M.; Childs, J.; Gu, L.; Haworth, I.; Mayes, M. A.; Porter, W. S.; Walker, A. P.; Weston, D. J.; Wright, S. J.

    2015-12-01

    Carbon-nutrient interactions play important roles in regulating terrestrial carbon cycle responses to atmospheric and climatic change. None of the CMIP5 models has included routines to represent the phosphorus (P) cycle, although P is commonly considered to be the most limiting nutrient in highly productive, lowland tropical forests. Model simulations with the Community Land Model (CLM-CNP) show that inclusion of P coupling leads to a smaller CO2 fertilization effect and warming-induced CO2 release from tropical ecosystems, but there are important uncertainties in the P model, and improvements are limited by a dearth of data. Sensitivity analysis identifies the relative importance of P cycle parameters in determining P availability and P limitation, and thereby helps to define the critical measurements to make in field campaigns and manipulative experiments. To improve estimates of P supply, parameters that describe maximum amount of labile P in soil and sorption-desorption processes are necessary for modeling the amount of P available for plant uptake. Biochemical mineralization is poorly constrained in the model and will be improved through field observations that link root traits to mycorrhizal activity, phosphatase activity, and root depth distribution. Model representation of P demand by vegetation, which currently is set by fixed stoichiometry and allometric constants, requires a different set of data. Accurate carbon cycle modeling requires accurate parameterization of the photosynthetic machinery: Vc,max and Jmax. Relationships between the photosynthesis parameters and foliar nutrient (N and P) content are being developed, and by including analysis of covariation with other plant traits (e.g., specific leaf area, wood density), we can provide a basis for more dynamic, trait-enabled modeling. With this strong guidance from model sensitivity and uncertainty analysis, field studies are underway in Puerto Rico and Panama to collect model-relevant data on P
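As one concrete example of the sorption-desorption parameters discussed above, labile P partitioning is often described with a Langmuir-type isotherm; a minimal sketch (the parameter values are illustrative, not CLM-CNP values):

```python
def langmuir_sorbed(solution_p, smax=200.0, ks=50.0):
    """Sorbed P (e.g., mg P per kg soil) in equilibrium with solution P,
    for maximum sorption capacity smax and half-saturation constant ks."""
    return smax * solution_p / (ks + solution_p)

def solution_fraction(solution_p, smax=200.0, ks=50.0):
    """Fraction of total (solution + sorbed) P that is in solution and
    hence directly available for plant uptake in this simple picture."""
    sorbed = langmuir_sorbed(solution_p, smax, ks)
    return solution_p / (solution_p + sorbed)
```

Stronger sorption (larger smax, smaller ks) lowers the available fraction, which is exactly the lever the model sensitivity analysis identifies as critical to measure.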

  4. Dispersibility of crude oil in fresh water.

    PubMed

    Wrenn, B A; Virkus, A; Mukherjee, B; Venosa, A D

    2009-06-01

    The effects of surfactant composition on the ability of chemical dispersants to disperse crude oil in fresh water were investigated. The objective of this research was to determine whether effective fresh water dispersants can be designed in case this technology is ever considered for use in fresh water environments. Previous studies on the chemical dispersion of crude oil in fresh water neither identified the dispersants that were investigated nor described the chemistry of the surfactants used. This information is necessary for developing a more fundamental understanding of chemical dispersion of crude oil at low salinity. Therefore, we evaluated the relationship between surfactant chemistry and dispersion effectiveness. We found that dispersants can be designed to drive an oil slick into the freshwater column with the same efficiency as in salt water as long as the hydrophilic-lipophilic balance is optimum.

  5. Using the Bifocal Modeling Framework to Resolve "Discrepant Events" Between Physical Experiments and Virtual Models in Biology

    NASA Astrophysics Data System (ADS)

    Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima

    2016-08-01

    In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this study, a group of high school students designed computer models of bacterial growth with reference to a simultaneous physical experiment they were conducting, and were able to validate the correctness of their model against the results of their experiment. Our findings suggest that as the students compared their virtual models with physical experiments, they encountered "discrepant events" that contradicted their existing conceptions and elicited a state of cognitive disequilibrium. This experience of conflict encouraged students to further examine their ideas and to seek more accurate explanations of the observed natural phenomena, improving the design of their computer models.

  6. Trading Freshness for Performance in Distributed Systems

    DTIC Science & Technology

    2014-12-01

    Trading Freshness for Performance in Distributed Systems. James Cipar. CMU-CS-14-144, December 2014. School of Computer Science, Carnegie Mellon University.

  7. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... expressly or implicitly refers to the food on labels or labeling, including use in a brand name and use as a..., Drug, and Cosmetic Act (the act). (a) The term “fresh,” when used on the label or in labeling of a food... fresh,” when used on the label or in labeling of a food, mean that the food was quickly frozen...

  8. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... expressly or implicitly refers to the food on labels or labeling, including use in a brand name and use as a..., Drug, and Cosmetic Act (the act). (a) The term “fresh,” when used on the label or in labeling of a food... fresh,” when used on the label or in labeling of a food, mean that the food was quickly frozen...

  9. 21 CFR 101.95 - “Fresh,” “freshly frozen,” “fresh frozen,” “frozen fresh.”

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... expressly or implicitly refers to the food on labels or labeling, including use in a brand name and use as a..., Drug, and Cosmetic Act (the act). (a) The term “fresh,” when used on the label or in labeling of a food... fresh,” when used on the label or in labeling of a food, mean that the food was quickly frozen...

  10. HCCI experiments with toluene reference fuels modeled by a semidetailed chemical kinetic model

    SciTech Connect

    Andrae, J.C.G.; Brinck, T.; Kalghatgi, G.T.

    2008-12-15

    A semidetailed mechanism (137 species and 633 reactions) and new experiments in a homogeneous charge compression ignition (HCCI) engine on the autoignition of toluene reference fuels are presented. Skeletal mechanisms for isooctane and n-heptane were added to a detailed toluene submechanism. The model shows generally good agreement with ignition delay times measured in a shock tube and a rapid compression machine and is sensitive to changes in temperature, pressure, and mixture strength. The addition of reactions involving the formation and destruction of benzylperoxide radical was crucial to modeling toluene shock tube data. Laminar burning velocities for benzene and toluene were well predicted by the model after some revision of the high-temperature chemistry. Moreover, laminar burning velocities of a real gasoline at 353 and 500 K could be predicted by the model using a toluene reference fuel as a surrogate. The model also captures the experimentally observed differences in combustion phasing of toluene/n-heptane mixtures, compared to a primary reference fuel of the same research octane number, in HCCI engines as the intake pressure and temperature are changed. For high intake pressures and low intake temperatures, a sensitivity analysis at the moment of maximum heat release rate shows that the consumption of phenoxy radicals is rate-limiting when a toluene/n-heptane fuel is used, which makes this fuel more resistant to autoignition than the primary reference fuel. Typical CPU times were on the order of seconds for zero-dimensional calculations and minutes for laminar flame speed calculations. Cross reactions between benzylperoxy radicals and n-heptane improved the model predictions of shock tube experiments for φ = 1.0 and temperatures lower than 800 K for an n-heptane/toluene fuel mixture, but cross reactions had no influence on HCCI simulations. (author)
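Shock-tube ignition-delay behaviour of the kind the mechanism is validated against is often summarized with an Arrhenius-type correlation; a hedged sketch with placeholder constants (not fitted to the toluene-reference-fuel data above):

```python
import math

def ignition_delay_us(T_kelvin, p_bar, A=0.1, n=1.0, Ta=15000.0):
    """Ignition delay (microseconds) from a generic correlation
    tau = A * p^-n * exp(Ta / T); A, n, and Ta are illustrative constants,
    not values derived from the kinetic model."""
    return A * p_bar ** (-n) * math.exp(Ta / T_kelvin)

# Delay shortens with increasing temperature and pressure, the qualitative
# trend a kinetic model must reproduce across shock-tube conditions.
```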

  11. Pathogen Reduction of Fresh Whole Blood for Military and Civilian Use

    DTIC Science & Technology

    2010-04-01

    Pathogen Reduction of Fresh Whole Blood for Military and Civilian Use. Raymond P. Goodrich, Ph.D., Heather L. Reddy. RTO-MP-HFM-182, pp. 24-1 to 24-4. Recoverable fragments: Figure 5 caption (growth rate of B. cereus in treated and … samples); bacterial reduction counts (Bacillus cereus 2/8, Streptococcus pyogenes 4/10, Staphylococcus epidermidis 15/22); Section 3.3, Parasite Reduction ("Several experiments to test the…").

  12. Uncertainty Analysis with Site Specific Groundwater Models: Experiences and Observations

    SciTech Connect

    Brewer, K.

    2003-07-15

    Groundwater flow and transport predictions are a major component of remedial action evaluations for contaminated groundwater at the Savannah River Site. Because all groundwater modeling results are subject to uncertainty from various causes, quantification of the level of uncertainty in the modeling predictions is beneficial to project decision makers. Complex site-specific models present formidable challenges for implementing an uncertainty analysis.
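One common way to implement such an analysis is Monte Carlo propagation of input uncertainty; a minimal sketch with a placeholder Darcy-style travel-time model (not the site model, and all parameter values are illustrative):

```python
import random

random.seed(42)

def travel_time_years(K, gradient=0.001, porosity=0.25, distance_m=500.0):
    """Advective travel time over distance_m for hydraulic conductivity K (m/d)."""
    velocity = K * gradient / porosity  # seepage velocity, m/d
    return distance_m / velocity / 365.0

# Propagate a log-uniform uncertainty in K (1 to 10 m/d) through the model.
samples = sorted(travel_time_years(10 ** random.uniform(0.0, 1.0))
                 for _ in range(5000))
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
# (p05, p95) brackets the predicted travel time given the input uncertainty.
```

Reporting such an interval, rather than a single travel time, is what gives decision makers a handle on prediction uncertainty.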

  13. Optical and control modeling for adaptive beam-combining experiments

    SciTech Connect

    Gruetzner, J.K.; Tucker, S.D.; Neal, D.R.; Bentley, A.E.; Simmons-Potter, K.

    1995-08-01

    The development of modeling algorithms for adaptive optics systems is important for evaluating both performance and design parameters prior to system construction. Two of the most critical subsystems to be modeled are the binary optic design and the adaptive control system. Since these two are intimately related, it is beneficial to model them simultaneously. Optic modeling techniques have some significant limitations. Diffraction effects directly limit the utility of geometrical ray-tracing models, and transform techniques such as the fast Fourier transform can be both cumbersome and memory intensive. The authors have developed a hybrid system incorporating elements of both ray-tracing and Fourier transform techniques. In this paper they present an analytical model of wavefront propagation through a binary optic lens system developed and implemented at Sandia. This model is unique in that it solves the transfer function for each portion of a diffractive optic analytically. The overall performance is obtained by a linear superposition of each result. The model has been successfully used in the design of a wide range of binary optics, including an adaptive optic for a beam-combining system consisting of an array of rectangular mirrors, each controllable in tip/tilt and piston. Wavefront sensing and the control models for a beam-combining system have been integrated and used to predict overall system performance. Applicability of the model for design purposes is demonstrated with several lens designs through a comparison of model predictions with actual adaptive optics results.

  14. Fresh Water Content Variability in the Arctic Ocean

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa; Proshutinsky, Andrey

    2003-01-01

    Arctic Ocean model simulations have revealed that the Arctic Ocean has a basin-wide oscillation with cyclonic and anticyclonic circulation anomalies (Arctic Ocean Oscillation; AOO) which has a prominent decadal variability. This study explores how the simulated AOO affects the Arctic Ocean stratification and its relationship to the sea ice cover variations. The simulation uses the Princeton Ocean Model coupled to sea ice. The surface forcing is based on NCEP-NCAR Reanalysis and its climatology, of which the latter is used to force the model spin-up phase. Our focus is to investigate the competition between ocean dynamics and ice formation/melt on the Arctic basin-wide fresh water balance. We find that changes in the Atlantic water inflow can explain almost all of the simulated fresh water anomalies in the main Arctic basin. The Atlantic water inflow anomalies are an essential part of AOO, which is the wind-driven barotropic response to the Arctic Oscillation (AO). The baroclinic response to AO, such as Ekman pumping in the Beaufort Gyre, and ice melt/freeze anomalies in response to AO are less significant considering the whole Arctic fresh water balance.

  15. Longitudinal Mode Aeroengine Combustion Instability: Model and Experiment

    NASA Technical Reports Server (NTRS)

    Cohen, J. M.; Hibshman, J. R.; Proscia, W.; Rosfjord, T. J.; Wake, B. E.; McVey, J. B.; Lovett, J.; Ondas, M.; DeLaat, J.; Breisacher, K.

    2001-01-01

    Combustion instabilities in gas turbine engines are most frequently encountered during the late phases of engine development, at which point they are difficult and expensive to fix. The ability to replicate an engine-traceable combustion instability in a laboratory-scale experiment offers the opportunity to economically diagnose the problem more completely (to determine the root cause), and to investigate solutions to the problem, such as active control. The development and validation of active combustion instability control requires that the causal dynamic processes be reproduced in experimental test facilities which can be used as a test bed for control system evaluation. This paper discusses the process through which a laboratory-scale experiment can be designed to replicate an instability observed in a developmental engine. The scaling process used physically-based analyses to preserve the relevant geometric, acoustic, and thermo-fluid features, ensuring that results achieved in the single-nozzle experiment will be scalable to the engine.

  16. Subsurface imaging, TAIGER experiments and tectonic models of Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Francis T.; Kuo-Chen, H.; McIntosh, K. D.

    2014-08-01

    The seismicity, deformation rates and associated erosion in the Taiwan region clearly demonstrate that plate tectonic and orogenic activities are at a high level. Major geologic units can be neatly placed in the plate tectonic context, albeit critical mapping in specific areas is still needed, but the key processes involved in the building of the island remain under discussion. Of the two plates in the vicinity of Taiwan, the Philippine Sea Plate (PSP) is oceanic in its origin while the Eurasian Plate (EUP) is comprised partly of the Asian continental lithosphere and partly of the transitional lithosphere of the South China Sea basin. It is unanimously agreed that the collision of PSP and EU is the cause of the Taiwan orogeny, but several models of the underlying geological processes have been proposed, each with its own evolutionary history and implied subsurface tectonics. TAIGER (TAiwan Integrated GEodynamics Research) crustal- and mantle-imaging experiments recently made possible a new round of testing and elucidation. The new seismic tomography resolved structures under and offshore of Taiwan to a depth of about 200 km. In the upper mantle, the steeply east-dipping high velocity anomalies from southern to central Taiwan are clear, but only the extreme southern part is associated with seismicity; toward the north the seismicity disappears. The crustal root under the Central Range is strongly asymmetrical; using 7.5 km/s as a guide, the steep west-dipping face on the east stands in sharp contrast to a gradual east-dipping face on the west. A smaller root exists under the Coastal Range or slightly to the east of it. Between these two roots lies a well delineated high velocity rise spanning the length from Hualien to Taitung. The 3-D variations in crustal and mantle structures parallel to the trend of the island are closely correlated with the plate tectonic framework of Taiwan. The crust is thickest in the central Taiwan collision zone, and although it thins

  17. Vacuum vessel eddy current modeling for TFTR adiabatic compression experiments

    SciTech Connect

    DeLucia, J.; Bell, M.; Wong, K.L.

    1985-07-01

    A relatively simple current filament model of the TFTR vacuum vessel is described. It is used to estimate the three-dimensional structure of magnetic field perturbations in the vicinity of the plasma that arise from vacuum vessel eddy currents induced during adiabatic compression. Eddy currents are calculated self-consistently with the plasma motion. The Shafranov formula and adiabatic scaling laws are used to model the plasma. Although the specific application is to TFTR, the present model is of general applicability.
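The essence of a current filament model is that each conductor is replaced by straight segments whose fields follow the Biot-Savart law; a minimal sketch (the loop geometry and current are illustrative, not TFTR values):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def segment_field(a, b, obs, current):
    """Biot-Savart field (tesla) at obs from a straight filament a -> b."""
    ra, rb = obs - a, obs - b
    na, nb = np.linalg.norm(ra), np.linalg.norm(rb)
    denom = na * nb * (na * nb + ra.dot(rb))
    return MU0 * current / (4 * np.pi) * (na + nb) / denom * np.cross(ra, rb)

# Square filament loop (side 2 m) carrying 1 kA; field at the loop centre.
corners = [np.array(c, dtype=float) for c in
           [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]]
current = 1000.0
B = sum(segment_field(corners[i], corners[(i + 1) % 4], np.zeros(3), current)
        for i in range(4))
# B[2] matches the analytic square-loop result 2*sqrt(2)*MU0*I/(pi*L).
```

Summing many such segments over a discretized vessel, with segment currents evolved in time, gives the perturbation field in the plasma region.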

  18. Establishing the Global Fresh Water Sensor Web

    NASA Technical Reports Server (NTRS)

    Hildebrand, Peter H.

    2005-01-01

    This paper presents an approach to measuring the major components of the water cycle from space using the concept of a sensor web of satellites that are linked to a data assimilation system. This topic is of increasing importance, due to the need for fresh water to support the growing human population, coupled with climate variability and change. The net effect is that water is an increasingly valuable commodity. The distribution of fresh water is highly uneven over the Earth, with both strong latitudinal distributions due to the atmospheric general circulation, and even larger variability due to landforms and the interaction of land with global weather systems. The annual global fresh water budget is largely a balance between evaporation, atmospheric transport, precipitation, and runoff. Although the available volume of fresh water on land is small, the short residence time of water in these fresh water reservoirs causes the flux of fresh water, through evaporation, atmospheric transport, precipitation, and runoff, to be large. With a total atmospheric water store of approximately 13 × 10^12 m^3 and an annual flux of approximately 460 × 10^12 m^3/yr, the mean atmospheric residence time of water is approximately 10 days. River residence times are similar, biological residence times are approximately 1 week, soil moisture is approximately 2 months, and lakes and aquifers are highly variable, extending from weeks to years. The hypothesized potential for redistribution and acceleration of the global hydrological cycle is therefore of concern. This hypothesized speed-up, thought to be associated with global warming, adds to the pressure placed upon water resources by the burgeoning human population, the variability of weather and climate, and concerns about anthropogenic impacts on global fresh water availability.
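The residence-time figure quoted above is simply storage divided by flux; checking the arithmetic:

```python
# Mean residence time = storage / flux (figures from the abstract above).
store_m3 = 13e12         # total atmospheric water store, m^3
flux_m3_per_yr = 460e12  # annual flux through the atmosphere, m^3/yr

residence_time_days = store_m3 / flux_m3_per_yr * 365.0
# ~10.3 days, consistent with the "approximately 10 days" in the text.
```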

  19. A Model for an Introductory Undergraduate Research Experience

    ERIC Educational Resources Information Center

    Canaria, Jeffrey A.; Schoffstall, Allen M.; Weiss, David J.; Henry, Renee M.; Braun-Sand, Sonja B.

    2012-01-01

    An introductory, multidisciplinary lecture-laboratory course linked with a summer research experience has been established to provide undergraduate biology and chemistry majors with the skills needed to be successful in the research laboratory. This three-credit hour course was focused on laboratory skills and was designed to reinforce and develop…

  20. Composing a model of outer space through virtual experiences

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2015-03-01

    This paper frames issues of trans-scalar perception in visualization, reflecting on the limits of the human senses, particularly those which are related to space, and describe planetarium shows, presentations, and exhibit experiences of spatial immersion and interaction in real time.

  1. The Chinese Experience: From Yellow Peril to Model Minority

    ERIC Educational Resources Information Center

    Wong, Legan

    1976-01-01

    Argues that for too long the experiences of the Chinese population in America have been either shrouded in misconception or totally ignored, and that this country must recognize and deal with the issues affecting this community. Learning about Chinese Americans will allow us to reexamine governmental policies towards racial and ethnic groups and…

  2. Applying the Job Characteristics Model to the College Education Experience

    ERIC Educational Resources Information Center

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  3. Experiments and Modeling of Evaporating/Condensing Menisci

    NASA Technical Reports Server (NTRS)

    Plawsky, Joel; Wayner, Peter C., Jr.

    2013-01-01

    Discusses the Constrained Vapor Bubble (CVB) experiment, which aims to achieve a better understanding of the physics of evaporation and condensation, and of how they affect cooling processes in microgravity, using a remotely controlled microscope and a small cooling device.

  4. Simulation of salt water-fresh water interface motion

    SciTech Connect

    Ferrer Polo, J.; Ramos, F.J.

    1983-02-01

    A mathematical model is presented which describes salt water-fresh water interface motion with a sharp interface, assuming the validity of the Dupuit approximation. This model is used as a base to derive a numerical model (finite-difference method) which is unconditionally convergent and stable. A method for solving the equations is selected, together with a convergence-accelerating procedure. The treatment of the boundary conditions at the interface is discussed, and a general and automatic solution for that problem is presented. Several tests with analytic solutions have been performed with good results. 13 references.
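For the sharp-interface, Dupuit setting described above, the classical steady-state result is the Ghyben-Herzberg relation; a small sketch with typical densities (illustrative values, not taken from the paper):

```python
RHO_FRESH = 1000.0  # fresh water density, kg/m^3
RHO_SALT = 1025.0   # sea water density, kg/m^3

def interface_depth(head_m):
    """Ghyben-Herzberg relation: depth (m below sea level) of the salt
    water-fresh water interface for a fresh water head head_m above
    sea level."""
    return head_m * RHO_FRESH / (RHO_SALT - RHO_FRESH)

# With these densities, each metre of fresh water head depresses the
# interface by about 40 m.
```

A transient finite-difference model of the kind described in the abstract tracks how the interface relaxes toward this equilibrium position.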

  5. Working Towards Explicit Modelling: Experiences of a New Teacher Educator

    ERIC Educational Resources Information Center

    White, Elizabeth

    2011-01-01

    As a new teacher educator of beginner teachers on the Graduate Teacher Programme in a large School of Education in a UK university, I have reflected on how I have been able to develop the effectiveness of modelling good professional practice to student-teachers. In this paper I will present ways in which I have made modelling more explicit, how…

  6. Using the Cultural Models Approach in Studying the Multicultural Experience.

    ERIC Educational Resources Information Center

    Koiva, Enn O., Ed.

    The paper describes how social studies classroom teachers can use the cultural models approach to help high school students understand multicultural societies. The multicultural models approach is based on identification of values which are common to all cultures and on the interchangeability and transferability of these values (universal…

  7. GALEN's model of parts and wholes: experience and comparisons.

    PubMed Central

    Rogers, J.; Rector, A.

    2000-01-01

    Part-whole relations play a critical role in the OpenGALEN Common Reference Model. We describe how particular characteristics of the underlying formalism have influenced GALEN's view on partonomy, and in more detail discuss how specific modelling issues have driven development of an extended set of partitive semantic links. PMID:11079977

  8. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
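The core machinery of such a calibration is weighted nonlinear least squares; a minimal Gauss-Newton sketch on a toy exponential model (illustrative only, not the authors' code or test case):

```python
import numpy as np

def model(params, t):
    """Toy two-parameter model (an exponential decay), standing in for a
    ground water simulation."""
    a, k = params
    return a * np.exp(-k * t)

def gauss_newton(t, obs, weights, start, iters=50):
    """Weighted Gauss-Newton; weights ~ 1 / expected error variance,
    echoing the paper's emphasis on error-based weighting."""
    p = np.asarray(start, dtype=float)
    w = np.sqrt(weights)
    for _ in range(iters):
        a, k = p
        resid = obs - model(p, t)
        # Jacobian of the model with respect to (a, k).
        J = np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])
        step, *_ = np.linalg.lstsq(w[:, None] * J, w * resid, rcond=None)
        p = p + step
    return p

t = np.linspace(0.0, 5.0, 30)
true_params = np.array([10.0, 0.7])
obs = model(true_params, t)          # noise-free synthetic observations
est = gauss_newton(t, obs, np.ones_like(t), start=[8.0, 0.5])
# est converges to the true parameters for this synthetic case.
```

In a real calibration, the residuals come from head and flow observations, and the regression's diagnostic statistics (not shown here) are what reveal which data constrain which parameters.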

  9. Experiments and modeling of flow processes in freshwater lenses in layered island aquifers: Analysis of age stratification, travel times and interface propagation

    NASA Astrophysics Data System (ADS)

    Stoeckl, Leonard; Houben, Georg J.; Dose, Eduardo J.

    2015-10-01

    The management of fresh groundwater resources and the delineation of protection zones on islands require a thorough understanding of flow processes within freshwater lenses. Previous studies of freshwater lenses have mainly focused on interface geometries, disregarding flow patterns within the lens. In this study we use physical, analytical and numerical modeling to investigate the influence of vertically and horizontally layered aquifers and variations in recharge on the age stratification, travel times and transient development of freshwater lenses. Four generalized settings were examined in detail on a laboratory-scale. The experiments show significant deviations from homogeneous models. The case of a high permeability layer underlying a layer of lower permeability shows the strongest deviations in the processes investigated here. Water in the more permeable lower layer overtakes water flowing only through the upper layer, causing a bimodal distribution of travel times and a vertical repeating of the age stratification near the coast. The effects of heterogeneities revealed by the physical model experiments are expected to also occur on real islands and have thus to be considered when developing conceptual models for the management of such freshwater lenses.

  10. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  11. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  12. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    NASA Astrophysics Data System (ADS)

    Desai, Priyanka Subhash

    Rheological properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set, prescribing parameter values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone.
Third, we compared predictions of two versions of the tube model, the Hierarchical model and the BoB model, against linear viscoelastic data of blends of 1,4-PBd

  13. Looking beyond Lewis Structures: A General Chemistry Molecular Modeling Experiment Focusing on Physical Properties and Geometry

    ERIC Educational Resources Information Center

    Linenberger, Kimberly J.; Cole, Renee S.; Sarkar, Somnath

    2011-01-01

    We present a guided-inquiry experiment using Spartan Student Version, ready to be adapted and implemented into a general chemistry laboratory course. The experiment provides students an experience with Spartan Molecular Modeling software while discovering the relationships between the structure and properties of molecules. Topics discussed within…

  14. Fresh groundwater resources in a large sand replenishment

    NASA Astrophysics Data System (ADS)

    Huizer, Sebastian; Oude Essink, Gualbert H. P.; Bierkens, Marc F. P.

    2016-08-01

    The anticipation of sea-level rise and increases in extreme weather conditions has led to the initiation of an innovative coastal management project called the Sand Engine. In this pilot project a large volume of sand (21.5 million m3) - also called sand replenishment or nourishment - was placed on the Dutch coast. The intention is that the sand is redistributed by wind, current, and tide, reinforcing local coastal defence structures and leading to a unique, dynamic environment. In this study we investigated the potential effect of the long-term morphological evolution of the large sand replenishment and climate change on fresh groundwater resources. The potential effects on the local groundwater system were quantified with a calibrated three-dimensional (3-D) groundwater model, in which both variable-density groundwater flow and salt transport were simulated. Model simulations showed that the long-term morphological evolution of the Sand Engine results in a substantial growth of fresh groundwater resources, in all adopted climate change scenarios. Thus, the application of a local sand replenishment could provide coastal areas with the opportunity to combine coastal protection with an increase of the local fresh groundwater availability.

  15. CELSS experiment model and design concept of gas recycle system

    NASA Technical Reports Server (NTRS)

    Nitta, K.; Oguchi, M.; Kanda, S.

    1986-01-01

    In order to prolong the duration of manned missions around the Earth and to extend human presence beyond it, for example to a lunar base or a manned Mars mission, the controlled ecological life support system (CELSS) becomes an essential element of future technology, to be developed through utilization of the space station. Preliminary system engineering and integration efforts regarding CELSS have been carried out by the Japanese CELSS concept study group to clarify the feasibility of hardware development for space station experiments and to define the time-phased mission sets after FY 1992. The results of these studies are briefly summarized, and the design and utilization methods of a Gas Recycle System for CELSS experiments are discussed.

  16. Medium term hurricane catastrophe models: a validation experiment

    NASA Astrophysics Data System (ADS)

    Bonazzi, Alessandro; Turner, Jessica; Dobbin, Alison; Wilson, Paul; Mitas, Christos; Bellone, Enrica

    2013-04-01

    Climate variability is a major source of uncertainty for the insurance industry underwriting hurricane risk. Catastrophe models provide their users with a stochastic set of events that expands the scope of the historical catalogue by including synthetic events that are likely to happen in a defined time-frame. The use of these catastrophe models is widespread in the insurance industry but it is only in recent years that climate variability has been explicitly accounted for. In the insurance parlance "medium term catastrophe model" refers to products that provide an adjusted view of risk that is meant to represent hurricane activity on a 1 to 5 year horizon, as opposed to long term models that integrate across the climate variability of the longest available time series of observations. In this presentation we discuss how a simple reinsurance program can be used to assess the value of medium term catastrophe models. We elaborate on similar concepts as discussed in "Potential Economic Value of Seasonal Hurricane Forecasts" by Emanuel et al. (2012, WCAS) and provide an example based on 24 years of historical data of the Chicago Mercantile Hurricane Index (CHI), an insured loss proxy. Profit and loss volatility of a hypothetical primary insurer are used to score medium term models versus their long term counterpart. Results show that medium term catastrophe models could help a hypothetical primary insurer to improve their financial resiliency to varying climate conditions.

  17. Preservation technologies for fresh meat - a review.

    PubMed

    Zhou, G H; Xu, X L; Liu, Y

    2010-09-01

    Fresh meat is a highly perishable product due to its biological composition. Many interrelated factors influence the shelf life and freshness of meat such as holding temperature, atmospheric oxygen (O(2)), endogenous enzymes, moisture, light and most importantly, micro-organisms. With the increased demand for high quality, convenience, safety, fresh appearance and an extended shelf life in fresh meat products, alternative non-thermal preservation technologies such as high hydrostatic pressure, superchilling, natural biopreservatives and active packaging have been proposed and investigated. Whilst some of these technologies are efficient at inactivating the micro-organisms most commonly related to food-borne diseases, they are not effective against spores. To increase their efficacy against vegetative cells, a combination of several preservation technologies under the so-called hurdle concept has also been investigated. The objective of this review is to describe current methods and developing technologies for preserving fresh meat. The benefits of some new technologies and their industrial limitations are presented and discussed.

  18. Prediction of tomato freshness using infrared thermal imaging and transient step heating

    NASA Astrophysics Data System (ADS)

    Xie, Jing; Hsieh, Sheng-Jen; Tan, Zuojun; Wang, Hongjin; Zhang, Jian

    2016-05-01

    Tomatoes are the world's 8th most valuable agricultural product, valued at $58 billion annually. Nondestructive testing and inspection of tomatoes is challenging and multi-faceted. Optical imaging is used for quality grading and ripeness. Spectral and hyperspectral imaging are used to detect surface defects and cuticle cracks. Infrared thermography has been used to distinguish between different stages of maturity. However, determining the freshness of tomatoes is still an open problem. For this research, infrared thermography was used for freshness prediction. Infrared images were captured at a rate of 1 frame per second during heating (0 to 40 seconds) and cooling (0 to 160 seconds). The absolute temperatures of the acquired images were plotted. Regions with higher temperature differences between fresh and less fresh (rotten within three days) tomatoes of approximately uniform size and shape were used as the input nodes in a three-layer artificial neural network (ANN) model. Two-thirds of the data were used for training and one-third for testing. Results suggest that by using infrared imaging data as input to an ANN model, tomato freshness can be predicted with 90% accuracy. T-tests and F-tests were conducted based on absolute temperature over time. The results suggest a statistically significant mean temperature difference between fresh and less fresh tomatoes (α = 0.05), but no significant difference in temperature variation, which points to a difference in water concentration.
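
    The classification setup described (region temperature differences feeding a three-layer ANN, with a two-thirds/one-third split) can be sketched with synthetic stand-in data. The feature values, network size, and class separation below are assumptions for illustration, not the authors' data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in features: temperature differences of selected image
# regions; 1 = fresh, 0 = less fresh. Values and network size are assumed.
rng = np.random.default_rng(1)
n, d = 200, 5
fresh = rng.normal(1.0, 0.3, size=(n, d))        # higher-contrast regions
less_fresh = rng.normal(0.0, 0.3, size=(n, d))
X = np.vstack([fresh, less_fresh])
y = np.hstack([np.ones(n), np.zeros(n)])

idx = rng.permutation(len(y))
split = 2 * len(y) // 3                          # two-thirds train, rest test
train, test = idx[:split], idx[split:]

# Three-layer network: input layer, one hidden layer, output layer.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X[train], y[train])
print(clf.score(X[test], y[test]))
```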

  19. An Acoustic Demonstration Model for CW and Pulsed Spectroscopy Experiments

    NASA Astrophysics Data System (ADS)

    Starck, Torben; Mäder, Heinrich; Trueman, Trevor; Jäger, Wolfgang

    2009-06-01

    High school and undergraduate students often have difficulty when new concepts are introduced in their physics or chemistry lectures. Lecture demonstrations and references to more familiar analogues can be of great help to students in such situations. We have developed an experimental setup to demonstrate the principles of cw absorption and pulsed excitation-emission spectroscopies using acoustic analogues. Our radiation source is a speaker and the detector is a microphone, both controlled by a computer sound card. The acoustic setup is housed in a plexiglas box, which serves as a resonator. It turns out that beer glasses are suitable samples; this also helps to keep the students interested! The instrument is controlled by a LabView program. In a cw experiment, the sound frequency is swept through a certain frequency range and the microphone response is recorded simultaneously as a function of frequency. A background signal without the sample is recorded, and background subtraction yields the beer glass spectrum. In a pulsed experiment, a short sound pulse is generated and the microphone records the resulting emission signal of the beer glass. A Fourier transformation of the time-domain signal then gives the spectrum. We will discuss the experimental setup and show videos of the experiments.
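
    The pulsed half of the demonstration (Fourier transforming the recorded emission to obtain the spectrum) can be mimicked numerically. The resonance frequency and decay rate below are invented for illustration:

```python
import numpy as np

# Simulated microphone signal after a short pulse: a decaying sinusoid
# at the resonance of the "sample" (an assumed glass resonance of 740 Hz).
fs = 44100                       # sound-card sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
f0 = 740.0                       # assumed resonance frequency [Hz]
signal = np.exp(-5.0 * t) * np.sin(2 * np.pi * f0 * t)

# Fourier transform of the time-domain signal yields the spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # the spectral peak lies at the resonance frequency
```

    The decay rate of the time-domain signal sets the linewidth of the peak, which is the same time-frequency relationship the acoustic demonstration is meant to convey.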

  20. Experiments in Error Propagation within Hierarchal Combat Models

    DTIC Science & Technology

    2015-09-01

    the mean can affect campaign model results. A mission model of a one-on-one submarine battle is developed to determine the mean time to kill (MTTK) for the belligerents. Blue MTTK denotes the average amount of time a Blue submarine requires to kill a Red submarine, and Red MTTK denotes the average amount of time required for a Red submarine to kill a Blue submarine.

  1. Consumer's Fresh Produce Food Safety Practices: Outcomes of a Fresh Produce Safety Education Program

    ERIC Educational Resources Information Center

    Scott, Amanda R.; Pope, Paul E.; Thompson, Britta M.

    2009-01-01

    The Centers for Disease Control and Prevention estimate that there are 76 million cases of foodborne disease annually. Foodborne disease is usually associated with beef, poultry, and seafood. However, there is an increasing number of foodborne disease cases related to fresh produce. Consumers may not associate fresh produce with foodborne disease…

  2. Decontamination of fresh and fresh-cut fruits and vegetables with cold plasma technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Contamination of fresh and fresh-cut fruits and vegetables by foodborne pathogens has prompted research into novel interventions. Cold plasma is a nonthermal food processing technology which uses energetic, reactive gases to inactivate contaminating microbes. This flexible sanitizing method uses ele...

  3. A fresh fruit and vegetable program improves high school students' consumption of fresh produce

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low fruit and vegetable intake may be associated with overweight. The United States Department of Agriculture implemented the Fresh Fruit and Vegetable Program in 2006-2007. One Houston-area high school was selected and received funding to provide baskets of fresh fruits and vegetables daily for eac...

  4. A portable device for rapid nondestructive detection of fresh meat quality

    NASA Astrophysics Data System (ADS)

    Lin, Wan; Peng, Yankun

    2014-05-01

    Quality attributes of fresh meat influence nutritional value and consumers' purchasing decisions. To meet the inspection department's demand for a portable device, a rapid, nondestructive detection device for fresh meat quality, based on an ARM (Advanced RISC Machines) processor and VIS/NIR technology, was designed. Its working principle, hardware composition, software system and functional tests are described. The hardware consists of an ARM processing unit, light source unit, detection probe unit, spectral data acquisition unit, LCD (Liquid Crystal Display) touch screen display unit, power unit and cooling unit. A Linux operating system and a quality-parameter acquisition and processing application were developed. The system integrates spectral signal collection, storage, display and processing in a single unit weighing 3.5 kg. Forty beef samples were used in experiments to validate its stability and reliability. The results indicated that a prediction model developed with PLSR, using SNV pre-processing, performed well, with validation-set correlation coefficients and root mean square errors of 0.90 and 1.56 for L*, 0.95 and 1.74 for a*, 0.94 and 0.59 for b*, 0.88 and 0.13 for pH, 0.79 and 12.46 for tenderness, and 0.89 and 0.91 for water content, respectively. The experimental results show that this device can be a useful tool for detecting meat quality.

  5. "Comments on Slavin": Through the Looking Glass--Experiments, Quasi-Experiments, and the Medical Model

    ERIC Educational Resources Information Center

    Sloane, Finbarr

    2008-01-01

    Slavin (2008) has called for changing the criteria used for the inclusion of basic research in national research synthesis clearinghouses. The author of this article examines a number of the assumptions made by Slavin, provides critique with alternatives, and asks what it means to fully implement the medical model in educational settings.…

  6. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.

  7. Edible coatings for fresh-cut fruits.

    PubMed

    Olivas, G I; Barbosa-Cánovas, G V

    2005-01-01

    The production of fresh-cut fruits is increasingly becoming an important task as consumers grow more aware of the importance of healthy eating habits and have less time for food preparation. A fresh-cut fruit is a fruit that has been physically altered from its original state (trimmed, peeled, washed and/or cut) but remains in a fresh state. Unfortunately, because fruits are living tissue, any wounding leads to enzymatic browning, texture decay, microbial contamination, and undesirable volatile production, greatly reducing shelf life. Edible coatings can help preserve minimally processed fruits by providing a partial barrier to moisture, oxygen and carbon dioxide, improving mechanical handling properties, carrying additives, preventing the loss of volatiles, and even contributing to the production of aroma volatiles.

  8. Particular applications of food irradiation: fresh produce

    NASA Astrophysics Data System (ADS)

    Prakash, Anuradha

    2016-12-01

    On fresh fruits and vegetables, irradiation at low and medium dose levels can effectively reduce microbial counts to enhance safety, inhibit sprouting to extend shelf life, and eliminate or sterilize insect pests to facilitate trade between countries. At the dose levels used for these purposes, the impact on quality is negligible. Although regulations in many countries allow the use of irradiation for fresh produce, the technology remains under-utilized, even in light of an increase in produce-related disease outbreaks and the economic benefits of extended shelf life and reduced food waste. Putative concerns about consumer acceptance, particularly for produce labeled as irradiated, have deterred many companies from using irradiation and many retailers from carrying irradiated produce. This section highlights the commercial use of irradiation for fresh produce, other than phytosanitary irradiation, which is covered in supplementary sections.

  9. Wave chaotic experiments and models for complicated wave scattering systems

    NASA Astrophysics Data System (ADS)

    Yeh, Jen-Hao

    Wave scattering in a complicated environment is a common challenge in many engineering fields because the complexity makes exact solutions impractical to find, and the sensitivity to detail in the short-wavelength limit makes a numerical solution relevant only to a specific realization. On the other hand, wave chaos offers a statistical approach to understand the properties of complicated wave systems through the use of random matrix theory (RMT). A bridge between the theory and practical applications is the random coupling model (RCM) which connects the universal features predicted by RMT and the specific details of a real wave scattering system. The RCM gives a complete model for many wave properties and is beneficial for many physical and engineering fields that involve complicated wave scattering systems. One major contribution of this dissertation is that I have utilized three microwave systems to thoroughly test the RCM in complicated wave systems with varied loss, including a cryogenic system with a superconducting microwave cavity for testing the extremely-low-loss case. I have also experimentally tested an extension of the RCM that includes short-orbit corrections. Another novel result is development of a complete model based on the RCM for the fading phenomenon extensively studied in the wireless communication fields. This fading model encompasses the traditional fading models as its high-loss limit case and further predicts the fading statistics in the low-loss limit. This model provides the first physical explanation for the fitting parameters used in fading models. I have also applied the RCM to additional experimental wave properties of a complicated wave system, such as the impedance matrix, the scattering matrix, the variance ratio, and the thermopower. These predictions are significant for nuclear scattering, atomic physics, quantum transport in condensed matter systems, electromagnetics, acoustics, geophysics, etc.

  10. Mathematical modelling of microtumour infiltration based on in vitro experiments.

    PubMed

    Luján, Emmanuel; Guerra, Liliana N; Soba, Alejandro; Visacovsky, Nicolás; Gandía, Daniel; Calvo, Juan C; Suárez, Cecilia

    2016-08-08

    The present mathematical models of microtumours consider, in general, volumetric growth and spherical tumour invasion shapes. Nevertheless in many cases, such as in gliomas, a need for more accurate delineation of tumour infiltration areas in a patient-specific manner has arisen. The objective of this study was to build a mathematical model able to describe in a case-specific way as well as to predict in a probabilistic way the growth and the real invasion pattern of multicellular tumour spheroids (in vitro model of an avascular microtumour) immersed in a collagen matrix. The two-dimensional theoretical model was represented by a reaction-convection-diffusion equation that considers logistic proliferation, volumetric growth, a rim with proliferative cells at the tumour surface and invasion with diffusive and convective components. Population parameter values of the model were extracted from the experimental dataset and a shape function that describes the invasion area was derived from each experimental case by image processing. New possible and aleatory shape functions were generated by data mining and Monte Carlo tools by means of a satellite EGARCH model, which was fed with all the shape functions of the dataset. Then the main model is used in two different ways: to reproduce the growth and invasion of a given experimental tumour in a case-specific manner when fed with the corresponding shape function (descriptive simulations) or to generate new possible tumour cases that respond to the general population pattern when fed with an aleatory-generated shape function (predictive simulations). Both types of simulations are in good agreement with empirical data, as revealed by area quantification and Bland-Altman analysis. This kind of experimental-numerical interaction has wide application potential in designing new strategies able to predict as much as possible the invasive behaviour of a tumour based on its particular characteristics and microenvironment.
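
    The reaction-diffusion core of such models can be illustrated with a stripped-down 1-D sketch. It keeps only logistic proliferation and diffusion, omitting the paper's convection term and case-specific shape function, and all parameter values are illustrative:

```python
import numpy as np

# Explicit finite-difference integration of dc/dt = D d2c/dx2 + r c (1 - c),
# i.e. diffusion plus logistic proliferation, for a tumour-cell density c.
D, r = 0.01, 1.0
dx, dt = 0.1, 0.01
x = np.arange(0.0, 10.0, dx)
c = np.exp(-((x - 5.0) ** 2))            # initial localized cell density

for _ in range(1000):                    # integrate to t = 10
    # Periodic boundaries via np.roll, adequate for a centred pulse.
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = c + dt * (D * lap + r * c * (1.0 - c))

print(c.max())  # the core saturates toward the carrying capacity (1.0)
```

    The travelling front that develops from this equation is the simplest analogue of the invasion patterns the paper quantifies with its shape functions.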

  11. 9 CFR 319.142 - Fresh beef sausage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 319.142 Fresh beef sausage. "Fresh Beef Sausage" is sausage prepared with fresh beef or frozen beef, or both, but not including beef byproducts, and may contain Mechanically Separated (Species)...

  12. Inheritance of fresh-cut fruit quality attributes in Capsicum

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The fresh-cut fruit and vegetable industry has expanded rapidly during the past decade, due to freshness, convenience and the high nutrition that fresh-cut produce offers to consumers. The current report evaluates the inheritance of postharvest attributes that contribute to pepper fresh-cut product...

  13. Modified atmosphere packaging for fresh-cut fruits and vegetables

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The latest development in and different aspects of modified atmosphere packaging for fresh-cut fruits and vegetables are reviewed in the book. This book provides all readers, including fresh-cut academic researchers, fresh-cut R&D personnel, and fresh-cut processing engineers, with unique, essential...

  14. Interim Service ISDN Satellite (ISIS) network model for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.; Hager, E. Paul

    1991-01-01

    The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Network Model for Advanced Satellite Designs and Experiments describes a model suitable for discrete-event simulations. A top-down model design uses the Advanced Communications Technology Satellite (ACTS) as its basis. The ISDN modeling abstractions are added to permit the determination of network design and performance for the NASA Satellite Communications Research (SCAR) Program.

  15. Modeling of Spherical Torus Plasmas for Liquid Lithium Wall Experiments

    SciTech Connect

    R. Kaita; S. Jardin; B. Jones; C. Kessel; R. Majeski; J. Spaleta; R. Woolley; L. Zakharov; B. Nelson; M. Ulrickson

    2002-01-29

    Liquid metal walls have the potential to solve first-wall problems for fusion reactors, such as heat load and erosion of dry walls, neutron damage and activation, and tritium inventory and breeding. In the near term, such walls can serve as the basis for schemes to stabilize magnetohydrodynamic (MHD) modes. Furthermore, the low recycling characteristics of lithium walls can be used for particle control. Liquid lithium experiments have already begun in the Current Drive eXperiment-Upgrade (CDX-U). Plasmas limited with a toroidally localized limiter have been investigated, and experiments with a fully toroidal lithium limiter are in progress. A liquid surface module (LSM) has been proposed for the National Spherical Torus Experiment (NSTX). In this larger ST, plasma currents are in excess of 1 MA and a typical discharge radius is about 68 cm. The primary motivation for the LSM is particle control, and options for mounting it on the horizontal midplane or in the divertor region are under consideration. A key consideration is the magnitude of the eddy currents at the location of a liquid lithium surface. During plasma start up and disruptions, the force due to such currents and the magnetic field can force a conducting liquid off of the surface behind it. The Tokamak Simulation Code (TSC) has been used to estimate the magnitude of this effect. This program is a two dimensional, time dependent, free boundary simulation code that solves the MHD equations for an axisymmetric toroidal plasma. From calculations that match actual ST equilibria, the eddy current densities can be determined at the locations of the liquid lithium. Initial results have shown that the effects could be significant, and ways of explicitly treating toroidally local structures are under investigation.

  16. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
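
    The idea of treating an experiment-wide variable as an extra parameter with a prior, rather than as a constant known without error, can be sketched as follows. The one-line model and all values are hypothetical, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sketch: fit y = a * g * x, where g is an experiment-wide
# constant measured previously as g0 +/- sg. Rather than fixing g = g0,
# we estimate it jointly and penalise departures from the prior.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
a_true, g_true, sy = 2.0, 9.9, 0.05
y = a_true * g_true * x + rng.normal(0.0, sy, size=x.size)
g0, sg = 9.8, 0.1                        # prior value and its uncertainty

def residuals(theta):
    a, g = theta
    data_res = (a * g * x - y) / sy      # weighted data residuals
    prior_res = (g - g0) / sg            # residual against the prior on g
    # The data constrain only the product a * g; the prior term on g
    # resolves that ambiguity and propagates its uncertainty into a.
    return np.append(data_res, prior_res)

fit = least_squares(residuals, x0=[1.0, g0])
print(fit.x)
```

    Augmenting the residual vector with prior terms in this way is one standard route to the maximum likelihood objective the abstract describes, since a Gaussian prior contributes exactly one extra weighted residual per experiment-wide variable.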

  17. Model Experiments with Slot Antenna Arrays for Imaging

    NASA Technical Reports Server (NTRS)

    Johansson, J. F.; Yngvesson, K. S.; Kollberg, E. L.

    1985-01-01

    A prototype imaging system at 31 GHz was developed, which employs a two-dimensional (5x5) array of tapered slot antennas, and integrated detector or mixer elements, in the focal plane of a prime-focus paraboloid reflector, with an f/D=1. The system can be scaled to shorter millimeter waves and submillimeter waves. The array spacing corresponds to a beam spacing of approximately one Rayleigh distance and a two-point resolution experiment showed that two point-sources at the Rayleigh distance are well resolved.

  18. Simmer analysis of prompt burst energetics experiments

    SciTech Connect

    Hitchcock, J.T.

    1982-03-01

    The Prompt Burst Energetics experiments are designed to measure the pressure behavior of fuel and coolant as working fluids during a hypothetical prompt burst disassembly in an LMFBR. The work presented in this report consists of a parametric study of PBE-5S, a fresh oxide fuel experiment, using SIMMER-II. The various pressure sources in the experiment are examined, and the dominant source is identified as noncondensable contaminant gases in the fuel. The important modeling uncertainties and limitations of SIMMER-II as applied to these experiments are discussed.

  19. The fence experiment — a first evaluation of shelter models

    NASA Astrophysics Data System (ADS)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide; Angelou, Nikolas; Mann, Jakob

    2016-09-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direction interval are in better agreement with the observations than those that correspond to the simulation at the center of the direction interval, particularly in the far-wake region, for six vertical levels up to two fence heights. Generally, the CFD results are in better agreement with the observations than those from two engineering-type obstacle models, but the latter two follow the behavior of the observations well in the far-wake region.

  20. Modeling phase behavior for quantifying micro-pervaporation experiments

    NASA Astrophysics Data System (ADS)

    Schindler, M.; Ajdari, A.

    2009-01-01

    We present a theoretical model for the evolution of mixture concentrations in a micro-pervaporation device, similar to those recently presented experimentally. The described device makes use of the pervaporation of water through a thin PDMS membrane to build up a solute concentration profile inside a long microfluidic channel. We simplify the evolution of this profile in binary mixtures to a one-dimensional model which comprises two concentration-dependent coefficients. The model then provides a link between directly accessible experimental observations, such as the widths of dense phases or their growth velocity, and the underlying chemical potentials and phenomenological coefficients. It shall thus be useful for quantifying the thermodynamic and dynamic properties of dilute and dense binary mixtures.

  1. Modeling phase behavior for quantifying micro-pervaporation experiments.

    PubMed

    Schindler, M; Ajdari, A

    2009-01-01

    We present a theoretical model for the evolution of mixture concentrations in a micro-pervaporation device, similar to those recently presented experimentally. The described device makes use of the pervaporation of water through a thin PDMS membrane to build up a solute concentration profile inside a long microfluidic channel. We simplify the evolution of this profile in binary mixtures to a one-dimensional model which comprises two concentration-dependent coefficients. The model then provides a link between directly accessible experimental observations, such as the widths of dense phases or their growth velocity, and the underlying chemical potentials and phenomenological coefficients. It shall thus be useful for quantifying the thermodynamic and dynamic properties of dilute and dense binary mixtures.
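
    The abstract does not reproduce the one-dimensional model itself; purely as an illustration, an equation of the stated type (diffusion with a concentration-dependent coefficient plus pervaporation-driven advection) can be time-stepped as follows. The function names, the explicit scheme, and the constant advection speed are assumptions, not the authors' formulation:

```python
import numpy as np

def step_concentration(phi, D_of_phi, v, dx, dt):
    """One explicit finite-difference step of a 1D equation of the form

        d(phi)/dt = d/dx[ D(phi)*d(phi)/dx ] - v*d(phi)/dx,

    i.e. diffusion with a concentration-dependent coefficient plus
    advection at speed v (taken constant here for simplicity).
    Boundary values are held fixed.
    """
    phi = np.asarray(phi, dtype=float)
    # Diffusive flux at the faces between nodes, D evaluated at the face.
    phi_face = 0.5 * (phi[1:] + phi[:-1])
    flux = -D_of_phi(phi_face) * np.diff(phi) / dx
    new = phi.copy()
    new[1:-1] -= dt / dx * np.diff(flux)
    # First-order upwind advection (v > 0: flow toward increasing x).
    new[1:-1] -= v * dt / dx * (phi[1:-1] - phi[:-2])
    return new
```

    Iterating such a step builds up the solute concentration profile along the channel, which is the quantity the paper links to observable widths and growth velocities of the dense phases.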

  2. Tidal deformation of planets: experience in experimental modeling.

    NASA Astrophysics Data System (ADS)

    Bobryakov, A. P.; Revuzhenko, A. F.; Shemyakin, E. I.

    1992-06-01

    Two types of apparatus are described for laboratory modeling of tidal deformation. Plane deformation occurs in the first, and the model of the body has the shape of an elliptical cylinder; in the second three-dimensional deformation occurs, and the model is spheroidal in shape. In both cases displacements simulating motion of the tidal wave are assigned on the boundary. A global mechanism of directed mass transfer has been discovered. It is connected with transformation of vertical displacements to horizontal ones. The internal particles describe almost closed trajectories in one complete rotation of the tidal wave, but do not return to their original position. Residual displacements accumulate with increasing number of cycles and lead to differential rotation of internal masses. Questions surrounding experimental measurement of energy dissipation and the role of an internal rigid core are investigated. The effect of directed transfer on the physical fields of planets is discussed.

  3. Laboratory Experiments Modelling Sediment Transport by River Plumes

    NASA Astrophysics Data System (ADS)

    Sutherland, Bruce; Gingras, Murray; Knudson, Calla; Steverango, Luke; Surma, Chris

    2016-11-01

    Through lock-release laboratory experiments, the transport of particles by hypopycnal (surface) currents is examined as they flow into a uniform-density and a two-layer ambient fluid. In most cases the tank is tilted so that the current flows over a slope representing an idealization of a sediment-bearing river flowing into the ocean and passing over the continental shelf. When passing into a uniform-density ambient, the hypopycnal current slows and stops as particles rain out, carrying some of the light interstitial fluid with them. Rather than settling on the bottom, in many cases the descending particles accumulate to form a hyperpycnal (turbidity) current that flows downslope. This current then slows and stops as particles both rain out to the bottom and also rise again to the surface, carried upward by the light interstitial fluid. For a hypopycnal current flowing into a two-layer fluid, the current slows as particles rain out and accumulate at the interface of the two-layer ambient. Eventually these particles penetrate through the interface and settle to the bottom with no apparent formation of a hyperpycnal current. Analyses are performed to characterize the speed of the currents and stopping distances as they depend upon experiment parameters. Natural Sciences and Engineering Research Council.
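
    The front speed of lock-release currents of this kind is commonly characterized by the standard scaling u = Fr*sqrt(g'*h), with reduced gravity g' = g*Δρ/ρ. A sketch of that scaling (the Froude number value and the example densities below are illustrative assumptions, not values from these experiments):

```python
def front_speed(rho_current, rho_ambient, depth, froude=0.5):
    """Characteristic gravity-current front speed u = Fr*sqrt(g'*h),
    with reduced gravity g' = g*|rho_ambient - rho_current|/rho_ambient.
    """
    g = 9.81  # m/s^2
    g_prime = g * abs(rho_ambient - rho_current) / rho_ambient
    return froude * (g_prime * depth) ** 0.5

# Example: a 1010 kg/m^3 current entering 1025 kg/m^3 ambient fluid
# at 0.1 m depth gives a front speed of a few cm/s.
u = front_speed(1010.0, 1025.0, 0.1)
```

    The same scaling explains why the currents slow as particles rain out: sedimentation reduces the density contrast and hence g'.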

  4. Thermal conductivity of cast iron: Models and analysis of experiments

    NASA Astrophysics Data System (ADS)

    Helsing, Johan; Grimvall, Göran

    1991-08-01

    Cast iron can be viewed as a composite material. We use effective medium and other theories for the overall conductivity of a composite, expressed in the conductivities, the volume fractions, and the morphology of the constituent phases, to model the thermal conductivity of grey and white cast iron and some iron alloys. The electronic and the vibrational contributions to the conductivities of the microconstituents (alloyed ferrite, cementite, pearlite, graphite) are discussed, with consideration of the various scattering mechanisms. Our model gives a good account of measured thermal conductivities at 300 K. It is easily extended to describe the thermal conductivity of other materials characterized by having several constituent phases of varying chemical composition.
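
    Effective-medium estimates of the kind described above can be illustrated with the symmetric two-phase Bruggeman equation, f1*(k1 - k)/(k1 + 2k) + f2*(k2 - k)/(k2 + 2k) = 0, which has a closed-form positive root. This is one standard mixture rule; the paper also uses other effective-medium theories, so treat this as a generic sketch:

```python
import math

def bruggeman_conductivity(k1, k2, f1):
    """Positive root of the symmetric two-phase Bruggeman equation

        f1*(k1 - k)/(k1 + 2k) + f2*(k2 - k)/(k2 + 2k) = 0,

    which reduces to the quadratic 2k^2 - b*k - k1*k2 = 0 with
    b = (3*f1 - 1)*k1 + (3*f2 - 1)*k2.
    """
    f2 = 1.0 - f1
    b = (3.0 * f1 - 1.0) * k1 + (3.0 * f2 - 1.0) * k2
    return (b + math.sqrt(b * b + 8.0 * k1 * k2)) / 4.0
```

    The result always lies between the conductivities of the two phases and reduces to the pure-phase value at f1 = 0 or 1, which makes it a convenient baseline for composites such as grey cast iron (graphite in a pearlitic matrix).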

  5. Dose-response model for teratological experiments involving quantal responses

    SciTech Connect

    Rai, K.; Van Ryzin, J.

    1985-03-01

    This paper introduces a dose-response model for teratological quantal response data where the probability of response for an offspring from a female at a given dose varies with the litter size. The maximum likelihood estimators for the parameters of the model are obtained as the solution of a nonlinear system of equations via an iterative algorithm. Two methods of low-dose extrapolation are presented, one based on the litter size distribution and the other a conservative method. The resulting procedures are then applied to a teratological data set from the literature.
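
    The abstract does not give the model's functional form. As a hypothetical illustration of a litter-size-dependent quantal model, one can combine a one-hit per-fetus dose-response with independence within the litter; everything below (the one-hit form, the independence assumption, the function names) is an assumption for illustration, not the Rai and Van Ryzin model itself:

```python
import math

def litter_response_prob(dose, litter_size, a, b):
    """P(at least one affected offspring in a litter), assuming a
    one-hit per-fetus probability p(d) = 1 - exp(-(a + b*d)) and
    independent responses within the litter (illustrative form only).
    """
    p_fetus = 1.0 - math.exp(-(a + b * dose))
    return 1.0 - (1.0 - p_fetus) ** litter_size

def log_likelihood(data, a, b):
    """Quantal log-likelihood; data is a list of
    (dose, litter_size, litter_responded) tuples.
    """
    ll = 0.0
    for dose, size, responded in data:
        p = litter_response_prob(dose, size, a, b)
        ll += math.log(p if responded else 1.0 - p)
    return ll
```

    Maximizing such a log-likelihood over (a, b) has no closed form, which is why estimators of this type are computed iteratively, as the abstract notes.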

  6. Hot water, fresh beer, and salt

    NASA Astrophysics Data System (ADS)

    Crawford, Frank S.

    1990-11-01

    In the ``hot chocolate effect'' the best musical scales (those with the finest tone quality, largest range, and best tempo) are obtained by adding salt to a glass of hot water supersaturated with air. Good scales can also be obtained by adding salt to a glass of freshly opened beer (supersaturated with CO2) provided you first (a) get rid of much of the excess CO2 so as to produce smaller, hence slower, rising bubbles, and (b) get rid of the head of foam, which damps the standing wave and ruins the tone quality. Finally the old question, ``Do ionizing particles produce bubbles in fresh beer?'' is answered experimentally.

  7. Modelling Drug Administration Regimes for Asthma: A Romanian Experience

    ERIC Educational Resources Information Center

    Andras, Szilard; Szilagyi, Judit

    2010-01-01

    In this article, we present a modelling activity, which was a part of the project DQME II (Developing Quality in Mathematics Education, for more details see http://www.dqime.uni-dortmund.de) and some general observations regarding the maladjustments and rational errors arising in such type of activities.

  8. Model-Driven Design: Systematically Building Integrated Blended Learning Experiences

    ERIC Educational Resources Information Center

    Laster, Stephen

    2010-01-01

    Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…

  9. Early Childhood Educators' Experience of an Alternative Physical Education Model

    ERIC Educational Resources Information Center

    Tsangaridou, Niki; Genethliou, Nicholas

    2016-01-01

    Alternative instructional and curricular models are regarded as more comprehensive and suitable approaches to providing quality physical education (Kulinna 2008; Lund and Tannehill 2010; McKenzie and Kahan 2008; Metzler 2011; Quay and Peters 2008). The purpose of this study was to describe the impact of the Early Steps Physical Education…

  10. Social Modeling Influences on Pain Experience and Behaviour.

    ERIC Educational Resources Information Center

    Craig, Kenneth D.

    The impact of exposure to social models displaying variably tolerant pain behaviour on observers' expressions of pain is examined. Findings indicate substantial effects on verbal reports of pain, avoidance behaviour, psychophysiological indices, power function parameters, and sensory decision theory indices. Discussion centers on how social models…

  11. Redesigning Supervision: Alternative Models for Student Teaching and Field Experiences

    ERIC Educational Resources Information Center

    Rodgers, Adrian; Jenkins, Deborah Bainer

    2010-01-01

    In "Redesigning Supervision", active professionals in teacher education and professional development share research-based, alternative models for restructuring the way pre-service teachers are supervised. The authors examine the methods currently used and discuss how teacher educators have striven to change or renew these procedures. They then…

  12. Partnering the University Field Experience Research Model with Action Research.

    ERIC Educational Resources Information Center

    Schnorr, Donna; Painter, Diane D.

    This paper presents a collaborative action research partnership model that involved participation by graduate school of education preservice students, school and university teachers, and administrators. An elementary teacher-research group investigated what would happen when fourth graders worked in teams to research and produce a multimedia…

  13. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to the damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 N s m⁻¹ and 0–238 N s m⁻¹ through the viscous and electromagnetic components, respectively.

  14. Tutoring and Multi-Agent Systems: Modeling from Experiences

    ERIC Educational Resources Information Center

    Bennane, Abdellah

    2010-01-01

    Tutoring systems become complex and are offering varieties of pedagogical software as course modules, exercises, simulators, systems online or offline, for single user or multi-user. This complexity motivates new forms and approaches to the design and the modelling. Studies and research in this field introduce emergent concepts that allow the…

  15. The South African Experience: Beyond the CIDA Model

    ERIC Educational Resources Information Center

    Bruton, John M.

    2008-01-01

    The Community and Individual Development Association (CIDA) City Campus is presented by Heaton as an innovative African alternative to traditional business education. However, he considers the model in isolation from the unique educational and economic circumstances of postapartheid South Africa. As a response, this article goes beyond the CIDA…

  16. Integrated modeling applications for tokamak experiments with OMFIT

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Lao, L. L.; Izacard, O.; Ren, Q.; Park, J. M.; Candy, J.; Wang, Z.; Luna, C. J.; Izzo, V. A.; Grierson, B. A.; Snyder, P. B.; Holland, C.; Penna, J.; Lu, G.; Raum, P.; McCubbin, A.; Orlov, D. M.; Belli, E. A.; Ferraro, N. M.; Prater, R.; Osborne, T. H.; Turnbull, A. D.; Staebler, G. M.

    2015-08-01

    One Modeling Framework for Integrated Tasks (OMFIT) is a comprehensive integrated modeling framework which has been developed to enable physics codes to interact in complicated workflows, and support scientists at all stages of the modeling cycle. The OMFIT development follows a unique bottom-up approach, where the framework design and capabilities organically evolve to support progressive integration of the components that are required to accomplish physics goals of increasing complexity. OMFIT provides a workflow for easily generating full kinetic equilibrium reconstructions that are constrained by magnetic and motional Stark effect measurements, and kinetic profile information that includes fast-ion pressure modeled by a transport code. It was found that magnetic measurements can be used to quantify the amount of anomalous fast-ion diffusion that is present in DIII-D discharges, and provide an estimate that is consistent with what would be needed for transport simulations to match the measured neutron rates. OMFIT was used to streamline edge-stability analyses, and evaluate the effect of resonant magnetic perturbation (RMP) on the pedestal stability, which have been found to be consistent with the experimental observations. The development of a five-dimensional numerical fluid model for estimating the effects of the interaction between magnetohydrodynamic (MHD) and microturbulence, and its systematic verification against analytic models was also supported by the framework. OMFIT was used for optimizing an innovative high-harmonic fast wave system proposed for DIII-D. For a parallel refractive index n∥ > 3, the conditions for strong electron-Landau damping were found to be independent of launched n∥ and poloidal angle. OMFIT has been the platform of choice for developing a neural-network based approach to efficiently perform a non-linear multivariate regression of local transport fluxes as a function of local dimensionless parameters.

  17. Little Earth Experiment: An instrument to model planetary cores

    NASA Astrophysics Data System (ADS)

    Aujogue, Kélig; Pothérat, Alban; Bates, Ian; Debray, François; Sreenivasan, Binod

    2016-08-01

    In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10 T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core.

  18. International Outdoor Experiments and Models for Outdoor Radiological Dispersal Devices

    SciTech Connect

    Blumenthal, Daniel J.; Musolino, Stephen V.

    2016-05-01

    With the advent of nuclear reactors and the technology to produce radioactive materials in large quantities, concern arose about the use of radioactivity as a poison in warfare, and hence, consideration was given to defensive measures (Smyth 1945). Approximately forty years later, the interest in the environmental- and health effects caused by a deliberate dispersal was renewed, but this time, from the perspective of a malevolent act of radiological terrorism in an urban area. For many years there has been international collaboration in scientific research to understand the range of effects that might result from a device that could be constructed by a sub-national group. In this paper, scientists from government laboratories in Australia, Canada, the United Kingdom, and the United States collectively have conducted a myriad of experiments to understand and detail the phenomenology of an explosive radiological dispersal device.

  19. International Outdoor Experiments and Models for Outdoor Radiological Dispersal Devices

    DOE PAGES

    Blumenthal, Daniel J.; Musolino, Stephen V.

    2016-05-01

    With the advent of nuclear reactors and the technology to produce radioactive materials in large quantities, concern arose about the use of radioactivity as a poison in warfare, and hence, consideration was given to defensive measures (Smyth 1945). Approximately forty years later, the interest in the environmental- and health effects caused by a deliberate dispersal was renewed, but this time, from the perspective of a malevolent act of radiological terrorism in an urban area. For many years there has been international collaboration in scientific research to understand the range of effects that might result from a device that could be constructed by a sub-national group. In this paper, scientists from government laboratories in Australia, Canada, the United Kingdom, and the United States collectively have conducted a myriad of experiments to understand and detail the phenomenology of an explosive radiological dispersal device.

  20. [Health promotion through physical activity: territorial models and experiences].

    PubMed

    Romano-Spica, V; Parlato, A; Palumbo, D; Lorenzo, E; Frangella, C; Montuori, E; Anastasi, D; Visciano, A; Liguori, G

    2008-01-01

    Scientific evidence supports the preventive role of physical activity against a range of multifactorial pathologies. Promoting health by spreading lifestyles that encourage movement is not merely a countermeasure to the "sedentary life" risk factor, but also a priority for quality of life, with relevant economic and social benefits. The WHO indicates physical activity as one of the priorities for effective prevention, and the EU supports the implementation and diffusion of prevention programs. The main pilot experiences developed in Italy and other countries are summarized. Attention is focused on the role of the competences and structures involved in an integrated approach based on the availability of medical support, social services, and local facilities, considering recent developments in health prevention and promotion. In Italy and Europe, new opportunities to implement health promotion through physical activity are offered by the development of higher education in movement and sport sciences.

  1. Experience with mixed MPI/threaded programming models

    SciTech Connect

    May, J M; Supinski, B R

    1999-04-01

    A shared memory cluster is a parallel computer that consists of multiple nodes connected through an interconnection network. Each node is a symmetric multiprocessor (SMP) unit in which multiple CPUs share uniform access to a pool of main memory. The SGI Origin 2000, Compaq (formerly DEC) AlphaServer Cluster, and recent IBM RS6000/SP systems are all variants of this architecture. The SGI Origin 2000 has hardware that allows tasks running on any processor to access any main memory location in the system, so all the memory in the nodes forms a single shared address space. This is called a nonuniform memory access (NUMA) architecture because it gives programs a single shared address space, but the access time to different memory locations varies. In the IBM and Compaq systems, each node's memory forms a separate address space, and tasks communicate between nodes by passing messages or using other explicit mechanisms. Many large parallel codes use standard MPI calls to exchange data between tasks in a parallel job, and this is a natural programming model for distributed memory architectures. On a shared memory architecture, message passing is unnecessary if the code is written to use multithreading: threads run in parallel on different processors, and they exchange data simply by reading and writing shared memory locations. Shared memory clusters combine architectural elements of both distributed memory and shared memory systems, and they support both message passing and multithreaded programming models. Application developers are now trying to determine which programming model is best for these machines. This paper presents initial results of a study aimed at answering that question. We interviewed developers representing nine scientific code groups at Lawrence Livermore National Laboratory (LLNL). All of these groups are attempting to optimize their codes to run on shared memory clusters, specifically the IBM and DEC platforms at LLNL. This paper will focus on ease

  2. INNOVATIVE FRESH WATER PRODUCTION PROCESS FOR FOSSIL FUEL PLANTS

    SciTech Connect

    James F. Klausner; Renwei Mei; Yi Li; Mohamed Darwish; Diego Acevedo; Jessica Knight

    2003-09-01

    This report describes the annual progress made in the development and analysis of a Diffusion Driven Desalination (DDD) system, which is powered by the waste heat from low pressure condensing steam in power plants. The desalination is driven by water vapor saturating dry air flowing through a diffusion tower. Liquid water is condensed out of the air/vapor mixture in a direct contact condenser. A thermodynamic analysis demonstrates that the DDD process can yield a fresh water production efficiency of 4.5% based on a feed water inlet temperature of only 50 C. An example is discussed in which the DDD process utilizes waste heat from a 100 MW steam power plant to produce 1.51 million gallons of fresh water per day. The main focus of the initial development of the desalination process has been on the diffusion tower. A detailed mathematical model for the diffusion tower has been described, and its numerical implementation has been used to characterize its performance and provide guidance for design. The analysis has been used to design a laboratory scale diffusion tower, which has been thoroughly instrumented to allow detailed measurements of heat and mass transfer coefficient, as well as fresh water production efficiency. The experimental facility has been described in detail.
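
    The quoted figures can be sanity-checked with two elementary balances: the production efficiency (fresh-water yield per unit feed water) fixes the implied feed flow, and the diffusion tower's air-side balance gives the condensate rate from humidity ratios. A sketch using those common definitions (the function names and example numbers are illustrative, not code or data from the report beyond the 4.5% and 1.51 million gal/day quoted above):

```python
def required_feed_rate(fresh_rate, efficiency):
    """Feed-water flow implied by a production efficiency defined as
    fresh-water yield per unit feed water (units follow fresh_rate)."""
    return fresh_rate / efficiency

def condensate_rate(air_mass_flow, w_in, w_out):
    """Air-side balance across the diffusion tower / condenser pair:
    m_fresh = m_air * (w_out - w_in), with w the humidity ratio
    in kg of vapor per kg of dry air."""
    return air_mass_flow * (w_out - w_in)

# With the report's 4.5% efficiency, 1.51 million gal/day of fresh
# water implies roughly 33.6 million gal/day of feed water.
feed_mgd = required_feed_rate(1.51, 0.045)
```

    The low efficiency per pass is the expected trade-off of driving desalination with 50 °C waste heat rather than high-grade energy.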

  3. Three-Dimensional Numerical Modeling of Magnetohydrodynamic Augmented Propulsion Experiment

    NASA Technical Reports Server (NTRS)

    Turner, M. W.; Hawk, C. W.; Litchford, R. J.

    2009-01-01

    Over the past several years, NASA Marshall Space Flight Center has engaged in the design and development of an experimental research facility to investigate the use of diagonalized crossed-field magnetohydrodynamic (MHD) accelerators as a possible thrust augmentation device for thermal propulsion systems. In support of this effort, a three-dimensional numerical MHD model has been developed for the purpose of analyzing and optimizing accelerator performance and to aid in understanding critical underlying physical processes and nonideal effects. This Technical Memorandum fully summarizes model development efforts and presents the results of pretest performance optimization analyses. These results indicate that the MHD accelerator should utilize a 45° diagonalization angle with the applied current evenly distributed over the first five inlet electrode pairs. When powered at 100 A, this configuration is expected to yield a 50% global efficiency with an 80% increase in axial velocity and a 50% increase in centerline total pressure.

  4. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  5. The human operator in manual preview tracking /an experiment and its modeling via optimal control/

    NASA Technical Reports Server (NTRS)

    Tomizuka, M.; Whitney, D. E.

    1976-01-01

    A manual preview tracking experiment and its results are presented. The preview drastically improves the tracking performance compared to zero-preview tracking. Optimal discrete finite preview control is applied to determine the structure of a mathematical model of the manual preview tracking experiment. Variable parameters in the model are adjusted to values consistent with published data on manual control. The model with the adjusted parameters correlates well with the experimental results.

  6. Acoustic Modeling of the Monterey Bay Tomography Experiment

    DTIC Science & Technology

    1990-12-01

    A. Introduction to eigenray search techniques ... B. Searching for eigenrays ... match measured arrival times with the model raypaths ... Large enough temporal separation of eigenray arrivals to resolve individual rays ... meters depth. [Ref. 6] Separating the MSC from the Soquel Submarine Canyon along the line-of-sight is a south-eastwardly sloping fan-like feature which ...

  7. High Speed Trimaran (HST) Seatrain Experiments, Model 5714

    DTIC Science & Technology

    2013-12-01

    maintain wave-resistance similitude. This gave the model-scale speeds as defined below: Fr = V_s / sqrt(g*L_s) = V_m / sqrt(g*L_m) (2) ... version 1.0.854). This program works symbiotically with the engineer to develop fits to data without being restricted by standard mathematical ... non-uniform rational B-spline (NURBS) program developed in LabVIEW by Code 8500 personnel. NURBS allows for more complex curves than common mathematical
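
    Wave-resistance similitude in model testing is normally enforced by matching the Froude number, V / sqrt(g*L), between ship and model, which fixes the model test speed from the scale ratio. A minimal sketch of that standard scaling (illustrative; the function names and example numbers are not from the report):

```python
import math

def froude_number(speed, length, g=9.81):
    """Fr = V / sqrt(g*L)."""
    return speed / math.sqrt(g * length)

def model_speed(ship_speed, scale_ratio):
    """Model test speed enforcing Fr_ship = Fr_model, i.e.
    V_m = V_s / sqrt(lambda), where lambda = L_ship / L_model."""
    return ship_speed / math.sqrt(scale_ratio)

# Example: a 10 m/s full-scale speed at 1:25 scale gives a 2 m/s run.
vm = model_speed(10.0, 25.0)
```

    Matching Froude number preserves the ratio of inertial to gravitational (wave-making) forces, at the cost of mismatched Reynolds number, which is handled separately in resistance extrapolation.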

  8. Flow interaction experiment. Volume 1: Aerothermal modeling, phase 2

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.

    1993-01-01

    An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations consisting of primary jets only, annular jets only, and a combination of annular and primary jets are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closures, namely k-epsilon model, algebraic second moment (ASM), and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires far fewer grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, pressure-strain correlation model, and the inlet epsilon profile and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to

  9. Flow interaction experiment. Volume 2: Aerothermal modeling, phase 2

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.

    1993-01-01

    An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations consisting of primary jets only, annular jets only, and a combination of annular and primary jets are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closures, namely k-epsilon model, algebraic second moment (ASM), and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires far fewer grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, pressure-strain correlation model, and the inlet epsilon profile and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to

  10. Flooding Experiments and Modeling for Improved Reactor Safety

    SciTech Connect

    Solmos, M.; Hogan, K. J.; Vierow, K.

    2008-09-14

    Countercurrent two-phase flow and “flooding” phenomena in light water reactor systems are being investigated experimentally and analytically to improve reactor safety of current and future reactors. The aspects that will be better clarified are the effects of condensation and tube inclination on flooding in large diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.
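    Semi-empirical onset-of-flooding correlations of the kind mentioned above are typically written in Wallis form, using dimensionless superficial velocities. A hedged sketch follows; the constants m and C are typical air-water values for vertical tubes, not the correlation developed in this work:

```python
import math

def wallis_jstar(j, rho_phase, rho_f, rho_g, g, D):
    # Dimensionless superficial velocity: j* = j * sqrt(rho_phase / (g D (rho_f - rho_g)))
    return j * math.sqrt(rho_phase / (g * D * (rho_f - rho_g)))

def flooding_onset(jg_star, m=1.0, C=0.725):
    # Wallis-type criterion sqrt(jg*) + m*sqrt(jf*) = C at onset;
    # returns the limiting (countercurrent) liquid jf* for a given gas jg*
    residual = C - math.sqrt(jg_star)
    return (residual / m) ** 2 if residual > 0 else 0.0
```

The monotone trade-off (higher gas flow permits less downward liquid flow) is the essential physics such correlations capture; tube diameter and inclination enter through m and C.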

  11. Integrated modeling of cryogenic layered highfoot experiments at the NIF

    NASA Astrophysics Data System (ADS)

    Kritcher, A. L.; Hinkel, D. E.; Callahan, D. A.; Hurricane, O. A.; Clark, D.; Casey, D. T.; Dewald, E. L.; Dittrich, T. R.; Döppner, T.; Barrios Garcia, M. A.; Haan, S.; Berzak Hopkins, L. F.; Jones, O.; Landen, O.; Ma, T.; Meezan, N.; Milovich, J. L.; Pak, A. E.; Park, H.-S.; Patel, P. K.; Ralph, J.; Robey, H. F.; Salmonson, J. D.; Sepke, S.; Spears, B.; Springer, P. T.; Thomas, C. A.; Town, R.; Celliers, P. M.; Edwards, M. J.

    2016-05-01

    Integrated radiation hydrodynamic modeling in two dimensions, including the hohlraum and capsule, of layered cryogenic HighFoot Deuterium-Tritium (DT) implosions on the NIF successfully predicts important data trends. The model consists of a semi-empirical fit to low-mode asymmetries and radiation drive multipliers to match shock trajectories, one-dimensional in-flight radiography, and the time of peak neutron production. Application of the model across the HighFoot shot series, over a range of powers, laser energies, laser wavelengths, and target thicknesses, predicts the neutron yield to within a factor of two for most shots. The Deuterium-Deuterium ion temperatures and the DT down-scattered ratios, the ratio of (10-12)/(13-15) MeV neutrons, roughly agree with data at peak fuel velocities <340 km/s and deviate at higher peak velocities, potentially due to flows and neutron scattering differences stemming from 3D or capsule support tent effects. These calculations show a significant amount of alpha heating, 1-2.5× for shots where the experimental yield is within a factor of two, which has been achieved by increasing the fuel kinetic energy. This level of alpha heating is consistent with a dynamic hot spot model that is matched to experimental data and as determined from scaling of the yield with peak fuel velocity. These calculations also show that low-mode asymmetries become more important as the fuel velocity is increased, and that improving these low-mode asymmetries can result in an increase in the yield by a factor of several.

  12. Absorption of Deuterium in Palladium Rods: Model vs. Experiment

    DTIC Science & Technology

    1994-01-01

    [The OCR text of this record is unrecoverable; legible fragments include references to R.V. Bucur and F. Bota, Electrochim. Acta, 29 (1984) 103, and H.-G. Fritsche, Z. Naturf. A, 38 (1983) 1118.]

  13. Model Deformation Measurement Technique NASA Langley HSR Experiences

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Wahls, R. A.; Owens, L. R.; Goad, W. K.

    1999-01-01

    Model deformation measurement techniques have been investigated and developed at NASA's Langley Research Center. The current technique is based upon single-video-camera photogrammetric determination of the two-dimensional coordinates of wing targets with a fixed (and known) third coordinate, namely the spanwise location. Variations of this technique have been used to measure wing twist and bending at a few selected spanwise locations near the wing tip on HSR models at the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel. Automated measurements have been made at both the Transonic Dynamics Tunnel and the Unitary Plan Wind Tunnel during the past year. Automated measurements were made for the first time at the NTF during the recently completed HSR Reference H Test 78 in early 1996. A major problem in automation for the NTF has been the need for high-contrast targets that do not violate the stringent surface finish requirements. The advantages and limitations (including targeting) of the technique, as well as the rationale for selection of this particular technique, are discussed. Wing twist examples from the HSR Reference H model are presented to illustrate the run-to-run and test-to-test repeatability of the technique in air mode at the NTF. Examples of wing twist in cryogenic nitrogen mode at the NTF are also presented.
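    With two targets at one spanwise station, wing twist from such photogrammetric data reduces to simple trigonometry on the measured target heights. A minimal illustrative sketch; the function name and sign convention are assumptions, not NASA's implementation:

```python
import math

def wing_twist_deg(z_le, z_te, x_le, x_te):
    # Local twist angle from two targets at one spanwise station:
    # z = measured height of leading-/trailing-edge targets,
    # x = known chordwise locations; positive = leading edge up
    return math.degrees(math.atan2(z_le - z_te, x_te - x_le))
```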

  14. Numerical experiments with model monophyletic and paraphyletic taxa

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J. Jr; Kendrick, D. C.; Sepkoski JJ, J. r. (Principal Investigator)

    1993-01-01

    The problem of how accurately paraphyletic taxa versus monophyletic (i.e., holophyletic) groups (clades) capture underlying species patterns of diversity and extinction is explored with Monte Carlo simulations. Phylogenies are modeled as stochastic trees. Paraphyletic taxa are defined in an arbitrary manner by randomly choosing progenitors and clustering all descendants not belonging to other taxa. These taxa are then examined to determine which are clades, and the remaining paraphyletic groups are dissected to discover monophyletic subgroups. Comparisons of diversity patterns and extinction rates between modeled taxa and lineages indicate that paraphyletic groups can adequately capture lineage information under a variety of conditions of diversification and mass extinction. This suggests that these groups constitute more than mere "taxonomic noise" in this context. But, strictly monophyletic groups perform somewhat better, especially with regard to mass extinctions. However, when low levels of paleontologic sampling are simulated, the veracity of clades deteriorates, especially with respect to diversity, and modeled paraphyletic taxa often capture more information about underlying lineages. Thus, for studies of diversity and taxic evolution in the fossil record, traditional paleontologic genera and families need not be rejected in favor of cladistically-defined taxa.
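    The stochastic-tree simulations described can be illustrated with a minimal discrete-time birth-death model of lineage diversity; the parameter values below are arbitrary, not those used in the study:

```python
import random

def simulate_diversity(p_branch=0.1, p_extinct=0.08, steps=100, seed=1):
    # Discrete-time birth-death process: each extant lineage branches with
    # probability p_branch and goes extinct with probability p_extinct per step;
    # returns the diversity (lineage count) trajectory
    rnd = random.Random(seed)
    n = 1
    history = []
    for _ in range(steps):
        births = sum(1 for _ in range(n) if rnd.random() < p_branch)
        deaths = sum(1 for _ in range(n) if rnd.random() < p_extinct)
        n = max(0, n + births - deaths)
        history.append(n)
        if n == 0:
            break
    return history
```

Taxa (monophyletic or paraphyletic) would then be overlaid on such trees by clustering descendants of randomly chosen progenitors, as the abstract describes.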

  15. Laser-induced periodic surface structures, modeling, experiments, and applications

    NASA Astrophysics Data System (ADS)

    Römer, G. R. B. E.; Skolski, J. Z. P.; Oboňa, J. Vincenc; Ocelík, V.; de Hosson, J. T. M.; Huis in't Veld, A. J.

    2014-03-01

    Laser-induced periodic surface structures (LIPSSs) consist of regular wavy surface structures, or ripples, with amplitudes and periodicity in the sub-micrometer range. A summary of experimentally observed LIPSSs is presented, as well as our model explaining their possible origin. Linearly polarized continuous wave (cw) or pulsed laser light, at normal incidence, can produce LIPSSs with a periodicity close to the laser wavelength, and direction orthogonal to the polarization on the surface of the material. Ripples with a periodicity (much) smaller than the laser wavelength develop when applying laser pulses with ultra-short durations in the femtosecond and picosecond regime. The direction of these ripples is either parallel or orthogonal to the polarization direction. Finally, when applying numerous pulses, structures with periodicity larger than the laser wavelength can form, which are referred to as "grooves". The physical origin of LIPSSs is still under debate. The strong correlation of the ripple periodicity to the laser wavelength, suggests that their formation can be explained by an electromagnetic approach. Recent results from a numerical electromagnetic model, predicting the spatially modulated absorbed laser energy, are discussed. This model can explain the origin of several characteristics of LIPSSs. Finally, applications of LIPSSs will be discussed.

  16. Pesticide uptake in potatoes: model and field experiments.

    PubMed

    Juraske, Ronnie; Vivas, Carmen S Mosquera; Velásquez, Alexander Erazo; Santos, Glenda García; Moreno, Mónica B Berdugo; Gomez, Jaime Diaz; Binder, Claudia R; Hellweg, Stefanie; Dallos, Jairo A Guerrero

    2011-01-15

    A dynamic model for uptake of pesticides in potatoes is presented and evaluated with measurements performed within a field trial in the region of Boyacá, Colombia. The model takes into account the time between pesticide applications and harvest, the time between harvest and consumption, the amount of spray deposition on soil surface, mobility and degradation of pesticide in soil, diffusive uptake and persistence due to crop growth and metabolism in plant material, and loss due to food processing. Food processing steps included were cleaning, washing, storing, and cooking. Pesticide concentrations were measured periodically in soil and potato samples from the beginning of tuber formation until harvest. The model was able to predict the magnitude and temporal profile of the experimentally derived pesticide concentrations well, with all measurements falling within the 90% confidence interval. The fraction of chlorpyrifos applied on the field during plant cultivation that eventually is ingested by the consumer is on average 10⁻⁴-10⁻⁷, depending on the time between pesticide application and ingestion and the processing step considered.
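    The post-harvest part of such a model multiplies out as first-order decline during storage followed by per-step processing loss factors. A minimal sketch with hypothetical parameters (the function name, rate constant, and factors are illustrative, not the paper's values):

```python
import math

def residue_at_consumption(c_harvest, k_storage, t_storage, processing_factors):
    # First-order decline during storage: C = C0 * exp(-k t),
    # then multiplicative retention factors for washing, cooking, ...
    # (each factor in (0, 1]; 1.0 means no loss in that step)
    c = c_harvest * math.exp(-k_storage * t_storage)
    for f in processing_factors:
        c *= f
    return c
```

Chaining these steps after the field-side uptake model is what drives the ingested fraction down to the 10⁻⁴-10⁻⁷ range reported.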

  17. Mechanics of neutrophil phagocytosis: experiments and quantitative models.

    PubMed

    Herant, Marc; Heinrich, Volkmar; Dembo, Micah

    2006-05-01

    To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time courses of cell morphology, of bead motion, and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.

  18. Mathematical Modeling of Eukaryotic Cell Migration: Insights Beyond Experiments

    PubMed Central

    Danuser, Gaudenz; Allard, Jun; Mogilner, Alex

    2014-01-01

    A migrating cell is a molecular machine made of tens of thousands of short-lived and interacting parts. Understanding migration means understanding the self-organization of these parts into a system of functional units. This task is one of tackling complexity: First, the system integrates numerous chemical and mechanical component processes. Second, these processes are connected in feedback interactions and over a large range of spatial and temporal scales. Third, many processes are stochastic, which leads to heterogeneous migration behaviors. Early on in the research of cell migration it became evident that this complexity exceeds human intuition. Thus, the cell migration community has led the charge to build mathematical models that could integrate the diverse experimental observations and measurements in consistent frameworks, first in conceptual and more recently in molecularly explicit models. The main goal of this review is to sift through a series of important conceptual and explicit mathematical models of cell migration and to evaluate their contribution to the field in their ability to integrate critical experimental data. PMID:23909278

  19. Comparison Between Keyhole Weld Model and Laser Welding Experiments

    SciTech Connect

    Wood, B C; Palmer, T A; Elmer, J W

    2002-09-23

    A series of laser welds were performed using a high-power diode-pumped continuous-wave Nd:YAG laser welder. In a previous study, the experimental results of those welds were examined, and the effects that changes in incident power and various welding parameters had on weld geometry were investigated. In this report, the fusion zones of the laser welds are compared with those predicted from a laser keyhole weld simulation model for stainless steels (304L and 21-6-9), vanadium, and tantalum. The calculated keyhole depths for the vanadium and 304L stainless steel samples fit the experimental data to within acceptable error, demonstrating the predictive power of numerical simulation for welds in these two materials. Calculations for the tantalum and 21-6-9 stainless steel were a poorer match to the experimental values. Accuracy in materials properties proved extremely important in predicting weld behavior, as minor changes in certain properties had a significant effect on calculated keyhole depth. For each of the materials tested, the correlation between simulated and experimental keyhole depths deviated as the laser power was increased. Using the model as a simulation tool, we conclude that the optical absorptivity of the material is the most influential factor in determining the keyhole depth. Future work will be performed to further investigate these effects and to develop a better match between the model and the experimental results for 21-6-9 stainless steel and tantalum.

  20. A model for successful research partnerships: a New Brunswick experience.

    PubMed

    Tamlyn, Karen; Creelman, Helen; Fisher, Garfield

    2002-01-01

    The purpose of this paper is to present an overview of a partnership model used to conduct a research study entitled "Needs of patients with cancer and their family members in New Brunswick Health Region 3 (NBHR3)" (Tamlyn-Leaman, Creelman, & Fisher, 1997). This partial replication study carried out by the three authors between 1995 and 1997 was a needs assessment, adapted with permission from previous work by Fitch, Vachon, Greenberg, Saltmarche, and Franssen (1993). In order to conduct a comprehensive needs assessment with limited resources, a partnership between academic, public, and private sectors was established. An illustration of this partnership is presented in the model entitled "A Client-Centred Partnership Model." The operations of this partnership, including the strengths, the perceived benefits, lessons learned by each partner, the barriers, and the process for conflict resolution, are described. A summary of the cancer care initiatives undertaken by NBHR3, which were influenced directly or indirectly by the recommendations from this study, is included.

  1. Using the Bifocal Modeling Framework to Resolve "Discrepant Events" between Physical Experiments and Virtual Models in Biology

    ERIC Educational Resources Information Center

    Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima

    2016-01-01

    In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this…

  2. Modeling and Depletion Simulations for a High Flux Isotope Reactor Cycle with a Representative Experiment Loading

    SciTech Connect

    Chandler, David; Betzler, Ben; Hirtz, Gregory John; Ilas, Germina; Sunny, Eva

    2016-09-01

    The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full-length ²⁵²Cf production targets, ⁷⁵Se production capsules, ⁶³Ni production capsules, a ¹⁸⁸W production capsule, and various materials irradiation targets. Fully loaded ²³⁸Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from depletion simulations with the new model are presented, with a focus on the time-dependent isotopic composition of irradiated fuel and single-cycle isotope production metrics.

  3. Modeling multiple experiments using regularized optimization: A case study on bacterial glucose utilization dynamics.

    PubMed

    Hartmann, András; Lemos, João M; Vinga, Susana

    2015-08-01

    The aim of inverse modeling is to capture the system's dynamics through a set of parameterized Ordinary Differential Equations (ODEs). Parameters are often required to fit multiple repeated measurements or different experimental conditions. This typically leads to a multi-objective optimization problem that can be formulated as a non-convex optimization problem. Modeling of glucose utilization of Lactococcus lactis bacteria is considered using in vivo Nuclear Magnetic Resonance (NMR) measurements in perturbation experiments. We propose an ODE model based on a modified time-varying exponential decay that is flexible enough to model several different experimental conditions. The starting point is an over-parameterized non-linear model that is further simplified through an optimization procedure with regularization penalties. For the parameter estimation, a stochastic global optimization method, particle swarm optimization (PSO), is used. A regularization term is introduced into the identification, imposing that parameters be the same across several experiments in order to identify a general model. A function is then fit to the remaining parameter that varies across experiments, so that new experiments can be predicted for any initial condition. The method is cross-validated by fitting the model to two experiments and validating on the third. Finally, the proposed model is integrated with existing models of glycolysis in order to reconstruct the remaining metabolites. The method was found useful as a general procedure to reduce the number of parameters of unidentifiable and over-parameterized models, thus supporting feature selection methods for parametric models.
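    The regularized-PSO idea can be sketched in miniature: a bare-bones particle swarm minimizing a sum-of-squares objective with an L2 penalty that pulls a per-experiment parameter toward a common value. All constants and the synthetic data below are illustrative, not the paper's:

```python
import math
import random

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    # Minimal global-best particle swarm optimizer
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

def objective(theta, lam=1.0):
    # Two synthetic "experiments" with exponential decays; the L2 penalty
    # lam*(k1 - k2)^2 regularizes the per-experiment rates toward each other
    k1, k2 = theta
    sse = sum((math.exp(-k1 * t) - math.exp(-0.50 * t)) ** 2 for t in range(6))
    sse += sum((math.exp(-k2 * t) - math.exp(-0.55 * t)) ** 2 for t in range(6))
    return sse + lam * (k1 - k2) ** 2
```

Raising lam drives the two rates to a single shared parameter, which is the mechanism the paper uses to simplify its over-parameterized model.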

  4. Effect of 2004 tsunami on groundwater in a coastal aquifer of Sri Lanka: Tank experiments, field observations and numerical modelling (Invited)

    NASA Astrophysics Data System (ADS)

    Vithanage, M. S.; Engesgaard, P. K.; Jensen, K. H.; Obeysekera, J.; Villholth, K. G.; Illangasekare, T. H.

    2009-12-01

    The December 2004 tsunami provided the motivation to study the impact of the tsunami on shallow groundwater systems on the east coast of Sri Lanka. This natural hazard devastated the coastal aquifer systems of many countries in the region. Field investigations were carried out along a transect on the eastern coast of Sri Lanka, perpendicular to the coastline, on a 2.4 km wide sand stretch bounded by the sea and a lagoon that was partly destroyed by the wave. Measurements of the groundwater table and electrical conductivity of the groundwater were carried out from October 2005 to September 2006. In addition, a few physical experiments were undertaken in an intermediate-scale tank (5 m long, 1.2 m tall, and 0.05 m wide) with three different subsurface configurations. The physical experimental setup, the field aquifer system, and the saltwater contamination were modeled using HST3D, a variable-density flow and solute transport code, based on observations made in the field, with the aim of understanding the tsunami plume behavior and estimating the aquifer cleansing time. The physical experiments demonstrated that the tsunami saltwater plume that entered the aquifer is highly unstable and that flush-out times depend on the hydrostratigraphy of the media. Fresh water recharge also pushes the saltwater deeper into the aquifer, slowing the total aquifer flush-out. Electrical conductivity values in the field showed a reduction with the monsoonal rainfall following the tsunami, while the rate of reduction was low during the dry season. With freshwater recharge by the monsoon rainfall, the upper part of the aquifer (top 4.5 m) had returned to fresh groundwater conditions (EC<1000 μS/cm) around mid-2007. Although the top 6 m of the aquifer becomes fresh (<1000 μS/cm) in 5 years, it may take more than 15 years for the whole aquifer to fully clean. The EC and some chemical parameters in the field also showed that the post-tsunami well cleaning and pumping has likely led to retention of

  5. Methods for Chemical Analysis of Fresh Waters.

    ERIC Educational Resources Information Center

    Golterman, H. L.

    This manual, one of a series prepared for the guidance of research workers conducting studies as part of the International Biological Programme, contains recommended methods for the analysis of fresh water. The techniques are grouped in the following major sections: Sample Taking and Storage; Conductivity, pH, Oxidation-Reduction Potential,…

  6. Dielectric Spectroscopy of Fresh Chicken Breast Meat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical abstract: The dielectric properties of fresh chicken breast meat were measured at temperatures from 5 to 85 °C over the frequency range from 10 MHz to 1.8 GHz by dielectric spectroscopy techniques with an open-ended coaxial-line probe and impedance analyzer. Samples were cut from ...

  7. Dielectric Spectroscopy of Fresh Chicken Breast Meat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The dielectric properties of fresh chicken breast meat were measured at temperatures from 5 to 85 °C over the frequency range from 10 MHz to 1.8 GHz by dielectric spectroscopy techniques with an open-ended coaxial-line probe and impedance analyzer. Samples were cut from both the Pectoralis major an...

  8. Cultivable microbiome of fresh white button mushrooms.

    PubMed

    Rossouw, W; Korsten, L

    2017-02-01

    Microbial dynamics on commercially grown white button mushrooms are of importance in terms of food safety assurance and quality control. The purpose of this study was to establish the microbial profile of fresh white button mushrooms. The total microbial load was determined through standard viable counts. Presence and isolation of Gram-negative bacteria as well as coagulase-positive Staphylococci were determined using a selective enrichment approach. Dominant and presumptive organisms were confirmed using molecular methods. Total mushroom microbial counts ranged from 5.2 to 12.4 log CFU per g, with the genus Pseudomonas being most frequently isolated (45.37% of all isolations). In total, 91 different microbial species were isolated and identified using matrix-assisted laser desorption ionization-time of flight mass spectrometry, PCR, and sequencing. Considering current food safety guidelines in South Africa for ready-to-eat fresh produce, coliform counts exceeded the guidance specifications for fresh fruit and vegetables. Based on our research and similar studies, it is proposed that specifications for microbial loads on fresh, healthy mushrooms reflect a more natural microbiome at the point-of-harvest and point-of-sale.

  9. Breeding lettuce for fresh-cut processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Lettuce is increasingly consumed in fresh-cut packaged salads. New cultivars specifically bred for this use can enhance production and processing efficiency and extend shelf life. Cultivars with novel head architectures and leaf traits are being released by private and public breeding programs with ...

  10. Storytime with Fresh Professor, Part Two

    ERIC Educational Resources Information Center

    Miles, James

    2016-01-01

    I wasn't always the Fresh Professor. At one point I was just another starving actor trying to make a living. But stories change over time, as do professional desires. This is "Part Two" of my story. Enjoy the ride. [For "Part One," see EJ1114154.]

  11. Storytime with Fresh Professor, Part One

    ERIC Educational Resources Information Center

    Miles, James

    2016-01-01

    James Miles writes that he wasn't always the Fresh Professor. At one point, he was just another starving actor, trying to make a living. But stories change over time, as do professional desires. This article presents Part One of his story.

  12. Availability modeling approach for future circular colliders based on the LHC operation experience

    NASA Astrophysics Data System (ADS)

    Niemi, Arto; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo

    2016-12-01

    Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today's most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a collider ring four times larger, aims at delivering 10-20 ab⁻¹ of integrated luminosity at seven times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling the reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule, and availability for physics. The approach is based on best-practice, industrially applied reliability analysis methods. It relies on failure rate and repair time distributions to calculate impacts on availability. The main source of information for the study is CERN accelerator operation and maintenance data. Recent improvements in LHC failure tracking help improve the accuracy of modeling of LHC performance. The model accuracy and prediction capabilities are discussed by comparing obtained results with past LHC operational data.
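    The Monte Carlo machine-cycle simulation can be illustrated as an alternating renewal process. A hedged sketch with exponential failure and repair time distributions; real availability studies draw these distributions from CERN operation and maintenance logs rather than assuming them:

```python
import random

def simulate_availability(mtbf_h, mean_repair_h, horizon_h, seed=42):
    # Alternating up/down renewal process: exponential time-to-failure
    # and exponential repair time; returns the fraction of the horizon
    # during which the machine is available
    rnd = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_h:
        ttf = rnd.expovariate(1.0 / mtbf_h)
        up_time += min(ttf, horizon_h - t)
        t += ttf
        if t >= horizon_h:
            break
        t += rnd.expovariate(1.0 / mean_repair_h)
    return up_time / horizon_h
```

For long horizons the estimate approaches the steady-state value MTBF / (MTBF + MTTR), e.g. about 0.91 for a 100 h MTBF and 10 h mean repair.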

  13. FRETsg: Biomolecular structure model building from multiple FRET experiments

    NASA Astrophysics Data System (ADS)

    Schröder, G. F.; Grubmüller, H.

    2004-04-01

    Fluorescence energy transfer (FRET) experiments on site-specifically labelled proteins allow one to determine distances between residues at the single-molecule level, which provide information on the three-dimensional structural dynamics of the biomolecule. To systematically extract this information from the experimental data, we describe a program that generates an ensemble of configurations of residues in space that agree with the experimental distances between these positions. Furthermore, a fluctuation analysis allows one to determine the structural accuracy from the experimental error.

    Program summary
    Title of program: FRETsg
    Catalogue identifier: ADTU
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTU
    Computer: SGI Octane, Pentium II/III, Athlon MP, DEC Alpha
    Operating system: Unix, Linux, Windows98/NT/XP
    Programming language used: ANSI C
    No. of bits in a word: 32 or 64
    No. of processors used: 1
    No. of bytes in distributed program, including test data, etc.: 11407
    No. of lines in distributed program, including test data, etc.: 1647
    Distribution format: gzipped tar file
    Nature of the physical problem: Given an arbitrary number of distance distributions between an arbitrary number of points in three-dimensional space, find all configurations (sets of coordinates) that obey the given distances.
    Method of solution: Each distance is described by a harmonic potential. Starting from random initial configurations, their total energy is minimized by steepest descent. Fluctuations of positions are chosen to generate distance distribution widths that best fit the given values.
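    The method of solution described above (harmonic distance potentials minimized by steepest descent) can be sketched in a few lines. This is an independent illustration, not FRETsg's ANSI C source; function names and step sizes are assumptions:

```python
import math
import random

def fit_positions(dists, n_points, iters=2000, step=0.1, seed=0):
    # Steepest descent on E = sum_(i,j) (|r_i - r_j| - d_ij)^2,
    # where dists maps pairs (i, j) to target distances d_ij;
    # returns one configuration consistent with the distances
    rnd = random.Random(seed)
    pts = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(n_points)]
    for _ in range(iters):
        grad = [[0.0] * 3 for _ in range(n_points)]
        for (i, j), d0 in dists.items():
            diff = [pts[i][a] - pts[j][a] for a in range(3)]
            d = math.sqrt(sum(x * x for x in diff)) or 1e-12
            coef = 2.0 * (d - d0) / d
            for a in range(3):
                grad[i][a] += coef * diff[a]
                grad[j][a] -= coef * diff[a]
        for i in range(n_points):
            for a in range(3):
                pts[i][a] -= step * grad[i][a]
    return pts
```

Repeating this from many random starts yields the ensemble of configurations from which the fluctuation analysis estimates structural accuracy.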

  14. Contact fatigue of human enamel: Experiments, mechanisms and modeling.

    PubMed

    Gao, S S; An, B B; Yahyazadehfar, M; Zhang, D; Arola, D D

    2016-07-01

    Cyclic contact between natural tooth structure and engineered ceramics is increasingly common, yet fatigue of the enamel due to cyclic contact is rarely considered. The objectives of this investigation were to evaluate the fatigue behavior of human enamel under cyclic contact, and to assess the extent of damage over clinically relevant conditions. Cyclic contact experiments were conducted using the crowns of caries-free molars obtained from young donors. The cuspal locations were polished flat and subjected to cyclic contact with a spherical alumina indenter at 2 Hz. The progression of damage was monitored through the evolution in contact displacement, changes in the contact hysteresis, and characteristics of the fracture pattern. The contact fatigue life diagram exhibited a decrease in cycles to failure with increasing cyclic load magnitude. Two distinct trends were identified, which corresponded to the development and propagation of a combination of cylindrical and radial cracks. Under contact loads of less than 400 N, enamel rod decussation resisted the growth of subsurface cracks. However, at greater loads the damage progressed rapidly and accelerated fatigue failure. Overall, cyclic contact between ceramic appliances and natural tooth structure causes fatigue of the enamel. The extent of damage is dependent on the magnitude of cyclic stress and the ability of the decussation to arrest the fatigue damage.
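    Load-life trends of the kind reported (cycles to failure decreasing with cyclic load magnitude) are commonly summarized by a Basquin-type power law. A minimal least-squares sketch on synthetic data, not the study's measurements:

```python
import math

def fit_basquin(loads, cycles):
    # Least-squares fit of log N = a + b log S (Basquin-type power law);
    # b is expected to be negative for fatigue data
    xs = [math.log(s) for s in loads]
    ys = [math.log(n) for n in cycles]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predicted_cycles(a, b, load):
    # Life prediction from the fitted power law
    return math.exp(a + b * math.log(load))
```

Two distinct slopes fitted to the low-load and high-load regimes would reproduce the two trends the contact fatigue life diagram exhibits.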

  15. Numerical modeling of oxygen exclusion experiments of anaerobic bioventing

    NASA Astrophysics Data System (ADS)

    Mihopoulos, Philip G.; Suidan, Makram T.; Sayles, Gregory D.; Kaskassian, Sebastien

    2002-10-01

    A numerical and experimental study of transport phenomena underlying anaerobic bioventing (ABV) is presented. Understanding oxygen exclusion patterns in vadose zone environments is important in designing an ABV process for bioremediation of soil contaminated with chlorinated solvents. In particular, the establishment of an anaerobic zone of influence by nitrogen injection in the vadose zone is investigated. Oxygen exclusion experiments are performed in a pilot scale flow cell (2×1.1×0.1 m) using different venting flows and two different outflow boundary conditions (open and partially covered). Injection gas velocities are varied from 0.25×10⁻³ to 1.0×10⁻³ cm/s and are correlated with the ABV radius of influence. Numerical simulations are used to predict the collected experimental data. In general, reasonable agreement is found between observed and predicted oxygen concentrations. Use of impervious covers can significantly reduce the volume of forcing gas used, where an increase in oxygen exclusion efficiency is consistent with a decrease in the outflow area above the injection well.
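
The exclusion pattern can be rationalized with a one-dimensional steady advection-diffusion balance: injected nitrogen advects outward while oxygen diffuses inward, giving an exponential oxygen profile C(x) = C_atm·exp(-v·x/D). The diffusion coefficient and the 1% threshold below are assumptions, not the study's calibrated values.

```python
import numpy as np

# Steady advection-diffusion sketch: O2 profile decays exponentially inward
# from the open boundary with length scale D/v.
D = 0.087      # effective O2 diffusion coefficient in soil gas, cm^2/s (assumed)
C_atm = 20.9   # atmospheric O2 concentration, vol %

x = np.linspace(0.0, 2500.0, 25001)      # distance inward from boundary, cm
extent = {}
for v in (0.25e-3, 1.0e-3):              # injection gas velocities, cm/s
    C = C_atm * np.exp(-v * x / D)
    # depth at which O2 drops below 1% of atmospheric (anaerobic threshold)
    extent[v] = x[np.argmax(C < 0.01 * C_atm)]

print(extent)
```

The higher injection velocity pushes the 1% oxygen contour much closer to the boundary, consistent with the correlation between venting flow and radius of influence described above.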

  17. Development of Artificial Lenses of Fresh Groundwater in Desert Conditions

    NASA Astrophysics Data System (ADS)

    Yakirevich, A.; Kuznetsov, M.; Sorek, S.; Mamiyeva, I.

    2004-12-01

    A significant proportion of the world's deserts is covered by soils characterized by low hydraulic conductivity, high runoff coefficient and high levels of salinity. The groundwater is usually also saline in such regions. It has been proposed to use clayey watersheds for collecting runoff water during seasonal precipitation and infiltrating it into the saline water table, thus creating an artificial lens of fresh groundwater (ALFGW). The National Institute of Deserts, Flora and Fauna of Turkmenistan constructed a pilot system for ALFGW formation (an infiltration basin with recharging wells) in the Kara-Kum Desert. It was found that over 3-4 years (with a mean annual runoff of 10,000-15,000 m3/km), about 20,000-25,000 m3 of surface water could be infiltrated. This created a lens with a maximum thickness of 7 m that could be used as a fresh water reservoir. To understand the processes associated with ALFGW formation and pumping, we applied a mathematical model of density-driven flow and transport in the unsaturated-saturated zones. The FEFLOW code was used for simulations and the model was calibrated with available field data. It was found that there is a relatively sharp interface between the ALFGW bottom and the saline groundwater, while changes in water salinity are minor within the ALFGW. Simulations of the ALFGW indicated that, with time, a vortex flow develops under the lens edges. This leads to an increase in mixing between the fresh and saline water zones, thus increasing the lens areal extent while decreasing its thickness. The process mainly depends on the hydraulic parameters and the water infiltration regime. Lowering the hydraulic conductivity and the infiltration rate leads to an increase in the ALFGW; however, increasing the infiltration time raises water losses by evaporation. During pumping of the ALFGW, up-coning of saline water occurs, which depends on the pumping rate, the well-screen parameters, the well location, and the ALFGW characteristics. Using two
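
For orientation, a sharp-interface, Ghyben-Herzberg-type hydrostatic balance gives the order of magnitude of such a lens thickness; the density contrast assumed below is illustrative, not a measured value for the Kara-Kum site.

```python
# Sharp-interface hydrostatic balance between a fresh lens and the
# surrounding saline groundwater; densities are illustrative assumptions.
rho_fresh = 1000.0   # kg/m^3
rho_saline = 1025.0  # kg/m^3 (assumed salinity of the ambient groundwater)

def lens_depth_below_table(h_f):
    """Depth of the fresh/saline interface below the water table for a
    freshwater mound standing h_f metres above the ambient saline head."""
    return rho_fresh / (rho_saline - rho_fresh) * h_f

# With this contrast, a mound of only ~0.175 m supports a lens ~7 m thick,
# the order of thickness reported above.
print(round(lens_depth_below_table(0.175), 1))
```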

  18. Laser-Induced Ignition Modeling and Comparison with Experiments

    NASA Astrophysics Data System (ADS)

    Dors, Ivan; Qin, W.; Chen, Y.-L.; Parigger, C.; Lewis, J. W. L.

    2000-11-01

    We have studied experimentally the ignition resulting from optical breakdowns in mixtures of oxygen and the fuel ammonia induced by a 10-nanosecond pulse-width laser, for a time of hundreds of milliseconds, using laser spectroscopy. In these studies, we have for the first time characterized the laser-induced plasma, the formation of the combustion radicals, the detonation wave, the flame front and the combustion process itself. The objective of the modeling is to understand the fluid dynamic and chemical kinetic effects following the nominal 10 ns laser pulse until 1 millisecond after laser breakdown. The calculated images match the experimentally recorded data sets and show spatial details covering volumes of 1/10000 cc to 1000 cc. The code was provided by CFD Research Corporation of Huntsville, Alabama, and was appropriately augmented to compute the observed phenomena. The fully developed computational model now includes a kinetic mechanism that implements plasma equilibrium kinetics in ionized regions, and non-equilibrium, multistep, finite-rate reactions in non-ionized regions. The predicted fluid phenomena agree with various flow patterns characteristic of laser spark ignition as measured in the CLA laboratories. Comparison of calculated and measured OH and NH concentrations will be presented.

  19. Experiments and Modeling of High Altitude Chemical Agent Release

    SciTech Connect

    Nakafuji, G.; Greenman, R.; Theofanous, T.

    2002-07-08

    Using ASCA data, we find, contrary to other researchers using ROSAT data, that the X-ray spectra of the VY Scl stars TT Ari and KR Aur are poorly fit by an absorbed blackbody model but are well fit by an absorbed thermal plasma model. The different conclusions about the nature of the X-ray spectrum of KR Aur may be due to differences in the accretion rate, since this star was in a high optical state during the ROSAT observation, but in an intermediate optical state during the ASCA observation. TT Ari, on the other hand, was in a high optical state during both observations, and so directly contradicts the hypothesis that the X-ray spectra of VY Scl stars in their high optical states are blackbodies. Instead, based on theoretical expectations and the ASCA, Chandra, and XMM spectra of other nonmagnetic cataclysmic variables, we believe that the X-ray spectra of VY Scl stars in their low and high optical states are due to hot thermal plasma in the boundary layer between the accretion disk and the surface of the white dwarf, and appeal to the acquisition of Chandra and XMM grating spectra to test this prediction.

  20. Non linear dynamics of flame cusps: from experiments to modeling

    NASA Astrophysics Data System (ADS)

    Almarcha, Christophe; Radisson, Basile; Al-Sarraf, Elias; Quinard, Joel; Villermaux, Emmanuel; Denet, Bruno; Joulin, Guy

    2016-11-01

    The propagation of premixed flames in a medium initially at rest exhibits the appearance and competition of elementary local singularities called cusps. We investigate this problem both experimentally and numerically. An analytical solution of the two-dimensional Michelson-Sivashinsky equation is obtained as a composition of pole solutions, which is compared with experimental flame fronts propagating between glass plates separated by a thin gap. We demonstrate that the front dynamics can be reproduced numerically with good accuracy, from the linear stages of destabilization to the late-time evolution, using this model equation. In particular, the model accounts for the experimentally observed steady distribution of distances between cusps, which is well described by a one-parameter Gamma distribution, reflecting the aggregation-type interaction between the cusps. A modification of the Michelson-Sivashinsky equation taking gravity into account reproduces some other special features of these fronts. Aix-Marseille Univ., IRPHE, UMR 7342 CNRS, Centrale Marseille, Technopole de Château Gombert, 49 rue F. Joliot Curie, 13384 Marseille Cedex 13, France.
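
A minimal pseudo-spectral integration of the one-dimensional Michelson-Sivashinsky equation, phi_t + 0.5*(phi_x)^2 = nu*phi_xx + I(phi), where I has Fourier symbol |k| (the Darrieus-Landau term), illustrates the model-equation side of this work. The viscosity, domain size, and first-order exponential time stepping are illustrative choices, not the authors' numerical scheme.

```python
import numpy as np

# Pseudo-spectral sketch of phi_t = -0.5*(phi_x)^2 + nu*phi_xx + I(phi),
# with I(phi) having Fourier symbol |k| (Darrieus-Landau destabilization).
N, Lx, nu, dt, steps = 256, 2 * np.pi, 0.1, 1e-3, 5000
k = np.fft.fftfreq(N, d=Lx / N) * 2 * np.pi
lin = np.abs(k) - nu * k ** 2            # linear growth/damping per mode
E = np.exp(lin * dt)                     # exact integrating factor
dealias = np.abs(k) < N // 3             # 2/3-rule dealiasing

rng = np.random.default_rng(1)
phi_hat = np.fft.fft(1e-3 * rng.standard_normal(N))  # small random front

for _ in range(steps):
    phi_x = np.real(np.fft.ifft(1j * k * phi_hat))
    n_hat = -0.5 * np.fft.fft(phi_x ** 2) * dealias  # nonlinear term
    phi_hat = E * (phi_hat + dt * n_hat)             # first-order ETD step

phi = np.real(np.fft.ifft(phi_hat))
print(np.isfinite(phi).all(), phi.std() > 0)
```

Modes with |k| < 1/nu grow until the nonlinearity saturates them, after which the front develops the cusped profile discussed above.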

  1. Model Cilia - Experiments with Biomimetic Actuable Structures and Surfaces

    NASA Astrophysics Data System (ADS)

    Lloyd Carroll, R.

    2005-03-01

    The use of cilia to drive fluid flow is a common motif in living organisms, and in the tissues of higher organisms. By understanding the ways that cilia function (or do not function), potential therapies to treat human diseases (such as cystic fibrosis) may be devised. The complex hydrodynamics of flow in beating ciliary tissues (such as lung epithelial tissues) are challenging to study in cultured tissues, suggesting the need for model systems that will mimic the morphology and beat patterns of living systems. To reach this goal, we have fabricated high aspect ratio cilia-like structures with dimensions similar to those of a lung epithelial cilium (0.2 to 2.0 μm diameter by ~6 to 10 μm long). The structures and surfaces are composed of a magneto-elastomeric nanocomposite, allowing the actuation of artificial cilia by magnetic fields. We have studied the flexibility of the materials under conditions of flow (in microfluidics channels), and will present theoretical and experimental data from various efforts at actuation. We will discuss details of the fabrication of the ciliated structures and present results of mechanical characterization. The impact of this work on the understanding of fluid flow above ciliated cells and tissues and potential applications of such model systems will also be described.

  2. Proof of concept of an artificial muscle: theoretical model, numerical model, and hardware experiment.

    PubMed

    Haeufle, D F B; Günther, M; Blickhan, R; Schmitt, S

    2011-01-01

    Recently, the hyperbolic Hill-type force-velocity relation was derived from basic physical components. It was shown that a contractile element CE consisting of a mechanical energy source (active element AE), a parallel damper element (PDE), and a serial element (SE) exhibits operating points with hyperbolic force-velocity dependency. In this paper, the contraction dynamics of this CE concept were analyzed in a numerical simulation of quick-release experiments against different loads. A hyperbolic force-velocity relation was found. The results correspond to measurements of the contraction dynamics of a technical prototype. Deviations from the theoretical prediction could partly be explained by the low stiffness of the SE, which was modeled analogously to the metal spring in the hardware prototype. The numerical model and hardware prototype together are a proof of this CE concept and can be seen as a well-founded starting point for the development of Hill-type artificial muscles. This opens up new vistas for the technical realization of natural movements with rehabilitation devices.
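
The hyperbolic force-velocity relation at the heart of the CE concept, (F + a)(v + b) = (F0 + a)·b, can be written down directly; the parameter values below are illustrative rather than the prototype's.

```python
import numpy as np

# Hill's hyperbola: (F + a)(v + b) = (F0 + a)*b, the relation the
# CE (AE + PDE + SE) concept reproduces at its operating points.
F0, a, b = 10.0, 2.5, 0.1        # isometric force (N) and Hill constants (assumed)

def shortening_velocity(F):
    """Contraction velocity against load F from Hill's hyperbola."""
    return b * (F0 - F) / (F + a)

for F in np.linspace(0.0, F0, 6):    # sweep loads, as in quick-release tests
    print(round(F, 1), round(shortening_velocity(F), 4))
```

Velocity is maximal (b·F0/a) against zero load and falls hyperbolically to zero at the isometric force, which is the signature the quick-release simulations above recover.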

  3. The DASCh Experience: How to Model a Supply Chain. Chapter 1

    DTIC Science & Technology

    1998-01-01

    Chapter 1 The DASCh Experience: How to Model a Supply Chain H. Van Dyke Parunak Center for Electronic Commerce, ERIM 3600 Green Court, Suite 550... supply chain modeling, we have found agent-based modeling to be more flexible than ODE models for basic exploration. One phenomenon we discovered...Section 2 of this paper describes the supply chain problem. Section 3 reports the three models that we constructed. Section 4 reviews the roles of each

  4. Modal Test Experiences with a Jet Engine Fan Model

    NASA Astrophysics Data System (ADS)

    HOLLKAMP, J. J.; GORDON, R. W.

    2001-11-01

    High cycle fatigue in jet engine blades is caused by excessive vibration. Understanding the dynamic response of the bladed disk system is important in determining vibration levels. Modal testing is a useful tool in understanding the dynamic behavior of structures. However, modal tests are not conducted on bladed disks because of the difficulties involved. One problem is that the overall dynamic behavior is sensitive to small perturbations. Another problem is that multiple inputs and high-resolution techniques are required to separate modes that are nearly repeated. Two studies of engine blade response were recently completed in which bench modal tests were successfully performed on simplified fan models. The modal test procedures for the first study were successful in extracting the modal parameters. But the tests in the second study were more demanding. Ultimately, an approach was devised that accurately extracted the modal parameters. This paper describes the challenges and the evolution of the test procedures.
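
As a minimal illustration of the kind of modal parameter extraction such bench tests rely on, the half-power (3 dB) bandwidth method recovers the damping ratio from a synthesized single-mode FRF; the frequency and damping values below are arbitrary, and real bladed-disk tests need the multi-input, high-resolution techniques described above.

```python
import numpy as np

# Half-power bandwidth damping estimate on a synthesized single-mode FRF.
fn, zeta = 100.0, 0.01                       # natural frequency (Hz), damping ratio
f = np.linspace(80.0, 120.0, 40001)
H = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * zeta * f / fn) ** 2)

peak = H.argmax()
half = H[peak] / np.sqrt(2)                  # -3 dB level
above = np.where(H >= half)[0]
f1, f2 = f[above[0]], f[above[-1]]           # half-power frequencies
zeta_est = (f2 - f1) / (2 * f[peak])         # bandwidth / (2 * resonance)
print(round(zeta_est, 4))
```

For nearly repeated modes, as in bladed disks, the peaks overlap and this single-mode estimate breaks down, which is precisely why the tests described above were demanding.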

  5. Hydrocarbon adsorption on gold clusters: Experiment and quantum chemical modeling

    NASA Astrophysics Data System (ADS)

    Lanin, S. N.; Pichugina, D. A.; Shestakov, A. F.; Smirnov, V. V.; Nikolaev, S. A.; Lanina, K. S.; Vasil'Kov, A. Yu.; Zung, Fam Tien; Beletskaya, A. V.

    2010-12-01

    Heats of adsorption Q of n-alkanes C6-C9 on ZrO2 modified with gold and nickel nanoparticles were determined experimentally. The Q values were found to be higher on average by 7 kJ/mol on the modified samples as compared to the pure support. Density functional theory with the PBE functional and the pseudopotential for gold effectively allowing for relativistic corrections was used to model the adsorption of saturated hydrocarbons on Au and Au + Ni, as exemplified by the interaction of alkanes C1-C3 with Au_m and Au_(m-1)Ni (m = 3, 4, 5) clusters as well as the interaction of C1-C8 with Au20. Based on the calculation results, the probable coordination centers of alkanes on nanoparticle surfaces were found to be vertices and edges, whereas face localization was less probable.

  6. Overview of the NRL DPF program: Experiment and Modeling

    NASA Astrophysics Data System (ADS)

    Richardson, A. S.; Jackson, S. L.; Angus, J. R.; Giuliani, J. L.; Swanekamp, S. B.; Schumer, J. W.; Mosher, D.

    2016-10-01

    Charged particle acceleration in imploding plasmas is an important phenomenon that occurs in various natural and laboratory plasmas. A new research project at the Naval Research Laboratory (NRL) has been started to investigate this phenomenon both experimentally, in a dense plasma focus (DPF) device, and theoretically, using analytical and computational modeling. The DPF will be driven by the high-inductance (607 nH) Hawk pulsed-power generator, with a rise time of 1.2 μs and a peak current of 665 kA. In this poster we present an overview of the research project and some preliminary results from fluid simulations of the m = 0 instability in an idealized DPF pinch. This work was supported by the Naval Research Laboratory Base Program.

  7. Experiments on stiffened conical shell structures using cast epoxy models

    NASA Technical Reports Server (NTRS)

    Williams, J. G.; Davis, R. C.

    1973-01-01

    Description of a casting technique for fabricating high-quality plastic structural models, and review of results regarding the use of such specimens to parametrically study the effect of base ring stiffness on the critical buckling pressure of a ring-stiffened conical shell. The fabrication technique involves machining a metal mold to the desired configuration and vacuum-drawing the plastic material into the mold. A room-temperature-curing translucent thermoset epoxy was the casting material selected. A shell-of-revolution computer program which employs a nonlinear axisymmetric prebuckling strain field to obtain a bifurcation buckling solution was used to guide the selection of configurations tested. The shell experimentally exhibited asymmetric collapse behavior, and the ultimate load was considerably higher than the analytical bifurcation prediction. The asymmetric buckling mode shape, however, initially appeared at a pressure near the analysis bifurcation solution.

  8. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisnesros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven to be very efficient.

  9. Thermosiphon-based PCR reactor: experiment and modeling.

    PubMed

    Chen, Zongyuan; Qian, Shizhi; Abrams, William R; Malamud, Daniel; Bau, Haim H

    2004-07-01

    A self-actuated, flow-cycling polymerase chain reaction (PCR) reactor that takes advantage of buoyancy forces to continuously circulate reagents in a closed loop through various thermal zones has been constructed, tested, and modeled. The heating required for the PCR is advantageously used to induce fluid motion without the need for a pump. Flow velocities on the order of millimeters per second are readily attainable. In our preliminary prototype, we measured a cross-sectionally averaged velocity of 2.5 mm/s and a cycle time of 104 s. The flow velocity is nearly independent of the loop's length, making the device readily scalable. Successful amplifications of 700- and 305-bp fragments of Bacillus cereus genomic DNA have been demonstrated. Since the device does not require any moving parts, it is particularly suitable for miniature systems.
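
As a quick consistency check on the reported figures, the cycle time of a closed loop is simply loop length over mean velocity, so the measured 2.5 mm/s and 104 s imply a loop of roughly 26 cm:

```python
# Closed-loop thermosiphon: cycle time = loop length / mean velocity,
# so the two measured quantities fix the loop length.
v = 2.5e-3          # cross-sectionally averaged velocity, m/s
t_cycle = 104.0     # measured cycle time, s
loop_length = v * t_cycle
print(round(loop_length * 100, 1))   # loop length in cm
```

Because the buoyancy-driven velocity is nearly independent of loop length, a longer loop simply means a proportionally longer cycle, which is why the device scales readily.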

  10. Modeling Gene-Environment Interactions With Quasi-Natural Experiments.

    PubMed

    Schmitz, Lauren; Conley, Dalton

    2017-02-01

    This overview develops new empirical models that can effectively document Gene × Environment (G×E) interactions in observational data. Current G×E studies are often unable to support causal inference because they use endogenous measures of the environment or fail to adequately address the nonrandom distribution of genes across environments, confounding estimates. Comprehensive measures of genetic variation are incorporated into quasi-natural experimental designs to exploit exogenous environmental shocks or isolate variation in environmental exposure to avoid potential confounders. In addition, we offer insights from population genetics that improve upon extant approaches to address problems from population stratification. Together, these tools offer a powerful way forward for G×E research on the origin and development of social inequality across the life course.
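
The identification idea can be sketched as a regression with a gene-by-shock interaction, where the shock is exogenous by design so the interaction coefficient is not confounded by gene-environment correlation. The simulated data and coefficients below are purely illustrative.

```python
import numpy as np

# G x E with an exogenous environmental shock: regress the outcome on the
# genetic measure G, the shock E, and their interaction G*E. The data are
# simulated; the true interaction effect is set to 0.3.
rng = np.random.default_rng(42)
n = 5000
G = rng.normal(size=n)                   # e.g., a standardized polygenic score
E = rng.integers(0, 2, size=n)           # randomly assigned shock (0/1)
y = 0.5 * G + 1.0 * E + 0.3 * G * E + rng.normal(size=n)

X = np.column_stack([np.ones(n), G, E, G * E])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))                 # interaction estimate near 0.3
```

With an endogenous E, the same regression would conflate the G×E effect with gene-environment correlation, which is the core problem the quasi-natural designs above address.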

  11. Performance of Nanotube-Based Ceramic Composites: Modeling and Experiment

    NASA Technical Reports Server (NTRS)

    Curtin, W. A.; Sheldon, B. W.; Xu, J.

    2004-01-01

    The excellent mechanical properties of carbon nanotubes are driving research into the creation of new strong, tough nanocomposite systems. In this program, our initial work presented the first evidence of toughening mechanisms operating in carbon-nanotube-reinforced ceramic composites using a highly ordered array of parallel multiwall carbon nanotubes (CNTs) in an alumina matrix. Nanoindentation introduced controlled cracks and the damage was examined by SEM. These nanocomposites exhibit the three hallmarks of toughening in micron-scale fiber composites: crack deflection at the CNT/matrix interface; crack bridging by CNTs; and CNT pullout on the fracture surfaces. Furthermore, for certain geometries a new mechanism of nanotube collapse in shear bands was found, suggesting that these materials can have multiaxial damage tolerance. The quantitative indentation data and computational models were used to determine the multiwall CNT axial Young's modulus as 200-570 GPa, depending on the nanotube geometry and quality.

  12. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  13. The International Heat Stress Genotype Experiment for modeling wheat response to heat: field experiments and AgMIP-Wheat multi-model simulations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The data set contains a portion of the International Heat Stress Genotype Experiment (IHSGE) data used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat crop models and quantify the impact of heat on global wheat yield productivity. It includes two spring wheat cultivars grown during...

  14. Applying reactive models to column experiments to assess the hydrogeochemistry of seawater intrusion: Optimising ACUAINTRUSION and selecting cation exchange coefficients with PHREEQC

    NASA Astrophysics Data System (ADS)

    Boluda-Botella, N.; Valdes-Abellan, J.; Pedraza, R.

    2014-03-01

    Three sets of laboratory column experimental results concerning the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models utilise the hydrodynamic parameters determined using the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied relative to the discretisation. The relative square errors were obtained using different combinations of the spatial and temporal steps: the global error for the total experimental data and the partial error for each element. Good simulations of the three experiments were obtained using the ACUAINTRUSION software with slight variations in the selectivity coefficients for both sediments determined in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION are those reported by the Gapon convention with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory with the exchange coefficients from the PHREEQC database (or range), but those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, the adjusted exchange coefficients were determined to improve the simulation and are vastly different from the PHREEQC database or batch experiment values; however, these values fall in an order similar to the others determined under dynamic conditions. Different cation concentrations were simulated using the two software packages; this disparity could be attributed to the defined selectivity coefficients that affect the gypsum equilibrium. Consequently, different calculated sulphate concentrations are obtained using each type of software; a smaller mismatch was predicted using ACUAINTRUSION. In general, the presented

  15. (Im)precise nuclear forces: From experiment to model

    NASA Astrophysics Data System (ADS)

    Navarro Perez, Rodrigo

    2017-01-01

    The nuclear force is the most fundamental building block in nuclear science. It is the cornerstone of every nuclear application, from nuclear reactors to the production of heavy elements in supernovae. Despite being rigorously derived from the Standard Model, the actual determination of the nuclear force requires adjusting a set of parameters to reproduce experimental data. This introduces uncertainties that need to be quantified and propagated into all nuclear applications. I'll review a series of works on the determination of the nucleon-nucleon interaction from a collection of over 8000 elastic scattering data points. Statistical tools used in the selection of data and the propagation of statistical uncertainties will be presented. The implications for charge independence of the pion-nucleon coupling constant and the predictive power of chiral interactions will be discussed. Although this is not the final word on theoretical nuclear uncertainties, as other sources of error should be explored, this series of works lays the foundations for a new era of uncertainty quantification in nuclear applications. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344. Funding was also provided by the U.S. Department of Energy, Office of Science, Award DE-SC0008511 (NUCLEI SciDAC Collaboration).
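
The statistical machinery involved, a weighted least-squares (chi-square) fit plus a parameter covariance matrix for uncertainty propagation, can be sketched on a toy linear model; the data and model here are invented, whereas the actual analysis fits thousands of scattering observables.

```python
import numpy as np

# Weighted least-squares fit with a parameter covariance matrix, the basic
# machinery for propagating statistical uncertainty from data to parameters.
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.05 * np.ones_like(x)                   # measurement uncertainties
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)      # toy "data", truth = (1, 2)

A = np.column_stack([np.ones_like(x), x]) / sigma[:, None]   # weighted design
b = y / sigma
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
cov = np.linalg.inv(A.T @ A)                     # parameter covariance matrix
chi2 = np.sum((A @ theta - b) ** 2)
print(np.round(theta, 2), round(chi2 / (len(x) - 2), 2))
```

A chi-square per degree of freedom near one signals a statistically consistent fit, the criterion used to select the scattering database discussed above.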

  16. Collective motion of cells: from experiments to models.

    PubMed

    Méhes, Előd; Vicsek, Tamás

    2014-09-01

    Swarming or collective motion of living entities is one of the most common and spectacular manifestations of living systems that have been extensively studied in recent years. A number of general principles have been established. The interactions at the level of cells are quite different from those among individual animals; therefore, the study of collective motion of cells is likely to reveal some specific important features, which we plan to overview in this paper. In addition to presenting the most appealing results from the quickly growing related literature we also deliver a critical discussion of the emerging picture and summarize our present understanding of collective motion at the cellular level. Collective motion of cells plays an essential role in a number of experimental and real-life situations. In most cases the coordinated motion is a helpful aspect of the given phenomenon and results in making a related process more efficient (e.g., embryogenesis or wound healing), while in the case of tumor cell invasion it appears to speed up the progression of the disease. In these mechanisms cells both have to be motile and adhere to one another, the adherence feature being the most specific to this sort of collective behavior. One of the central aims of this review is to present the related experimental observations and treat them in light of a few basic computational models so as to make an interpretation of the phenomena at a quantitative level as well.
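
A standard starting point among the basic computational models in this literature is a Vicsek-style simulation: constant-speed particles align with the mean heading of their neighbours, subject to angular noise. All parameter values below are illustrative, and cell-cell adhesion, which the review stresses as specific to cells, is not included in this minimal sketch.

```python
import numpy as np

# Minimal Vicsek-style model: constant-speed particles in a periodic box
# align with neighbours within radius r, with angular noise of strength eta.
rng = np.random.default_rng(3)
N, L, v0, r, eta, steps = 200, 10.0, 0.3, 1.0, 0.3, 200

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(steps):
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                      # periodic boundaries
    near = (dx ** 2).sum(-1) < r ** 2               # neighbour mask (self incl.)
    mean_sin = (near * np.sin(theta)[None, :]).sum(1)
    mean_cos = (near * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L

# Polar order parameter: 0 for disordered headings, 1 for perfect alignment.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(round(order, 3))
```

Sweeping the noise strength eta reveals the order-disorder transition that much of the collective-motion literature builds on.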

  17. Antibacterial kaolinite/urea/chlorhexidine nanocomposites: Experiment and molecular modelling

    NASA Astrophysics Data System (ADS)

    Holešová, Sylva; Valášková, Marta; Hlaváč, Dominik; Madejová, Jana; Samlíková, Magda; Tokarský, Jonáš; Pazdziora, Erich

    2014-06-01

    Clay minerals are commonly used materials in pharmaceutical production, both as inorganic carriers and as active agents. The purpose of this study is the preparation and characterization of clay/antibacterial drug hybrids which can be further included in drug delivery systems for the treatment of oral infections. Novel nanocomposites with antibacterial properties were successfully prepared by ion exchange reaction from two types of kaolinite/urea intercalates and chlorhexidine diacetate. Intercalation compounds of kaolinite were prepared by reaction with solid urea in the absence of solvents (dry method) as well as with urea aqueous solution (wet method). The antibacterial activity of the two prepared samples against Enterococcus faecalis, Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa was evaluated by finding the minimum inhibitory concentration (MIC). Antibacterial studies of both samples showed the lowest MIC values (0.01%, w/v) after 1 day against E. faecalis, E. coli and S. aureus. A slightly worse antibacterial activity was observed against P. aeruginosa (MIC 0.12%, w/v) after 1 day. Since the samples showed very good antibacterial activity, especially after 1 day of action, they can be used as long-acting antibacterial materials. Prepared samples were characterized by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). The experimental data are supported by results of molecular modelling.

  18. Solute transport in intervertebral disc: experiments and finite element modeling.

    PubMed

    Das, D B; Welling, A; Urban, J P G; Boubriak, O A

    2009-04-01

    Loss of nutrient supply to the human intervertebral disc (IVD) cells is thought to be a major cause of disc degeneration in humans. To address this issue, transport of molecules of different sizes has been analyzed by a combination of experimental and modeling studies. Solute transport has been compared for steady-state and transient diffusion of several different solutes with molecular masses in the range 3-70 kDa, injected into parts of the disc where degeneration is thought most likely to occur first and into the blood supply to the disc. Diffusion coefficients of fluorescently tagged dextran molecules of different molecular weights have been measured in vitro using the concentration-gradient technique in thin specimens of disc outer annulus and nucleus pulposus. Diffusion coefficients were found to decrease with molecular weight following a nonlinear relationship, changing more rapidly for solutes with molecular masses below 10 kDa. Although unrealistic or painful, solutes injected directly into the disc achieve the largest disc coverage with concentrations that would be high enough to be of practical use. Although more practical, solutes injected into the blood supply do not penetrate to the central regions of the disc, and their concentrations dissipate more rapidly. Injection into the disc would be the best method to get drugs or growth factors to regions of degeneration in IVDs quickly; otherwise, solute concentrations must be kept high for several hours in the blood supply to the discs.
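
The transient-diffusion comparison can be illustrated with a simple explicit finite-difference scheme for one-dimensional diffusion into a slab held at a constant source concentration on one face; the geometry and diffusion coefficient are assumed for illustration, not the paper's measured values.

```python
import numpy as np

# Explicit finite-difference sketch of transient 1D diffusion into tissue
# from a constant-concentration source (e.g., the blood-supply boundary).
D = 1e-10             # m^2/s, order of a mid-sized solute in tissue (assumed)
Lx, nx = 5e-3, 101    # 5 mm slab, 101 nodes
dx = Lx / (nx - 1)
dt = 0.4 * dx * dx / D             # stable explicit step (coefficient < 0.5)
C = np.zeros(nx)
C[0] = 1.0                         # normalized source concentration

t, t_end = 0.0, 3600.0             # simulate one hour
while t < t_end:
    C[1:-1] += D * dt / dx ** 2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[0], C[-1] = 1.0, C[-2]       # fixed source; zero-flux far boundary
    t += dt

print(C[nx // 2])                  # mid-slab concentration after 1 h
```

Rerunning with the smaller D of a large (e.g., 70 kDa) solute shows how slowly high-molecular-weight molecules reach the disc centre, the practical point made above.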

  19. Model experiments of steam stimulation in Nigerian tar sands

    SciTech Connect

    Omole, O.; Omolara, D.A.

    1988-01-01

    The possibility of producing heavy oil from the Nigerian tar sand deposit by steam stimulation was investigated in the laboratory using one scaled and five unscaled physical models (tar sand packs). The effects of oil saturation and matrix grain size on oil recovery were also studied. A fabricated 91.44-cm-diameter, 33-cm-high high-pressure cast iron vessel (the prototype scaled down by a factor of 104), a 15-cm-diameter, 22.1-cm-high high-pressure stainless steel vessel, and two pressure-reducing valves were used for the study. Steam was obtained from a locally fabricated boiler. Heavy oil was obtained from oil seeping from the deposit. The results of the study showed that heavy oil could be produced by steam stimulation from the section of the deposit containing mobile heavy oil. When the same amount of steam was injected into similar sand packs containing different oil saturations, the highest oil recovery was obtained from the sand pack with the lowest oil saturation, implying that more steam will be required to produce from highly saturated heavy oil deposits. A greater amount of oil was produced from a sand pack with larger matrix grain size than from one with smaller matrix grain size for the same oil saturation, steam quantity, and quality.

  20. Exocytotic dynamics in human chromaffin cells: experiments and modeling.

    PubMed

    Albillos, Almudena; Gil, Amparo; González-Vélez, Virginia; Pérez-Álvarez, Alberto; Segura, Javier; Hernández-Vivanco, Alicia; Caba-González, José Carlos

    2013-02-01

    Chromaffin cells have been widely used to study neurosecretion since they exhibit a calcium dependence of several exocytotic steps similar to that of synaptic terminals, while having the enormous advantage of being neither as small and fast as neurons nor as slow as endocrine cells. In the present study, secretion associated with experimental measurements of the exocytotic dynamics in human chromaffin cells of the adrenal gland was simulated using a model that combines stochastic and deterministic approaches for short and longer depolarizing pulses, respectively. Experimental data were recorded from human chromaffin cells, obtained from healthy organ donors, using the perforated patch configuration of the patch-clamp technique. We have found that in human chromaffin cells, secretion would be mainly managed by small pools of non-equally fusion-competent vesicles, slowly refilled over time. Fast secretion evoked by brief pulses can be predicted only when 75% of one of these pools (the "ready releasable pool" of vesicles, abbreviated as RRP) is co-localized with Ca²⁺ channels, indicating an immediately releasable pool in the range reported for isolated bovine and rat cells (Álvarez and Marengo, J Neurochem 116:155-163, 2011). The need for spatial correlation and close proximity of vesicles to Ca²⁺ channels suggests that in human chromaffin cells there is tight control of the releasable vesicles available for fast secretion.

  1. Current understanding of divertor detachment: experiments and modelling

    SciTech Connect

    Wischmeier, W; Groth, M; Kallenbach, A; Chankin, A; Coster, D; Dux, R; Herrmann, A; Muller, H; Pugno, R; Reiter, D; Scarabosio, A; Watkins, J; Team, T D; Team, A U

    2008-05-23

    A qualitative as well as quantitative evaluation of experimentally observed plasma parameters in the detached regime proves to be difficult for several tokamaks. A series of ohmic discharges was performed in ASDEX Upgrade and DIII-D at plasma parameters as similar as possible and at different line-averaged densities, n̄ₑ. The experimental data represent a set of well-diagnosed discharges against which numerical simulations are compared. For the numerical modeling, the fluid code B2.5 coupled to the Monte Carlo neutral transport code EIRENE is used. Only the combined enhancement of effects such as geometry, drift terms, neutral conductance, increased radial transport, and divertor target composition explains a significant fraction of the experimentally observed asymmetries of the ion fluxes to the inner and outer target plates in ASDEX Upgrade as a function of n̄ₑ. The relative importance of the mechanisms leading to detachment is different in DIII-D and ASDEX Upgrade.

  2. Characteristics of the tensile mechanical properties of fresh and dry forewings of beetles.

    PubMed

    Tuo, Wanyong; Chen, Jinxiang; Wu, Zhishen; Xie, Juan; Wang, Yong

    2016-08-01

    Based on a tensile experiment and observations by scanning electron microscopy (SEM), this study demonstrated the characteristics of the tensile mechanical properties of the fresh and dry forewings of two types of beetles. The results revealed obvious differences in the tensile fracture morphologies and characteristics of the tensile mechanical properties of fresh and dry forewings of Cybister tripunctatus Olivier and Allomyrina dichotoma. For fresh forewings of these two types of beetles, a viscous, flow-like, polymer matrix plastic deformation was observed on the fracture surfaces, with soft morphologies and many fibers being pulled out, whereas on the dry forewings, the tensile fracture surfaces were straightforward, and there were no features resembling those found on the fresh forewings. The fresh forewings exhibited a greater fracture strain than the dry forewings, which was caused by the relative slippage of hydroxyl inter-chain bonds due to the presence of water in the fibers and proteins in the fresh forewings. Our study is the first to demonstrate the phenomenon of sudden stress drops caused by the fracturing of the lower skin because the lower skin fractured before the forewings of A. dichotoma reached their ultimate tensile strength. We also investigated the reasons underlying this phenomenon. This research provides a much better understanding of the mechanical properties of beetle forewings and facilitates the correct selection of study objects for biomimetic materials and development of the corresponding applications.

  3. Differentiation of fresh and frozen-thawed fish samples using Raman spectroscopy coupled with chemometric analysis.

    PubMed

    Velioğlu, Hasan Murat; Temiz, Havva Tümay; Boyaci, Ismail Hakki

    2015-04-01

    The potential of Raman spectroscopy was investigated in terms of its capability to discriminate fish species and determine sample freshness according to the number of freezing/thawing cycles the samples were exposed to. Species discrimination analysis was carried out on sixty-four fish samples from six different species, namely horse mackerel (Trachurus trachurus), European anchovy (Engraulis encrasicolus), red mullet (Mullus surmuletus), bluefish (Pomatamus saltatrix), Atlantic salmon (Salmo salar) and flying gurnard (Trigla lucerna). Afterwards, fish samples were exposed to different numbers of freezing/thawing cycles and separated into three batches, namely (i) fresh, (ii) once frozen-thawed (OF) and (iii) twice frozen-thawed (TF) samples, in order to perform the freshness analysis. The collected Raman data were used as inputs for chemometric analysis, which enabled the development of two main PCA models that successfully completed both the species discrimination and the freshness determination analyses.
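
    The chemometric step described above, projecting spectra onto principal components so that classes separate, can be sketched with plain NumPy; the "spectra" below are synthetic stand-ins, not the paper's Raman measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spectra": two classes (fresh vs frozen-thawed) differing in
# the intensity of one broad band; 20 samples x 200 wavenumber channels.
band = np.exp(-np.linspace(-3, 3, 200) ** 2)
fresh = rng.normal(0, 0.05, (10, 200)) + 1.0 * band
thawed = rng.normal(0, 0.05, (10, 200)) + 0.5 * band
spectra = np.vstack([fresh, thawed])

# PCA via SVD of the mean-centered data matrix.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ vt[0]

# The first principal component separates the two classes.
gap = abs(pc1_scores[:10].mean() - pc1_scores[10:].mean())
print(gap)
```

    In a real analysis the class clusters would be inspected in the score plot of the first few components rather than through a single gap statistic.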

  4. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE PAGES

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...

    2012-05-29

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
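
    The drive adjustment described, varying input laser power until a simulated observable matches its measurement, is at heart a one-dimensional calibration. A sketch using a toy power-to-shock-speed scaling (not the NIC simulation codes) and bisection:

```python
def shock_speed(power_mult):
    """Toy drive-multiplier -> shock-speed relation (km/s); illustrative only."""
    return 30.0 * power_mult ** 0.4

target = 33.0       # "measured" shock speed, fabricated for the example
lo, hi = 0.5, 2.0   # bracket on the drive multiplier
for _ in range(60): # bisection works because the relation is monotonic
    mid = 0.5 * (lo + hi)
    if shock_speed(mid) < target:
        lo = mid
    else:
        hi = mid

print(mid)  # multiplier whose simulated shock speed matches the target
```

    In practice each observable (shock speeds, merger times, bangtime) constrains a different part of the drive history, so the adjustment is repeated per pulse segment rather than with a single multiplier.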

  5. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    SciTech Connect

    Dryer, F.L.; Yetter, R.A.

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10⁻² to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), nondispersive infrared (NDIR), and FTIR methods for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are used to guide experimental parameter selection, interpret observations, extend the predictive range of mechanism constructs, and study the effects of diffusive transport coupling on reaction behavior in flames. Modeling builds on well-defined and validated mechanisms for the CO/H₂/oxidant systems.

  6. Spectral ellipsometry of GaSb: Experiment and modelling

    SciTech Connect

    Charache, G.W.; Muñoz, M.; Wei, K.; Pollak, F.H.; Freeouf, J.L.

    1999-05-01

    The optical constants ε(E) [= ε₁(E) + iε₂(E)] of single-crystal GaSb at 300 K have been measured using spectral ellipsometry in the range of 0.3-5.3 eV. The ε(E) spectra displayed distinct structures associated with critical points (CPs) at E₀ (direct gap), the spin-orbit split E₀ + Δ₀ component, the spin-orbit split E₁, E₁ + Δ₁ and E₀′, E₀′ + Δ₀′ doublets, as well as E₂. The experimental data over the entire measured spectral range (after oxide removal) have been fit using the Holden model dielectric function [Phys. Rev. B 56, 4037 (1997)] based on the electronic energy-band structure near these CPs plus excitonic and band-to-band Coulomb enhancement effects at E₀, E₀ + Δ₀, and the E₁, E₁ + Δ₁ doublet. In addition to evaluating the energies of these various band-to-band CPs, information about the binding energy (R₁) of the two-dimensional exciton related to the E₁, E₁ + Δ₁ CPs was obtained. The value of R₁ was in good agreement with effective mass/k·p theory. The ability to evaluate R₁ has important ramifications for recent first-principles band-structure calculations which include exciton effects at E₀, E₁, and E₂.

  7. Experiments And Model Development For The Investigation Of Sooting And Radiation Effects In Microgravity Droplet Combustion

    NASA Technical Reports Server (NTRS)

    Yozgatligil, Ahmet; Choi, Mun Young; Dryer, Frederick L.; Kazakov, Andrei; Dobashi, Ritsu

    2003-01-01

    This study involves flight experiments (for droplets between 1.5 and 5 mm) and supportive ground-based experiments, with concurrent numerical model development and validation. The experiments involve two fuels: n-heptane and ethanol. The diagnostic measurements include light extinction for soot volume fraction, two-wavelength pyrometry and thin-filament pyrometry for temperature, spectral detection for OH chemiluminescence, broadband radiometry for flame emission, and thermophoretic sampling with subsequent transmission electron microscopy for soot aerosol property calculations.

  8. Prenatal Experiences of Containment in the Light of Bion's Model of Container/Contained

    ERIC Educational Resources Information Center

    Maiello, Suzanne

    2012-01-01

    This paper explores the idea of possible proto-experiences of the prenatal child in the context of Bion's model of container/contained. The physical configuration of the embryo/foetus contained in the maternal uterus represents the starting point for an enquiry into the unborn child's possible experiences of its state of being contained in a…

  9. Test of the Behavioral Perspective Model in the Context of an E-Mail Marketing Experiment

    ERIC Educational Resources Information Center

    Sigurdsson, Valdimar; Menon, R. G. Vishnu; Sigurdarson, Johannes Pall; Kristjansson, Jon Skafti; Foxall, Gordon R.

    2013-01-01

    An e-mail marketing experiment based on the behavioral perspective model was conducted to investigate consumer choice. Conversion e-mails were sent to two groups from the same marketing database of registered consumers interested in children's books. The experiment was based on A-B-A-C-A and A-C-A-B-A withdrawal designs and consisted of sending B…

  10. Guided-Inquiry Experiments for Physical Chemistry: The POGIL-PCL Model

    ERIC Educational Resources Information Center

    Hunnicutt, Sally S.; Grushow, Alexander; Whitnell, Robert

    2015-01-01

    The POGIL-PCL project implements the principles of process-oriented, guided-inquiry learning (POGIL) in order to improve student learning in the physical chemistry laboratory (PCL) course. The inquiry-based physical chemistry experiments being developed emphasize modeling of chemical phenomena. In each experiment, students work through at least…

  11. The Impact of Sexual Orientation on Women's Midlife Experience: A Transition Model Approach

    ERIC Educational Resources Information Center

    Boyer, Carol Anderson

    2007-01-01

    Sexual orientation is an integral part of identity affecting every stage of an individual's development. This literature review examines women's cultural experiences based on sexual orientation and their effect on midlife experience. A developmental model is offered that incorporates sexual orientation as a contextual factor in this developmental…

  12. Modeling Valuations from Experience: A Comment on Ashby and Rakow (2014)

    ERIC Educational Resources Information Center

    Wulff, Dirk U.; Pachur, Thorsten

    2016-01-01

    What are the cognitive mechanisms underlying subjective valuations formed on the basis of sequential experiences of an option's possible outcomes? Ashby and Rakow (2014) have proposed a sliding window model (SWIM), according to which people's valuations represent the average of a limited sample of recent experiences (the size of which is estimated…

  13. Children's Experiences of Disability: Pointers to a Social Model of Childhood Disability

    ERIC Educational Resources Information Center

    Connors, Clare; Stalker, Kirsten

    2007-01-01

    The social model of disability has paid little attention to disabled children, with few attempts to explore how far it provides an adequate explanatory framework for their experiences. This paper reports findings from a two-year study exploring the lived experiences of 26 disabled children aged 7-15. They experienced disability in four ways--in…

  14. The Development of a Conceptual Model of Student Satisfaction with Their Experience in Higher Education

    ERIC Educational Resources Information Center

    Douglas, Jacqueline; McClelland, Robert; Davies, John

    2008-01-01

    Purpose: The purpose of this paper is to introduce a conceptual model of student satisfaction with their higher education (HE) experience, based on the identification of the variable determinants of student perceived quality and the impact of those variables on student satisfaction and/or dissatisfaction with the overall student experience. The…

  15. Brain tumor imaging of rat fresh tissue using terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Sayuri; Fukushi, Yasuko; Kubota, Oichi; Itsuji, Takeaki; Ouchi, Toshihiko; Yamamoto, Seiji

    2016-07-01

    Tumor imaging by terahertz spectroscopy of fresh tissue without dye is demonstrated using samples from a rat glioma model. The complex refractive index spectrum obtained by a reflection terahertz time-domain spectroscopy system can discriminate between normal and tumor tissues. Both the refractive index and absorption coefficient of tumor tissues are higher than those of normal tissues and can be attributed to the higher cell density and water content of the tumor region. The results of this study indicate that terahertz technology is useful for detecting brain tumor tissue.

  16. Brain tumor imaging of rat fresh tissue using terahertz spectroscopy

    PubMed Central

    Yamaguchi, Sayuri; Fukushi, Yasuko; Kubota, Oichi; Itsuji, Takeaki; Ouchi, Toshihiko; Yamamoto, Seiji

    2016-01-01

    Tumor imaging by terahertz spectroscopy of fresh tissue without dye is demonstrated using samples from a rat glioma model. The complex refractive index spectrum obtained by a reflection terahertz time-domain spectroscopy system can discriminate between normal and tumor tissues. Both the refractive index and absorption coefficient of tumor tissues are higher than those of normal tissues and can be attributed to the higher cell density and water content of the tumor region. The results of this study indicate that terahertz technology is useful for detecting brain tumor tissue. PMID:27456312

  17. Mathematical models as aids for design and development of experiments: the case of transgenic mosquitoes.

    PubMed

    Robert, Michael A; Legros, Mathieu; Facchinelli, Luca; Valerio, Laura; Ramsey, Janine M; Scott, Thomas W; Gould, Fred; Lloyd, Alun L

    2012-11-01

    We demonstrate the utility of models as aids in the design and development of experiments aimed at measuring the effects of proposed vector population control strategies. We describe the exploration of a stochastic, age-structured model that simulates field cage experiments that test the ability of a female-killing strain of the mosquito Aedes aegypti (L.) to suppress a wild-type population. Model output predicts that choices of release ratio and population size can impact mean extinction time and variability in extinction time among experiments. We find that unless fitness costs are >80% they will not be detectable in experiments with high release ratios. At lower release ratios, the predicted length of the experiment increases significantly for fitness costs >20%. Experiments with small populations may more accurately reflect field conditions, but extinction can occur even in the absence of a functional female-killing construct because of stochastic effects. We illustrate how the model can be used to explore experimental designs that aim to study the impact of density dependence and immigration; predictions indicate that cage population eradication may not always be obtainable in an operationally realistic time frame. We propose a method to predict the extinction time of a cage population based on the rate of population reduction with the goal of shortening the duration of the experiment. We discuss the model as a tool for exploring and assessing the utility of a wider range of scenarios than would be feasible to test experimentally because of financial and temporal constraints.
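
    The kind of stochastic cage-population model discussed can be illustrated with a toy generation-by-generation simulation; the mating and fecundity rules below are simplified stand-ins, not the paper's age-structured model:

```python
import numpy as np

def extinction_time(n_females=50, release_ratio=5.0, fecundity=2.0,
                    max_gens=200, seed=1):
    """Toy stochastic female-killing (FK) release model.

    Each generation a female mates a wild male with probability
    1/(1 + release_ratio); daughters of FK fathers do not survive, and
    wild-mated females leave Poisson(fecundity) daughters. Returns the
    generation at which the female population hits zero (or max_gens).
    All parameter values are illustrative, not taken from the study.
    """
    rng = np.random.default_rng(seed)
    females = n_females
    for gen in range(1, max_gens + 1):
        wild_mated = rng.binomial(females, 1.0 / (1.0 + release_ratio))
        females = rng.poisson(fecundity * wild_mated)
        if females == 0:
            return gen
    return max_gens

# Replicates vary because of demographic stochasticity alone.
times = [extinction_time(seed=s) for s in range(20)]
print(min(times), max(times), sum(times) / len(times))
```

    Even this toy version reproduces the qualitative point made above: with small populations, extinction times are highly variable across replicates, and extinction can occur quickly by chance.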

  18. Forecast experiments with the Community Atmospheric Model (CAM) over the Tropical Warm Pool - International Cloud Experiment (TWP-ICE)

    NASA Astrophysics Data System (ADS)

    Boyle, J. S.; Klein, S.

    2008-12-01

    The Tropical Warm Pool International Cloud Experiment (TWP-ICE) took place from 20 January 2006 to 14 February 2006, centered on Darwin, Australia. The motivation behind the design of the observations for TWP-ICE was to better understand the factors that control tropical convection. Additionally, the experiment sought to describe how the characteristics of the convection affect the microphysics of the clouds, particularly the deep convective anvils and tropical cirrus. A chief goal of TWP-ICE was to provide information on tropical processes for the improvement of the parameterization of clouds in numerical weather prediction (NWP) and climate models. The TWP-ICE experiment combined aspects of previous observational campaigns, specifically a dense rawinsonde network, high-altitude aircraft sampling, and airborne and ground-based radar and lidar; observations from geostationary and polar orbiting satellites were also used. The CAM experiments consisted of changing the cloud microphysics parameterizations and running at three different horizontal resolutions. The CAM simulations were performed using the finite volume dynamical core with grids of 1.9° x 2.5°, 0.9° x 1.25° and 0.47° x 0.63°. The cloud microphysics parameterizations used were the default CAM single-moment scheme and a new double-moment parameterization that predicts both the number and mass of liquid and ice condensate. The model was initialized with the state variables (wind, temperature, moisture and surface pressure) taken from the ECMWF operational analyses. The forecasts are for 5 days starting at 00Z. The results presented will focus on the short-term day 1 (24-48 h) of the forecasts. The validation of cloud properties requires the coordination of several different observational platforms, including a millimeter cloud radar and microwave radiometer at Darwin as well as rawinsondes. The new microphysics scheme produces better estimates of the cloud

  19. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
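
    Practical identifiability of the sort targeted here is commonly assessed through the Fisher information matrix built from local parameter sensitivities at the planned sampling times. A sketch for a hypothetical one-compartment PK model, C(t) = (D/V)·exp(-k·t) (not one of the paper's bacterial growth/kill models):

```python
import numpy as np

# Illustrative dose, volume of distribution, and elimination rate.
D, V, k = 100.0, 10.0, 0.3
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # candidate sampling schedule

conc = (D / V) * np.exp(-k * times)
# Analytic sensitivities dC/dV and dC/dk at the sampling times.
dC_dV = -(D / V**2) * np.exp(-k * times)
dC_dk = -(D / V) * times * np.exp(-k * times)
S = np.column_stack([dC_dV, dC_dk])

fim = S.T @ S
# A full-rank (well-conditioned) FIM indicates the design can locally
# identify (V, k); a near-singular FIM flags a practical identifiability
# problem that a better sampling schedule should fix.
print(np.linalg.matrix_rank(fim), np.linalg.det(fim))
```

    Model-based design of experiments then optimizes the sampling times (or doses) to maximize a scalar function of this matrix, such as its determinant (D-optimality).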

  20. A revised model for microbially induced calcite precipitation: Improvements and new insights based on recent experiments

    NASA Astrophysics Data System (ADS)

    Hommel, Johannes; Lauchnor, Ellen; Phillips, Adrienne; Gerlach, Robin; Cunningham, Alfred B.; Helmig, Rainer; Ebigbo, Anozie; Class, Holger

    2015-05-01

    The model for microbially induced calcite precipitation (MICP) published by Ebigbo et al. (2012) has been improved based on new insights obtained from experiments and model calibration. The challenge in constructing a predictive model for permeability reduction in the subsurface with MICP is the quantification of the complex interaction between flow, transport, biofilm growth, and reaction kinetics. New data from Lauchnor et al. (2015) on whole-cell ureolysis kinetics from batch experiments were incorporated into the model, which has allowed for a more precise quantification of the relevant parameters as well as a simplification of the reaction kinetics in the equations of the model. Further, the model has been calibrated objectively by inverse modeling using quasi-1D column experiments and a radial flow experiment. From the postprocessing of the inverse modeling, a comprehensive sensitivity analysis has been performed with a focus on the model input parameters that were fitted in the course of the model calibration. It reveals that calcite precipitation and the concentrations of NH₄⁺ and Ca²⁺ are particularly sensitive to parameters associated with the ureolysis rate and the attachment behavior of biomass. Based on the determined sensitivities and the ranges of values for the estimated parameters in the inversion, it is possible to identify focal areas where further research can have a high impact toward improving the understanding and engineering of MICP.
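
    A local sensitivity analysis like the one described can be sketched as normalized finite-difference derivatives of a model output with respect to its calibrated parameters; the rate expression below is a toy stand-in, not the MICP model itself:

```python
import numpy as np

def calcite_rate(params):
    """Toy stand-in output: a rate limited by ureolysis and biomass attachment."""
    k_urease, k_attach = params
    return k_urease * k_attach / (1.0 + k_attach)

base = np.array([2.0, 0.5])  # illustrative "calibrated" parameter values

def norm_sensitivity(f, p, i, rel=1e-4):
    """Normalized local sensitivity (dY/Y)/(dp/p) via central differences."""
    up, down = p.copy(), p.copy()
    up[i] *= 1 + rel
    down[i] *= 1 - rel
    return (f(up) - f(down)) / (2 * rel * f(p))

sens = {name: norm_sensitivity(calcite_rate, base, i)
        for i, name in enumerate(["k_urease", "k_attach"])}
print(sens)  # k_urease enters linearly, so its normalized sensitivity is 1
```

    Parameters with the largest normalized sensitivities (here the ureolysis-rate parameter) are exactly those whose uncertainty most strongly propagates into the predicted precipitation, which is how such an analysis points to the focal areas mentioned above.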