Sample records for ascale model experiment

  1. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated that procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
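    The abstract names the three-parameter logistic (3PL) model and Newton-Raphson estimation only at a high level. As a hypothetical sketch (not ASCAL's actual algorithm), the following calibrates a single item's difficulty parameter b by Newton-Raphson on simulated responses, using numeric derivatives in place of the program's analytic multivariate ones; the discrimination a and guessing c values are illustrative:

```python
import numpy as np

def p3pl(theta, a, b, c, D=1.7):
    # three-parameter logistic item response function
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def neg_log_lik(b, theta, resp, a, c):
    p = np.clip(p3pl(theta, a, b, c), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

def newton_step(b, theta, resp, a, c, h=1e-4):
    # Newton-Raphson update for difficulty b, with finite-difference derivatives
    f = lambda x: neg_log_lik(x, theta, resp, a, c)
    grad = (f(b + h) - f(b - h)) / (2 * h)
    hess = (f(b + h) - 2 * f(b) + f(b - h)) / h ** 2
    return b - grad / hess

# toy calibration: 500 simulated examinees answering one item (true b = 0.5)
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
resp = (rng.random(500) < p3pl(theta, a=1.2, b=0.5, c=0.2)).astype(float)

b_hat = 0.0
for _ in range(20):
    b_hat = newton_step(b_hat, theta, resp, a=1.2, c=0.2)
```

    A full calibration would iterate the same update jointly over (a, b, c) for all items, which is where the "modified multivariate" qualifier in the abstract comes in.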

  2. Ethical Failure and Its Operational Cost

    DTIC Science & Technology

    2011-12-01

    Appendix 1: The Kohlberg Scale of Moral Development...has a significant impact on the choices individuals make. Psychologist Lawrence Kohlberg looked extensively into moral reasoning and developed a...scale to discern between the different levels of moral reasoning found in individuals. The Kohlberg Scale of Moral Development identifies three

  3. Effects of applied pressure on hot-pressing of Beta-SiC

    NASA Technical Reports Server (NTRS)

    Kinoshita, M.; Matsumura, H.; Iwasa, M.; Hayami, R.

    1984-01-01

    The effects of applied pressure on densification during hot pressing of beta-SiC compacts were investigated. The beta-SiC powder, supplied by Starck, has an average particle size of about 0.7 micrometer. Hot pressing experiments were carried out in graphite dies at temperatures of 1700 to 2300 deg C and at pressures up to 1000 kg/sq cm. Compacts containing 1 weight percent B4C were examined. Sintered compacts were examined for microstructure, and Rockwell A-scale hardness was measured. The B4C addition was very effective in mitigating the hot pressing conditions. Densification was found to proceed through strengthening of interparticle bonding rather than through particle deformation under concentrated stress.

  4. RF-Trapped Chip Scale Helium Ion Pump (RFT-CHIP)

    DTIC Science & Technology

    2016-04-06

    A miniaturized (~1 cc) magnet-less RF electron trap for a helium ion pump is studied, addressing challenges associated with active...pump, ion pump, electron trap, magnet-less, MEMS, radiofrequency...scale ion pumps. The Penning cell structure consists of three electrodes (an anode and two cathodes) and a magnet. Planar titanium cathodes are

  5. Surface Meteorology over the GATE A-Scale.

    DTIC Science & Technology

    1978-11-01

    ...their results with those obtained using the Bulk Method; ...(1977)... that the drag coefficient for disturbed...

  6. Specific Air Pollutants from Munitions Processing and Their Atmospheric Behavior. Volume 3. TNT Production.

    DTIC Science & Technology

    1978-01-01

    [figure: Floyd County map, scale in miles] ...through the blower. In the primary catch tank, gravity separation allows some of the water to be returned to the venturi scrubber. The remaining pink

  7. On the Pricing of American Options.

    DTIC Science & Technology

    1986-05-01

    in the construction of a "hedging portfolio" (section 4). In particular, (2.7) and (2.8) imply n = d, i.e., that there exist exactly as many stocks as...called hedging property of the portfolio; we impose it by postulating that the process ... (3.7) ...European claim: the gains from the portfolio and the gains from the claim should coincide, so that no arbitrage opportunities could exist. Equivalently
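    The no-arbitrage hedging argument sketched in this snippet is continuous-time. A discrete-time analogue that is easy to run is binomial backward induction for an American put, where the early-exercise comparison at each node plays the role of the free-boundary condition; this is an illustrative standard technique, not the paper's construction:

```python
import math

def american_put_binomial(S0, K, r, sigma, T, n=200):
    # CRR binomial tree, backward induction with an early-exercise check
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-move probability
    # option values at maturity (node j has j up-moves)
    values = [max(K - S0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        values = [
            max(K - S0 * u ** j * d ** (i - j),                    # exercise now
                disc * (q * values[j + 1] + (1 - q) * values[j]))  # continue
            for j in range(i + 1)
        ]
    return values[0]
```

    The max() at each node is exactly the "gains from the portfolio coincide with gains from the claim" condition restated in discrete time: the holder takes the better of immediate exercise and the replicating-portfolio continuation value.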

  8. The temporal response of recombination events to gamma radiation of meiotic cells in Sordaria brevicollis.

    PubMed

    Lewis, L A

    1982-01-01

    The temporal frequencies of different stages of prophase I were determined cytologically in Sordaria brevicollis (Olive and Fantini) as the basis for ascertaining the degree of synchrony in meiosis in this ascomycete. Croziers, karyogamy-zygotene and pachytene asci were shown to be in significant majorities at three distinct periods of the meiotic cycle. The response of recombination frequency to ionizing radiation was examined for the entire meiotic cycle. Three radiosensitive periods were determined. This response, which correlated temporally with each of the three peaks in ascal frequency, is interpreted as showing that the meiotic cycle of this organism is divided into periods of recombination commitment (radiation reduced frequencies) during the pre-meiotic S phase and recombination consummation (radiation induced frequencies) during zygotene and pachytene. The results are discussed in the context of the time at which recombination is consummated in eukaryotes such as yeast and Drosophila.

  9. FCET2EC (From controlled experimental trial to everyday communication): How effective is intensive integrative therapy for stroke-induced chronic aphasia under routine clinical conditions? A study protocol for a randomized controlled trial.

    PubMed

    Baumgaertner, Annette; Grewe, Tanja; Ziegler, Wolfram; Floel, Agnes; Springer, Luise; Martus, Peter; Breitenstein, Caterina

    2013-09-23

    Therapy guidelines recommend speech and language therapy (SLT) as the "gold standard" for aphasia treatment. Treatment intensity (i.e., ≥5 hours of SLT per week) is a key predictor of SLT outcome. The scientific evidence to support the efficacy of SLT is unsatisfactory to date given the lack of randomized controlled trials (RCT), particularly with respect to chronic aphasia (lasting for >6 months after initial stroke). This randomized waiting list-controlled multi-centre trial examines whether intensive integrative language therapy provided in routine in- and outpatient clinical settings is effective in improving everyday communication in chronic post-stroke aphasia. Participants are men and women aged 18 to 70 years, at least 6 months post an ischemic or haemorrhagic stroke resulting in persisting language impairment (i.e., chronic aphasia); 220 patients will be screened for participation, with the goal of including at least 126 patients during the 26-month recruitment period. Basic language production and comprehension abilities need to be preserved (as assessed by the Aachen Aphasia Test). Therapy consists of language-systematic and communicative-pragmatic exercises for at least 2 hours/day and at least 10 hours/week, plus at least 1 hour of self-administered training per day, for at least three weeks. Contents of therapy are adapted to patients' individual impairment profiles. Prior to and immediately following the therapy/waiting period, patients' individual language abilities are assessed via primary and secondary outcome measures. The primary (blinded) outcome measure is the A-scale (informational content, or 'understandability', of the message) of the Amsterdam-Nijmegen Everyday Language Test (ANELT), a standardized measure of functional communication ability. Secondary (unblinded) outcome measures are language-systematic and communicative-pragmatic language screenings and questionnaires assessing quality of life as viewed by the patient as well as a relative. The primary analysis tests for differences between the therapy group and an untreated (waiting list) control group with respect to pre- versus post 3-week-therapy (or waiting period, respectively) scores on the ANELT A-scale. Statistical between-group comparisons of primary and secondary outcome measures will be conducted in intention-to-treat analyses. Long-term stability of treatment effects will be assessed six months post intensive SLT (primary and secondary endpoints). Registered in ClinicalTrials.gov with the Identifier NCT01540383.

  10. Differentiation of involved and uninvolved psoriatic skin from healthy skin using noninvasive visual, colorimeter and evaporimeter methods.

    PubMed

    Pershing, L K; Bakhtian, S; Wright, E D; Rallis, T M

    1995-08-01

    Uninvolved skin of psoriasis patients may not be entirely normal. The objective was to characterize healthy skin, uninvolved psoriatic skin and lesional skin by biophysical methods. Involved and uninvolved psoriatic skin and age- and gender-matched healthy skin were measured objectively with a colorimeter and evaporimeter and subjectively with visual assessment in 14 subjects. Visual assessments of erythema (E), scaling (S) and induration (I), as well as the target lesion score, at the involved psoriatic skin sites were significantly elevated (p<0.05) above uninvolved psoriatic or healthy skin sites. No difference between uninvolved psoriatic and healthy skin was measured visually. Transepidermal water loss at involved psoriatic skin > uninvolved psoriatic skin > healthy skin (p<0.05). Objective assessment of skin color in three color scales, L*, a*, and b*, differentiated involved and uninvolved psoriatic skin from healthy skin sites. Involved psoriatic skin demonstrated higher (p<0.01) a* scale values and lower (p<0.01) L* and b* scale values than uninvolved psoriatic skin. Further, colorimeter L* and a* scale values at uninvolved psoriatic skin sites were lower and higher (p<0.05), respectively, than at healthy skin. The individual chromameter parameters (L*, a*, b*) correlated well with the visual parameters (E, S and I). The composite colorimeter descriptor (L* × b*)/a* significantly differentiated healthy skin from both involved and uninvolved psoriatic skin. These collective data highlight that even visually normal-appearing uninvolved psoriatic skin is compromised compared with healthy skin. The objective, noninvasive, differential capabilities of the colorimeter and evaporimeter will aid in the mechanistic quantification of new psoriatic drug therapies and, in conjunction with biochemical studies, add to understanding of the multifactorial pathogenesis of psoriasis.
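    The composite descriptor (L* × b*)/a* reported above is simple arithmetic on chromameter readings; a minimal sketch with hypothetical values (not the study's data), chosen so that involved skin is redder (higher a*) and darker (lower L*, b*) than healthy skin:

```python
def composite_color_index(L, a, b):
    # composite colorimeter descriptor (L* x b*) / a* from the abstract
    return (L * b) / a

# illustrative chromameter readings, not measured values
healthy  = composite_color_index(L=65.0, a=10.0, b=15.0)
involved = composite_color_index(L=55.0, a=18.0, b=11.0)
```

    Because a* appears in the denominator and L*, b* in the numerator, the index moves in the same direction for all three of the reported univariate differences, which is presumably why the composite separates the groups more cleanly than any single scale.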

  11. Neutrons and gamma-rays spectroscopy of Mercury surface: global mapping from ESA MPO-BepiColombo spacecraft by MGNS instrument.

    NASA Astrophysics Data System (ADS)

    Kozyrev, A. S.; Gurvits, L. I.; Litvak, M. L.; Malakhov, A. A.; Mokrousov, M. I.; Mitrofanov, I. G.; Rogozhin, A. A.; Sanin, A. B.; Owens, A.; Schvetsov, V. N.

    2009-04-01

    To analyse the chemical composition of Mercury's subsurface we will apply the method of remote neutron sensing. This method can be used to study celestial bodies of the Solar system without thick atmospheres, such as the Moon, Mars, Phobos and Mercury, by analysing induced nuclear gamma-rays and neutron emission. These gamma-rays and neutrons are produced by energetic galactic cosmic rays colliding with nuclei of the regolith within a 1-2 meter subsurface layer. The Mercury Planetary Orbiter (MPO) of the BepiColombo mission includes the nuclear instrument MGNS (Mercury Gamma-rays and Neutrons Spectrometer), which consists of a gamma-ray spectrometer for detection of gamma-ray lines and a neutron spectrometer for measurement of the neutron leakage flux. To test known theoretical models of Mercury's composition, MGNS will provide data for the set of gamma-ray lines that are necessary and sufficient to discriminate between the models. Neutron data are known to be very sensitive to the presence of hydrogen among heavy soil-constituting elements. Mapping measurements of epithermal neutrons and of the 2.2 MeV line will allow us to study the content of hydrogen over the surface of Mercury and to test for the presence of water ice deposits in the cold traps of permanently shadowed polar craters of this planet. There are also three natural radioactive elements, K, Th and U, whose content in the soil of a celestial body characterizes the physical conditions of its formation in the proto-planetary cloud. The data from the gamma-ray spectrometer will allow the origin of Mercury to be compared with the evolution of the Earth, Moon and Mars. Three sensors for thermal and epithermal neutrons are made with similar 3He proportional counters, but have different polyethylene enclosures and cadmium shielding to provide different sensitivities to thermal and epithermal neutrons in different energy ranges. 
    The fourth neutron sensor, for high energy neutrons of 1-10 MeV, contains a cylindrical stilbene scintillation crystal of size Ø30×40 cm. The gamma-ray spectrometer contains a LaBr3 scintillation crystal for detection of gamma-ray photons with a very high spectral resolution of 3% at 662 keV. The total mass of the MGNS instrument is 5.2 kg; it consumes 4.0 W of power and provides about 9.0 Mb of telemetry data per day. At present, the MGNS instrument is under development for implementation on the MPO of the BepiColombo mission as the contribution of the Federal Space Agency of Russia to this ESA project. In comparison with the gamma-ray spectrometer onboard NASA's Messenger interplanetary probe, which will provide mapping data for the northern hemisphere of the planet only because of its elliptical orbit, the MGNS onboard the MPO will provide global mapping of the planet with similar coverage of the southern and northern hemispheres of Mercury.

  12. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. 
Cross-site science questions are being developed that will have a strong modeling framework, and modelers and experimentalists will work to establish common measurement protocols and data format. By starting the model-experiment connection early and learning from our past experiences, we expect to significantly shorten the time lags between advances in process-oriented studies and large-scale models.

  13. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  14. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  15. Development of the Play Experience Model to Enhance Desirable Qualifications of Early Childhood

    ERIC Educational Resources Information Center

    Panpum, Watchara; Soonthornrojana, Wimonrat; Nakunsong, Thatsanee

    2015-01-01

    The objectives of this research were to develop the play experience model and to study the effect of usage in play experience model for enhancing the early childhood's desirable qualification. There were 3 phases of research: 1) the document and context in experience management were studied, 2) the play experience model was developed, and 3) the…

  16. A Generalized Quantum-Inspired Decision Making Model for Intelligent Agent

    PubMed Central

    Loo, Chu Kiong

    2014-01-01

    A novel quantum-inspired decision-making model for intelligent agents is proposed. A formal, generalized solution to the problem is given. Mathematically, the proposed model is capable of modeling higher-dimensional decision problems than previous research. Four experiments were conducted, and both the empirical results and the proposed model's results are given for each experiment. The experiments showed that the results of the proposed model agree with the empirical results perfectly. The proposed model provides a new direction for researchers in resolving the cognitive basis for designing intelligent agents. PMID:24778580

  17. Intensive speech and language therapy in patients with chronic aphasia after stroke: a randomised, open-label, blinded-endpoint, controlled trial in a health-care setting.

    PubMed

    Breitenstein, Caterina; Grewe, Tanja; Flöel, Agnes; Ziegler, Wolfram; Springer, Luise; Martus, Peter; Huber, Walter; Willmes, Klaus; Ringelstein, E Bernd; Haeusler, Karl Georg; Abel, Stefanie; Glindemann, Ralf; Domahs, Frank; Regenbrecht, Frank; Schlenck, Klaus-Jürgen; Thomas, Marion; Obrig, Hellmuth; de Langen, Ernst; Rocker, Roman; Wigbers, Franziska; Rühmkorf, Christina; Hempen, Indra; List, Jonathan; Baumgaertner, Annette

    2017-04-15

    Treatment guidelines for aphasia recommend intensive speech and language therapy for chronic (≥6 months) aphasia after stroke, but large-scale, class 1 randomised controlled trials on treatment effectiveness are scarce. We aimed to examine whether 3 weeks of intensive speech and language therapy under routine clinical conditions improved verbal communication in daily-life situations in people with chronic aphasia after stroke. In this multicentre, parallel group, superiority, open-label, blinded-endpoint, randomised controlled trial, patients aged 70 years or younger with aphasia after stroke lasting for 6 months or more were recruited from 19 inpatient or outpatient rehabilitation centres in Germany. An external biostatistician used a computer-generated permuted block randomisation method, stratified by treatment centre, to randomly assign participants to either 3 weeks or more of intensive speech and language therapy (≥10 h per week) or 3 weeks deferral of intensive speech and language therapy. The primary endpoint was between-group difference in the change in verbal communication effectiveness in everyday life scenarios (Amsterdam-Nijmegen Everyday Language Test A-scale) from baseline to immediately after 3 weeks of treatment or treatment deferral. All analyses were done using the modified intention-to-treat population (those who received 1 day or more of intensive treatment or treatment deferral). This study is registered with ClinicalTrials.gov, number NCT01540383. We randomly assigned 158 patients between April 1, 2012, and May 31, 2014. The modified intention-to-treat population comprised 156 patients (78 per group). Verbal communication was significantly improved from baseline to after intensive speech and language treatment (mean difference 2·61 points [SD 4·94]; 95% CI 1·49 to 3·72), but not from baseline to after treatment deferral (-0·03 points [4·04]; -0·94 to 0·88; between-group difference Cohen's d 0·58; p=0·0004). 
    Eight patients had adverse events during therapy or treatment deferral (one car accident [in the control group], two common colds [one patient per group], three gastrointestinal or cardiac symptoms [all intervention group], two recurrent strokes [one in the intervention group before initiation of treatment, and one before group assignment had occurred]); all were unrelated to study participation. 3 weeks of intensive speech and language therapy significantly enhanced verbal communication in people aged 70 years or younger with chronic aphasia after stroke, providing an effective evidence-based treatment approach in this population. Future studies should examine the minimum treatment intensity required for meaningful treatment effects, and determine whether treatment effects cumulate over repeated intervention periods. German Federal Ministry of Education and Research and the German Society for Aphasia Research and Treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
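    The between-group effect size reported above (Cohen's d 0.58) is the standardized difference between the two groups' change scores. A minimal sketch of the computation with illustrative numbers (not the trial's data):

```python
import math

def cohens_d(x, y):
    # Cohen's d for two independent groups, using the pooled standard deviation
    nx, ny = len(x), len(y)
    mx = sum(x) / nx
    my = sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # unbiased variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

# hypothetical ANELT A-scale change scores (post minus pre) per group
therapy_changes  = [2, 3, 4]
deferral_changes = [0, 1, 2]
d = cohens_d(therapy_changes, deferral_changes)
```

    In the trial the inputs would be each patient's pre-to-post change on the ANELT A-scale for the intensive-therapy and deferral groups, respectively.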

  18. Design and Analysis of AN Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. Their objective is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, the prerequisite for such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of the real aircraft are both uncertain. The stiffness distribution of the structure can be calculated via finite element modeling and simulation, and F141 steel and rigid foam are used to make the elastic model. The design and manufacturing process of static aeroelastic models is presented: a set of experimental models was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design work of the elastic model. This paper introduces the whole process of the static aeroelastic experiment and analyzes the experimental results. A static aeroelasticity experiment technique was developed and an experiment model was established targeting the swept wing of a certain kind of large-aspect-ratio aircraft.

  19. INDOOR AIR QUALITY MODELING (CHAPTER 58)

    EPA Science Inventory

    The chapter discusses indoor air quality (IAQ) modeling. Such modeling provides a way to investigate many IAQ problems without the expense of large field experiments. Where experiments are planned, IAQ models can be used to help design experiments by providing information on exp...

  20. The Hot Serial Cereal Experiment for modeling wheat response to temperature: field experiments and AgMIP-Wheat multi-model simulations

    USDA-ARS?s Scientific Manuscript database

    The data set reported here includes part of the Hot Serial Cereal (HSC) experiment recently used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat models and quantify their response to temperature. The HSC experiment was conducted in an open-field in a semiarid environme...

  1. Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution

    NASA Astrophysics Data System (ADS)

    Holmes, Caroline M.; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya

    2017-10-01

    We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.
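    The variance-to-mean ("jackpot") signal that separates the two pure models, which the Bayesian model selection above formalizes, can be illustrated with a toy simulation. Parameter values are illustrative, not the 1943 data, and the Darwinian model below is a simple jackpot sketch (a mutant arising g generations before plating leaves 2**g descendants) that ignores back-mutation and fitness differences:

```python
import numpy as np

rng = np.random.default_rng(1)

N_FINAL = 2 ** 20   # cells per culture at plating (illustrative)
MU = 2e-6           # mutation probability per cell division (illustrative)
CULTURES = 2000
G = 20              # doublings from a single founder to N_FINAL

def lamarckian_counts():
    # induced (Lamarckian) mutation: every plated cell mutates independently,
    # so mutant counts across cultures are Poisson (variance equals mean)
    return rng.poisson(MU * N_FINAL, size=CULTURES)

def darwinian_counts():
    # spontaneous mutation during growth: a mutation at generation g
    # leaves 2**(G - g) descendants, producing rare "jackpot" cultures
    counts = np.zeros(CULTURES)
    for g in range(G):
        new_mutants = rng.poisson(MU * 2 ** g, size=CULTURES)
        counts += new_mutants * 2 ** (G - g)
    return counts

def fano(x):
    # variance-to-mean ratio; ~1 for Poisson, much larger for jackpots
    return x.var(ddof=1) / x.mean()

f_lamarck = fano(lamarckian_counts())
f_darwin = fano(darwinian_counts())
```

    The pure-Lamarckian Fano factor stays near 1 while the Darwinian one is far larger; a full analysis along the lines of the paper would compute likelihoods of the observed counts under each model (including the combined one) and compare them by Bayes factors.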

  2. Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution.

    PubMed

    Holmes, Caroline M; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya

    2017-08-21

    We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.

  3. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment

    NASA Astrophysics Data System (ADS)

    De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.

    2017-12-01

    Multifactor experiments are often advocated as important for advancing models, yet to date, such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions: comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, in nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  4. Modeling Human Serum Albumin Tertiary Structure to Teach Upper-Division Chemistry Students Bioinformatics and Homology Modeling Basics

    ERIC Educational Resources Information Center

    Petrović, Dušan; Zlatović, Mario

    2015-01-01

    A homology modeling laboratory experiment has been developed for an introductory molecular modeling course for upper-division undergraduate chemistry students. With this experiment, students gain practical experience in homology model preparation and assessment as well as in protein visualization using the educational version of PyMOL…

  5. Mechanical testing of bones: the positive synergy of finite-element models and in vitro experiments.

    PubMed

    Cristofolini, Luca; Schileo, Enrico; Juszczyk, Mateusz; Taddei, Fulvia; Martelli, Saulo; Viceconti, Marco

    2010-06-13

    Bone biomechanics have been extensively investigated in the past both with in vitro experiments and numerical models. In most cases either approach is chosen, without exploiting synergies. Both experiments and numerical models suffer from limitations relative to their accuracy and their respective fields of application. In vitro experiments can improve numerical models by: (i) preliminarily identifying the most relevant failure scenarios; (ii) improving the model identification with experimentally measured material properties; (iii) improving the model identification with accurately measured actual boundary conditions; and (iv) providing quantitative validation based on mechanical properties (strain, displacements) directly measured from physical specimens being tested in parallel with the modelling activity. Likewise, numerical models can improve in vitro experiments by: (i) identifying the most relevant loading configurations among a number of motor tasks that cannot be replicated in vitro; (ii) identifying acceptable simplifications for the in vitro simulation; (iii) optimizing the use of transducers to minimize errors and provide measurements at the most relevant locations; and (iv) exploring a variety of different conditions (material properties, interface, etc.) that would require enormous experimental effort. By reporting an example of successful investigation of the femur, we show how a combination of numerical modelling and controlled experiments within the same research team can be designed to create a virtuous circle where models are used to improve experiments, experiments are used to improve models and their combination synergistically provides more detailed and more reliable results than can be achieved with either approach singularly.

  6. User's instructions for the GE cardiovascular model to simulate LBNP and tilt experiments, with graphic capabilities

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The present form of this cardiovascular model simulates both 1-g and zero-g LBNP (lower body negative pressure) experiments and tilt experiments. In addition, the model simulates LBNP experiments at any body angle. The model is currently accessible on the Univac 1110 Time-Shared System in an interactive operational mode. Model output may be in tabular form and/or graphic form. The graphic capabilities are programmed for the Tektronix 4010 graphics terminal and the Univac 1110.

  7. Modeling and Depletion Simulations for a High Flux Isotope Reactor Cycle with a Representative Experiment Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandler, David; Betzler, Ben; Hirtz, Gregory John

    2016-09-01

    The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on time-dependent isotopic composition of irradiated fuel and single cycle isotope production metrics.

  8. Extended experience benefits spatial mental model development with route but not survey descriptions.

    PubMed

    Brunyé, Tad T; Taylor, Holly A

    2008-02-01

    Spatial descriptions symbolically represent environmental information through language and are written in two primary perspectives: survey, analogous to viewing a map, and route, analogous to navigation. Readers of survey or route descriptions form abstracted perspective flexible representations of the described environment, or spatial mental models. The present two experiments investigated the maintenance of perspective in spatial mental models as a function of description perspective and experience (operationalized through repetition), and as reflected in self-paced reading times. Experiment 1 involved studying survey and route descriptions either once or three times, then completing map drawing and true/false statement verification. Results demonstrated that spatial mental models are readily formed with survey descriptions, but require relatively more experience with route descriptions; further, some limited evidence suggests perspective dependence in spatial mental models, even following extended experience. Experiment 2 measured self-paced reading during three successive description presentations. Average reading times over the three presentations reduced more for survey relative to route descriptions, and there was no evidence for perspective specificity in resulting spatial mental models. This supports Experiment 1 findings demonstrating the relatively time-consuming nature of acquiring spatial mental models from route, but not survey descriptions. Results are discussed with regard to developmental, discourse processing, and spatial mental model theory.

  9. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.

  10. Piecewise exponential models to assess the influence of job-specific experience on the hazard of acute injury for hourly factory workers

    PubMed Central

    2013-01-01

    Background: An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods: Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results: We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions: Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
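
A two-piece exponential model of the kind described here has a closed-form maximum-likelihood fit: within each experience interval, the hazard estimate is the number of injuries divided by the person-time accrued in that interval. A minimal sketch, with an assumed one-year breakpoint and invented function names (not the authors' code):

```python
def two_piece_mle(times, events, tau=1.0):
    """MLE hazard rates before and after a breakpoint tau (in years of
    job-specific experience). For a piecewise-constant hazard, the MLE in
    each interval is (events in interval) / (person-time in interval)."""
    d1 = e1 = d2 = e2 = 0.0
    for t, d in zip(times, events):
        e1 += min(t, tau)           # exposure accrued before tau
        e2 += max(t - tau, 0.0)     # exposure accrued after tau
        if d:                       # d == 1 marks an injury at time t
            if t < tau:
                d1 += 1
            else:
                d2 += 1
    lam1 = d1 / e1 if e1 else float("nan")
    lam2 = d2 / e2 if e2 else float("nan")
    return lam1, lam2
```

Under the paper's finding of a roughly 30% higher hazard in the first year after job initiation or change, the fitted `lam1` would exceed `lam2` by about that factor.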

  11. A Computational Model for Observation in Quantum Mechanics.

    DTIC Science & Technology

    1987-03-16

    Table-of-contents fragments from the report: Interferometer experiment (2.2); The EPR Paradox experiment (2.3); The Computational Model, an Overview (3); Implementation (4); Code for the EPR paradox experiment (4.4); Code for the double slit interferometer experiment (4.5); Conclusions (5). A text snippet adds: "…particle run counter to fact. The EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden…"

  12. Luria-Delbrück Revisited: The Classic Experiment Doesn't Rule out Lamarckian Evolution

    NASA Astrophysics Data System (ADS)

    Holmes, Caroline; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya

    We re-examine data from the classic 1943 Luria-Delbrück fluctuation experiment. This experiment is often credited with establishing that phage resistance in bacteria is acquired through a Darwinian mechanism (natural selection on standing variation) rather than through a Lamarckian mechanism (environmentally induced mutations). We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms. Analysis of the combined model was not performed in the 1943 paper, nor was analysis of the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that: 1) all datasets from the paper favor Darwinian over purely Lamarckian evolution, 2) some of the datasets are unable to distinguish between the purely Darwinian and the combined models, and 3) the other datasets cannot be explained by any of the models considered. In summary, the classic experiment cannot rule out Lamarckian contributions to the evolutionary dynamics. This work was supported by National Science Foundation Grant 1410978, NIH training Grant 5R90DA033462, and James S. McDonnell Foundation Grant 220020321.
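
The intuition behind the fluctuation test can be illustrated with a toy simulation: under a purely Lamarckian mechanism, mutant counts across parallel cultures are Poisson-distributed (variance roughly equals the mean), whereas under a Darwinian mechanism early mutations produce "jackpot" cultures that inflate the variance. A rough sketch with simplified, invented growth and mutation parameters (not those of the 1943 experiment or of this paper's Bayesian analysis):

```python
import math, random

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def darwinian_culture(generations=15, mu=1e-4):
    """Mutations arise during growth, so early ones create 'jackpots'."""
    cells, mutants = 1, 0
    for _ in range(generations):
        new = poisson(mu * cells)   # mutations among this generation's divisions
        cells *= 2
        mutants = min(mutants * 2 + new, cells)  # existing mutants double too
    return mutants

def lamarckian_culture(generations=15, mu_induced=1e-3):
    """Mutations induced only at phage exposure: Poisson across cultures."""
    return poisson(mu_induced * 2 ** generations)

def var_over_mean(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m if m else float("nan")
```

Over many simulated cultures, `var_over_mean` stays near 1 for the Lamarckian sampler but grows much larger for the Darwinian one; distinguishing the combined model, as the abstract notes, requires formal model selection rather than this variance heuristic alone.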

  13. A Design for Composing and Extending Vehicle Models

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.; Neuhaus, Jason R.

    2003-01-01

    The Systems Development Branch (SDB) at NASA Langley Research Center (LaRC) creates simulation software products for research. Each product consists of an aircraft model with experiment extensions. SDB treats its aircraft models as reusable components, upon which experiments can be built. SDB has evolved aircraft model design with the following goals: 1. Avoid polluting the aircraft model with experiment code. 2. Discourage the copy and tailor method of reuse. The current evolution of that architecture accomplishes these goals by reducing experiment creation to extend and compose. The architecture mechanizes the operational concerns of the model's subsystems and encapsulates them in an interface inherited by all subsystems. Generic operational code exercises the subsystems through the shared interface. An experiment is thus defined by the collection of subsystems that it creates ("compose"). Teams can modify the aircraft subsystems for the experiment using inheritance and polymorphism to create variants ("extend").
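
The "extend and compose" design described above can be sketched in a few lines: subsystems inherit a shared interface, generic operational code drives them through that interface, and an experiment is just the set of subsystem variants it instantiates. All class names below are invented for illustration and do not represent SDB's actual code:

```python
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Interface inherited by all subsystems; generic operational code
    exercises every subsystem only through this shared interface."""
    @abstractmethod
    def update(self, dt: float) -> None: ...

class Engine(Subsystem):
    """A reusable aircraft-model subsystem, free of experiment code."""
    def __init__(self):
        self.elapsed = 0.0
    def update(self, dt):
        self.elapsed += dt          # stand-in for real engine dynamics

class BoostedEngine(Engine):
    """'Extend': an experiment-specific variant created via inheritance,
    keeping the modification out of the shared aircraft model."""
    def __init__(self):
        super().__init__()
        self.boost_calls = 0
    def update(self, dt):
        super().update(dt)
        self.boost_calls += 1       # experiment-only behaviour

class Experiment:
    """'Compose': an experiment is defined by the subsystems it creates."""
    def __init__(self, subsystems):
        self.subsystems = list(subsystems)
    def step(self, dt):
        for s in self.subsystems:   # generic loop; no experiment branching
            s.update(dt)
```

Because the driver loop touches only the `Subsystem` interface, swapping `Engine` for `BoostedEngine` changes the experiment without any change to the aircraft model or the operational code.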

  14. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which the results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated from published nonlinear relations between transverse dispersion coefficients and the Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
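
The continuum-scale workflow mentioned above, estimating a dispersivity input from a nonlinear Peclet-number relation, can be sketched as follows. The power-law form and the coefficients `a`, `b`, `m` are placeholders for illustration, not the published correlation the authors used:

```python
def peclet(velocity, grain_diameter, d_m):
    """Grain Peclet number Pe = v * d / D_m (D_m: molecular diffusion)."""
    return velocity * grain_diameter / d_m

def transverse_dispersion(pe, d_m, a=0.5, b=0.05, m=1.2):
    """Placeholder nonlinear relation D_T / D_m = a + b * Pe**m."""
    return d_m * (a + b * pe ** m)

def dispersivity(d_t, velocity):
    """Continuum-model input: alpha_T = D_T / v."""
    return d_t / velocity
```

Underestimating `alpha_T` in this last step is exactly the failure mode the comparison exposed: the continuum simulation then produces less transverse spreading than the micromodel data show.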

  15. Design of the MISMIP+, ISOMIP+, and MISOMIP ice-sheet, ocean, and coupled ice sheet-ocean intercomparison projects

    NASA Astrophysics Data System (ADS)

    Asay-Davis, Xylar; Cornford, Stephen; Martin, Daniel; Gudmundsson, Hilmar; Holland, David; Holland, Denise

    2015-04-01

    The MISMIP and MISMIP3D marine ice sheet model intercomparison exercises have become popular benchmarks, and several modeling groups have used them to show how their models compare to both analytical results and other models. Similarly, the ISOMIP (Ice Shelf-Ocean Model Intercomparison Project) experiments have acted as a proving ground for ocean models with sub-ice-shelf cavities. As coupled ice sheet-ocean models become available, an updated set of benchmark experiments is needed. To this end, we propose sequel experiments, MISMIP+ and ISOMIP+, with an end goal of coupling the two in a third intercomparison exercise, MISOMIP (the Marine Ice Sheet-Ocean Model Intercomparison Project). Like MISMIP3D, the MISMIP+ experiments take place in an idealized, three-dimensional setting and compare full 3D (Stokes) and reduced, hydrostatic models. Unlike the earlier exercises, the primary focus will be the response of models to sub-shelf melting. The chosen configuration features an ice shelf that experiences substantial lateral shear and buttresses the upstream ice, and so is well suited to melting experiments. Differences between the steady states of each model are minor compared to the response to melt-rate perturbations, reflecting typical real-world applications where parameters are chosen so that the initial states of all models tend to match observations. The three ISOMIP+ experiments have been designed to use the same bedrock topography as MISMIP+ and ice-shelf geometries taken from MISMIP+ results produced by the BISICLES ice-sheet model. The first two experiments use static ice-shelf geometries to simulate the evolution of ocean dynamics and resulting melt rates toward a quasi-steady state as the far-field forcing changes either from cold to warm or from warm to cold.
The third experiment prescribes 200 years of dynamic ice-shelf geometry (with both retreating and advancing ice) based on a BISICLES simulation along with similar flips between warm and cold states in the far-field ocean forcing. The MISOMIP experiment combines the MISMIP+ experiments with the third ISOMIP+ experiment. Changes in far-field ocean forcing lead to a rapid (over ~1-2 years) increase in sub-ice-shelf melting, which is allowed to drive ice-shelf retreat for ~100 years. Then, the far-field forcing is switched to a cold state, leading to a rapid decrease in melting and a subsequent advance over ~100 years. To illustrate, we present results from BISICLES and POP2x experiments for each of the three intercomparison exercises.

  16. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    PubMed

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
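
The anatomy of a SED-ML time-course description (models, simulations, tasks) can be sketched programmatically. The element and attribute names below follow the Level 1 Version 1 structure as summarized in the abstract, but this is a schematic rather than a valid document: a real SED-ML file also carries the SED-ML XML namespace, an algorithm annotation, data generators, and outputs, so consult the specification for the exact format:

```python
import xml.etree.ElementTree as ET

# Root of a SED-ML-like document (namespace omitted for this sketch).
root = ET.Element("sedML", level="1", version="1")

# Which model to use in the experiment.
models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="m1",
              language="urn:sedml:language:sbml", source="model.xml")

# Which simulation procedure to run (a uniform time course).
sims = ET.SubElement(root, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="s1",
              initialTime="0", outputStartTime="0",
              outputEndTime="100", numberOfPoints="1000")

# A task binds a model to a simulation.
tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="t1",
              modelReference="m1", simulationReference="s1")

doc = ET.tostring(root, encoding="unicode")
```

The key design point the abstract highlights is visible even in this skeleton: the experiment description references the model file (`source="model.xml"`) rather than embedding it, keeping the protocol independent of the underlying model implementation.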

  17. A call for virtual experiments: accelerating the scientific process.

    PubMed

    Cooper, Jonathan; Vik, Jon Olav; Waltemath, Dagmar

    2015-01-01

    Experimentation is fundamental to the scientific method, whether for exploration, description or explanation. We argue that promoting the reuse of virtual experiments (the in silico analogues of wet-lab or field experiments) would vastly improve the usefulness and relevance of computational models, encouraging critical scrutiny of models and serving as a common language between modellers and experimentalists. We review the benefits of reusable virtual experiments: in specifying, assaying, and comparing the behavioural repertoires of models; as prerequisites for reproducible research; to guide model reuse and composition; and for quality assurance in the translational application of models. A key step towards achieving this is that models and experimental protocols should be represented separately, but annotated so as to facilitate the linking of models to experiments and data. Lastly, we outline how the rigorous, streamlined confrontation between experimental datasets and candidate models would enable a "continuous integration" of biological knowledge, transforming our approach to systems biology. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    PubMed

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as an alternative teaching method to dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise yields greater anatomical knowledge gain than studying a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.

  19. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment.

    PubMed

    De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J

    2017-09-01

    Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N cycle models, N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change. © 2017 John Wiley & Sons Ltd.

  20. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 enrichment experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  1. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 enrichment experiment

    DOE PAGES

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.; ...

    2017-02-01

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  2. Reflecting on the challenges of building a rich interconnected metadata database to describe the experiments of phase six of the coupled climate model intercomparison project (CMIP6) for the Earth System Documentation Project (ES-DOC) and anticipating the opportunities that tooling and services based on rich metadata can provide.

    NASA Astrophysics Data System (ADS)

    Pascoe, C. L.

    2017-12-01

    The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments, and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs), making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables, and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets, which are archived in the ES-DOC Github repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used by both data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment.
End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".
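
Queries of the kind listed above become straightforward once experiment metadata is interconnected. A toy sketch with invented records (real ES-DOC/CIM documents are far richer and are served through the ES-DOC tooling, not plain dictionaries):

```python
# Minimal stand-in for a CIM-style experiment database: each experiment
# record points at the MIPs that use it and the forcing constraints it
# applies. All names below are illustrative.
experiments = {
    "historical": {"mips": ["CMIP", "DAMIP"],
                   "forcings": ["historical-GHG", "historical-aerosol"]},
    "ssp585": {"mips": ["ScenarioMIP"],
               "forcings": ["ssp585-GHG"]},
    "hist-GHG": {"mips": ["DAMIP"],
                 "forcings": ["historical-GHG"]},
}

def mips_using(experiment):
    """Which MIPs make use of experiment A?"""
    return sorted(experiments[experiment]["mips"])

def experiments_with_forcing(forcing):
    """Which experiments use forcing constraint B?"""
    return sorted(name for name, rec in experiments.items()
                  if forcing in rec["forcings"])
```

Because a forcing constraint is a shared, referenced entity rather than text repeated in each experiment, the reverse lookup (`experiments_with_forcing`) needs no string matching, which is the interconnectedness benefit the abstract describes.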

  3. The effects of corona on current surges induced on conducting lines by EMP: A comparison of experiment data with results of analytic corona models

    NASA Astrophysics Data System (ADS)

    Blanchard, J. P.; Tesche, F. M.; McConnell, B. W.

    1987-09-01

    An experiment to determine the interaction of an intense electromagnetic pulse (EMP), such as that produced by a nuclear detonation above the Earth's atmosphere, with conducting lines was performed in March 1986 at Kirtland Air Force Base near Albuquerque, New Mexico. The results of that experiment have been published without analysis. Following an introduction of the corona phenomenon, the reason for interest in it, and a review of the experiment, this paper discusses five different analytic models that may be used to model corona formation on a conducting line subjected to EMP. The results predicted by these models are compared with measured data acquired during the experiment to determine the strengths and weaknesses of each model.

  4. Do prevailing societal models influence reports of near-death experiences?: a comparison of accounts reported before and after 1975.

    PubMed

    Athappilly, Geena K; Greyson, Bruce; Stevenson, Ian

    2006-03-01

    Transcendental near-death experiences show some cross-cultural variation that suggests they may be influenced by societal beliefs. The prevailing Western model of near-death experiences was defined by Moody's description of the phenomenon in 1975. To explore the influence of this cultural model, we compared near-death experience accounts collected before and after 1975. We compared the frequency of 15 phenomenological features Moody defined as characteristic of near-death experiences in 24 accounts collected before 1975 and in 24 more recent accounts matched on relevant demographic and situational variables. Near-death experience accounts collected after 1975 differed from those collected earlier only in increased frequency of tunnel phenomena, which other research has suggested may not be integral to the experience, and not in any of the remaining 14 features defined by Moody as characteristic of near-death experiences. These data challenge the hypothesis that near-death experience accounts are substantially influenced by prevailing cultural models.

  5. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience from a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results, and the lessons learned from the experience.

  6. Modeling a High Explosive Cylinder Experiment

    NASA Astrophysics Data System (ADS)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum-level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk is the modeling of the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  7. EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010

    NASA Astrophysics Data System (ADS)

    Colette, Augustin; Andersson, Camilla; Manders, Astrid; Mar, Kathleen; Mircea, Mihaela; Pay, Maria-Teresa; Raffort, Valentin; Tsyro, Svetlana; Cuvelier, Cornelius; Adani, Mario; Bessagnet, Bertrand; Bergström, Robert; Briganti, Gino; Butler, Tim; Cappelletti, Andrea; Couvidat, Florian; D'Isidoro, Massimo; Doumbia, Thierno; Fagerli, Hilde; Granier, Claire; Heyes, Chris; Klimont, Zig; Ojha, Narendra; Otero, Noelia; Schaap, Martijn; Sindelarova, Katarina; Stegehuis, Annemiek I.; Roustan, Yelva; Vautard, Robert; van Meijgaard, Erik; Garcia Vivanco, Marta; Wind, Peter

    2017-09-01

    The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have - to date - completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990-2010 time period, (iii) attribution analyses of the respective role of driving factors (e.g. 
emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model results for use in impact studies related to human health, ecosystem, and radiative forcing.

  8. CFD validation experiments at McDonnell Aircraft Company

    NASA Technical Reports Server (NTRS)

    Verhoff, August

    1987-01-01

    Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at McDonnell Aircraft Company. Topics covered include a high speed research model, a supersonic persistence fighter model, a generic fighter wing model, surface grids, force and moment predictions, surface pressure predictions, forebody models with 65 degree clipped delta wings, and the low aspect ratio wing/body experiment.

  9. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model for measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
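
    The validation approach described above, correlating model-predicted clutter scores against mean subjective ratings, can be sketched as follows. The data values below are hypothetical, and the C3 model itself is not reproduced here; only the correlation step is illustrated.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between model predictions and subjective ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical C3 scores and mean subjective clutter ratings for six displays
c3_scores = [0.12, 0.35, 0.48, 0.60, 0.71, 0.90]
ratings = [1.0, 2.5, 2.9, 4.1, 4.0, 5.8]
print(round(pearson_r(c3_scores, ratings), 3))
```

    A correlation near the reported .76-.86 range would indicate comparable convergent validity; in practice the published studies also tested the significance of the difference between competing models' correlations.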

  10. Analysis and Modeling of Multistatic Clutter and Reverberation and Support for the FORA

    DTIC Science & Technology

    2015-09-30

    experiments, the 2014 Nordic Seas experiment. The PI’s technical objectives for the experiment are to characterize and model multistatic bottom clutter... Nordic Seas experiments, as well as other efforts as directed by ONR-OA. APPROACH There is a 6-year ONR OA plan for three large experiments

  11. Do STEM fields need a makeover?: The effect of role model femininity on men and women's interest in STEM

    NASA Astrophysics Data System (ADS)

    Howard, Carolynn

    Women continue to be underrepresented in science, technology, engineering, and mathematics (STEM) fields. This lack of women is problematic because it diminishes perspective, input, and expertise that women could provide. Consequently, this thesis examined the benefits of exposure to peer role models for increasing women's interest in STEM, which may ultimately lead more women to enter STEM fields. The role model research to date has amassed considerable evidence showing that role model exposure is beneficial; yet, questions still remain about what makes these role models effective. Accordingly, this thesis investigated whether feminine female role models increase women's interest in STEM and improve their perceptions of female STEM role models relative to "neutral" female role models. Across three experiments men and women were exposed to role models and their interest in STEM was measured. All experiments exposed participants to one of three articles about a peer role model (a female role model who embodies femininity (e.g. wears makeup), a female role model who has gender neutral qualities/behaviors [e.g., works hard], or a male role model who embodies neutral traits) and Experiments 2 and 3 had a fourth control condition in which participants read about the history of SDSU (a control condition). In the first two experiments interest in physics was measured using an adapted version of the STEM Career Interest Survey (CIS). Experiment 3 used an adapted version of the STEM CIS scale, but measured overall interest in STEM by including subscales for each of the four STEM areas with a composite score serving as the primary dependent variable. Experiments 1 and 2 demonstrated that women's interest in physics was no different than men's after exposure to a feminine female role model compared to a neutral female and neutral male role model. 
Furthermore, women's interest in physics was greater in the feminine condition compared to all other conditions for the first two experiments, thus demonstrating that a competent feminine role model may be useful in piquing women's interest in physics. Experiment 3, however, did not display this pattern for women's interest in STEM overall.

  12. Automatic reactor model synthesis with genetic programming.

    PubMed

    Dürrenmatt, David J; Gujer, Willi

    2012-01-01

    Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.
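
    The building-block idea described above, composing hydraulic reactor models from units such as continuous stirred-tank reactors (CSTRs) and evaluating them with a virtual tracer experiment, can be sketched minimally. This is not the authors' grammar-based genetic programming system; it only illustrates one candidate model (tanks in series) and the virtual tracer test used to score such candidates. All parameter values are illustrative.

```python
def cstr_series_step(conc, q, volumes, c_in, dt):
    """Advance a chain of CSTRs one time step (explicit Euler).
    conc: current tracer concentration in each tank, q: flow rate,
    volumes: tank volumes, c_in: influent tracer concentration."""
    new = []
    upstream = c_in
    for c, v in zip(conc, volumes):
        c_next = c + dt * q / v * (upstream - c)
        new.append(c_next)
        upstream = c_next  # effluent of this tank feeds the next
    return new

# Virtual tracer experiment: short tracer pulse into three tanks in series
volumes = [10.0, 20.0, 10.0]   # m^3 (illustrative candidate model)
q, dt = 5.0, 0.01              # m^3/h, h
conc = [0.0, 0.0, 0.0]
effluent = []
for step in range(2000):
    c_in = 100.0 if step < 10 else 0.0   # pulse during the first steps
    conc = cstr_series_step(conc, q, volumes, c_in, dt)
    effluent.append(conc[-1])
print(max(effluent) > 0.0)  # → True: the pulse reaches the effluent
```

    A genetic program would generate many such reactor configurations, simulate each against measured influent/effluent data, and retain the structures whose virtual tracer responses best match observations.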

  13. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    PubMed Central

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142
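
    The structure of a SED-ML document as summarized above (which models to use, which simulation procedures to run, and which tasks bind them together) can be sketched programmatically. The element and attribute names below follow the published SED-ML Level 1 Version 1 specification in spirit, but this skeleton is illustrative only and is not claimed to be schema-valid.

```python
import xml.etree.ElementTree as ET

# Minimal skeleton of a simulation experiment description in the spirit
# of SED-ML Level 1 Version 1: one model, one time-course simulation,
# and one task linking them.
sed = ET.Element("sedML", {"level": "1", "version": "1"})
models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", {"id": "model1", "source": "model1.xml"})
sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", {
    "id": "sim1", "initialTime": "0", "outputStartTime": "0",
    "outputEndTime": "100", "numberOfPoints": "1000"})
tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", {"id": "task1",
    "modelReference": "model1", "simulationReference": "sim1"})
print(ET.tostring(sed, encoding="unicode"))
```

    A conforming document would also carry namespace declarations, model language attributes, data generators, and outputs; the point here is only that the experiment description references the model rather than embedding its implementation.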

  14. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences, we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  15. Logical fallacies in animal model research.

    PubMed

    Sjoberg, Espen A

    2017-02-15

    Animal models of human behavioural deficits involve conducting experiments on animals with the hope of gaining new knowledge that can be applied to humans. This paper aims to address risks, biases, and fallacies associated with drawing conclusions when conducting experiments on animals, with focus on animal models of mental illness. Researchers using animal models are susceptible to a fallacy known as false analogy, where inferences based on assumptions of similarities between animals and humans can potentially lead to an incorrect conclusion. There is also a risk of false positive results when evaluating the validity of a putative animal model, particularly if the experiment is not conducted double-blind. It is further argued that animal model experiments are reconstructions of human experiments, and not replications per se, because the animals cannot follow instructions. This leads to an experimental setup that is altered to accommodate the animals, and typically involves a smaller sample size than a human experiment. Researchers on animal models of human behaviour should increase focus on mechanistic validity in order to ensure that the underlying causal mechanisms driving the behaviour are the same, as relying on face validity makes the model susceptible to logical fallacies and a higher risk of Type 1 errors. We discuss measures to reduce bias and risk of making logical fallacies in animal research, and provide a guideline that researchers can follow to increase the rigour of their experiments.

  16. From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.

    PubMed

    Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T

    2018-01-01

    Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. 
Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.
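
    The core assumption of the Allometric Trophic Network model referenced above, that biological rates scale with consumer body mass, can be illustrated with a minimal consumer-resource sketch. The quarter-power scaling is the standard ATN assumption; the specific parameter values and the simplified Holling type II functional response below are illustrative, not those of the authors' extended model.

```python
def allometric_rate(body_mass, rate_const=0.314, exponent=-0.25):
    """Mass-specific biological rate, assumed to scale as M^(-1/4)."""
    return rate_const * body_mass ** exponent

def step(resource, consumer, m_consumer, dt=0.01):
    """One Euler step of a simplified consumer-resource ATN-style model."""
    x = allometric_rate(m_consumer)      # consumer metabolic rate
    growth = resource * (1 - resource)   # logistic resource growth (r = K = 1)
    feeding = x * consumer * resource / (0.5 + resource)  # Holling type II
    dR = growth - feeding
    dC = 0.85 * feeding - x * consumer   # assimilation efficiency 0.85
    return resource + dt * dR, consumer + dt * dC

R, C = 0.5, 0.3
for _ in range(1000):
    R, C = step(R, C, m_consumer=10.0)
print(R > 0 and C > 0)  # → True: both populations remain positive
```

    The hypothesis tested in the work above is precisely that terms like these, parameterized by body mass alone, are insufficient, and that additional terms for micro-habitat overlap and interference are needed to capture observed trophic interactions.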

  17. Simulations, Imaging, and Modeling: A Unique Theme for an Undergraduate Research Program in Biomechanics.

    PubMed

    George, Stephanie M; Domire, Zachary J

    2017-07-01

    As the reliance on computational models to inform experiments and evaluate medical devices grows, the demand for students with modeling experience will grow. In this paper, we report on the 3-yr experience of a National Science Foundation (NSF) funded Research Experiences for Undergraduates (REU) based on the theme simulations, imaging, and modeling in biomechanics. While directly applicable to REU sites, our findings also apply to those creating other types of summer undergraduate research programs. The objective of the paper is to examine if a theme of simulations, imaging, and modeling will improve students' understanding of the important topic of modeling, provide an overall positive research experience, and provide an interdisciplinary experience. The structure of the program and the evaluation plan are described. We report on the results from 25 students over three summers from 2014 to 2016. Overall, students reported significant gains in the knowledge of modeling, research process, and graduate school based on self-reported mastery levels and open-ended qualitative responses. This theme provides students with a skill set that is adaptable to other applications illustrating the interdisciplinary nature of modeling in biomechanics. Another advantage is that students may also be able to continue working on their project following the summer experience through network connections. In conclusion, we have described the successful implementation of the theme simulation, imaging, and modeling for an REU site and the overall positive response of the student participants.

  18. Modeling the Interplay Between Psychological Processes and Adverse, Stressful Contexts and Experiences in Pathways to Psychosis: An Experience Sampling Study

    PubMed Central

    Klippel, Annelie; Myin-Germeys, Inez; Chavez-Baldini, UnYoung; Preacher, Kristopher J.; Kempton, Matthew; Valmaggia, Lucia; Calem, Maria; So, Suzanne; Beards, Stephanie; Hubbard, Kathryn; Gayer-Anderson, Charlotte; Onyejiaka, Adanna; Wichers, Marieke; McGuire, Philip; Murray, Robin; Garety, Philippa; van Os, Jim; Wykes, Til; Morgan, Craig

    2017-01-01

    Several integrated models of psychosis have implicated adverse, stressful contexts and experiences, and affective and cognitive processes in the onset of psychosis. In these models, the effects of stress are posited to contribute to the development of psychotic experiences via pathways through affective disturbance, cognitive biases, and anomalous experiences. However, attempts to systematically test comprehensive models of these pathways remain sparse. Using the Experience Sampling Method in 51 individuals with first-episode psychosis (FEP), 46 individuals with an at-risk mental state (ARMS) for psychosis, and 53 controls, we investigated how stress, enhanced threat anticipation, and experiences of aberrant salience combine to increase the intensity of psychotic experiences. We fitted multilevel moderated mediation models to investigate indirect effects across these groups. We found that the effects of stress on psychotic experiences were mediated via pathways through affective disturbance in all 3 groups. The effect of stress on psychotic experiences was mediated by threat anticipation in FEP individuals and controls but not in ARMS individuals. There was only weak evidence of mediation via aberrant salience. However, aberrant salience retained a substantial direct effect on psychotic experiences, independently of stress, in all 3 groups. Our findings provide novel insights on the role of affective disturbance and threat anticipation in pathways through which stress impacts on the formation of psychotic experiences across different stages of early psychosis in daily life. PMID:28204708

  19. Virtual experiments, physical validation: dental morphology at the intersection of experiment and theory

    PubMed Central

    Anderson, P. S. L.; Rayfield, E. J.

    2012-01-01

    Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789

  20. Psychological distance reduces literal imitation: Evidence from an imitation-learning paradigm.

    PubMed

    Hansen, Jochim; Alves, Hans; Trope, Yaacov

    2016-03-01

    The present experiments tested the hypothesis that observers engage in more literal imitation of a model when the model is psychologically near to (vs. distant from) the observer. Participants learned to fold a dog out of towels by watching a model performing this task. Temporal (Experiment 1) and spatial (Experiment 2) distance from the model were manipulated. As predicted, participants copied more of the model's specific movements when the model was near (vs. distant). Experiment 3 replicated this finding with a paper-folding task, suggesting that distance from a model also affects imitation of less complex tasks. Perceived task difficulty, motivation, and the quality of the end product were not affected by distance. We interpret the findings as reflecting different levels of construal of the model's performance: When the model is psychologically distant, social learners focus more on the model's goal and devise their own means for achieving the goal, and as a result show less literal imitation of the model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Model Experiment of Two-Dimensional Brownian Motion by Microcomputer.

    ERIC Educational Resources Information Center

    Mishima, Nobuhiko; And Others

    1980-01-01

    Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)
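
    The kind of model experiment described above, a heavy particle displaced by many random collisions with thermal particles, can be sketched today in a few lines. The step length, step count, and fixed-step random-walk simplification are illustrative choices, not the original program.

```python
import math
import random

def brownian_path(n_steps, step_size=1.0, seed=42):
    """2D random walk: each 'collision' displaces the particle by a
    fixed step length in a uniformly random direction."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = rng.uniform(0, 2 * math.pi)
        x += step_size * math.cos(theta)
        y += step_size * math.sin(theta)
        path.append((x, y))
    return path

path = brownian_path(10000)
x, y = path[-1]
# For a random walk the mean squared displacement grows like n * step^2,
# so the final distance is typically of order sqrt(n) ~ 100 step lengths.
print(math.hypot(x, y))
```

    Plotting the path, or the growth of mean squared displacement with step count, reproduces the diffusive behavior the original microcomputer experiment was designed to let students explore.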

  2. The development of a Krook model for nonlocal transport in laser produced plasmas II. Comparisons with Fokker Planck, experiment and other models

    NASA Astrophysics Data System (ADS)

    Colombant, Denis; Manheimer, Wallace

    2008-11-01

    The Krook model described in the previous talk has been incorporated into a fluid simulation. These fluid simulations are then compared with Fokker Planck simulations and also with a recent NRL Nike experiment. We also examine several other models for electron energy transport that have been used in laser fusion research. As regards comparison with Fokker Planck simulation, the Krook model gives better agreement than the other models, especially in the time asymptotic limit. As regards the NRL experiment, all models except one give reasonable agreement.

  3. US/Canada wheat and barley crop calendar exploratory experiment implementation plan

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A plan is detailed for a supplemental experiment to evaluate several crop growth stage models and crop starter models. The objective of this experiment is to provide timely information to aid in understanding crop calendars and to provide data that will allow a selection between current crop calendar models.

  4. Numerical Model of Flame Spread Over Solids in Microgravity: A Supplementary Tool for Designing a Space Experiment

    NASA Technical Reports Server (NTRS)

    Shih, Hsin-Yi; Tien, James S.; Ferkul, Paul (Technical Monitor)

    2001-01-01

    The recently developed numerical model of concurrent-flow flame spread over thin solids has been used as a simulation tool to support the design of a space experiment. The two-dimensional and three-dimensional steady forms of the compressible Navier-Stokes equations with chemical reactions are solved. With the coupled multi-dimensional solver for radiative heat transfer, the model is capable of answering a number of questions regarding the experiment concept and the hardware design. In this paper, the capabilities of the numerical model are demonstrated by providing guidance on several experimental design issues. The test matrix and operating conditions of the experiment are estimated from the modeling results. Three-dimensional calculations are made to simulate the flame-spreading experiment with a realistic hardware configuration. The computed detailed flame structures provide insight to guide the data collection. In addition, the heating load and the requirements for product exhaust cleanup in the flow tunnel are estimated with the model. We anticipate that using this simulation tool will enable a more efficient and successful space experiment to be conducted.

  5. Toddlers' imitative learning in interactive and observational contexts: the role of age and familiarity of the model.

    PubMed

    Shimpi, Priya M; Akhtar, Nameera; Moore, Chris

    2013-10-01

    Three experiments examined the effects of age and familiarity of a model on toddlers' imitative learning in observational contexts (Experiments 1, 2, and 3) and interactive contexts (Experiments 2 and 3). Experiment 1 (N=112 18-month-old toddlers) varied the age (child vs. adult) and long-term familiarity (kin vs. stranger) of the person who modeled the novel actions. Experiment 2 (N=48 18-month-olds and 48 24-month-olds) and Experiment 3 (N=48 24-month-olds) varied short-term familiarity with the model (some or none) and learning context (interactive or observational). The most striking findings were that toddlers were able to learn a new action from observing completely unfamiliar strangers who did not address them and were far less likely to imitate an unfamiliar model who directly interacted with them. These studies highlight the robustness of toddlers' observational learning and reveal limitations of learning from unfamiliar models in interactive contexts. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    ERIC Educational Resources Information Center

    Supasorn, Saksri

    2015-01-01

    This study aimed to develop the small-scale experiments involving electrochemistry and the galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  7. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
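
    One common way to estimate the number of observer responses needed in such perception experiments is to size for a desired confidence-interval half-width on a task-performance probability. The normal-approximation formula below is standard; the target values are illustrative and are not taken from the NVESD guidelines themselves.

```python
from math import ceil

def required_trials(p, half_width, z=1.96):
    """Trials needed so a 95% normal-approximation confidence interval
    on a probability of task success has the requested half-width:
    n = z^2 * p * (1 - p) / E^2."""
    return ceil(z ** 2 * p * (1 - p) / half_width ** 2)

# Sizing a perception test: expected probability of identification ~0.7,
# desired precision of +/- 0.05
print(required_trials(0.7, 0.05))  # → 323
```

    A full test plan would refine this with power calculations for the specific comparisons of interest (e.g., detecting a given difference between two sensors) and inflate for subject-to-subject variability.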

  8. The 3D model of debriefing: defusing, discovering, and deepening.

    PubMed

    Zigmont, Jason J; Kappus, Liana J; Sudikoff, Stephanie N

    2011-04-01

    The experiential learning process involves participation in key experiences and analysis of those experiences. In health care, these experiences can occur through high-fidelity simulation or in the actual clinical setting. The most important component of this process is the postexperience analysis or debriefing. During the debriefing, individuals must reflect upon the experience, identify the mental models that led to behaviors or cognitive processes, and then build or enhance new mental models to be used in future experiences. On the basis of adult learning theory, the Kolb Experiential Learning Cycle, and the Learning Outcomes Model, we structured a framework for facilitators of debriefings entitled "the 3D Model of Debriefing: Defusing, Discovering, and Deepening." It incorporates common phases prevalent in the debriefing literature, including description of and reactions to the experience, analysis of behaviors, and application or synthesis of new knowledge into clinical practice. It can be used to enhance learning after real or simulated events. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. The Experimental Research on E-Learning Instructional Design Model Based on Cognitive Flexibility Theory

    NASA Astrophysics Data System (ADS)

    Cao, Xianzhong; Wang, Feng; Zheng, Zhongmei

    The paper reports an educational experiment on an e-Learning instructional design model based on Cognitive Flexibility Theory; the experiment was conducted to explore the feasibility and effectiveness of the model in promoting learning quality in ill-structured domains. The study performed the experiment on two groups of students: one group learned through a system designed with the model and the other learned by the traditional method. The results indicate that e-Learning designed through the model helps promote the students' intrinsic motivation, learning quality in ill-structured domains, ability to solve ill-structured problems, and creative thinking ability.

  10. A Simplified Model of ARIS for Optimal Controller Design

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)

    2001-01-01

    Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton, et al., used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton; they developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert, et al., added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification, without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that these three models all capture the significant system rigid-body dynamics, with the third being preferred due to its relative simplicity.

  11. Probabilistic Climate Scenario Information for Risk Assessment

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Takayabu, I.

    2014-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in Japan, we compared physics-ensemble experiments using the 60 km global atmospheric model of the Meteorological Research Institute (MRI-AGCM) with multi-model ensemble experiments using global atmosphere-ocean coupled models (CMIP3) under the SRES A1B scenario. The MRI-AGCM shows relatively good skill, particularly in the tropics, for temperature and geopotential height. Variability in surface air temperature across the physics-ensemble experiments with MRI-AGCM was within one standard deviation of the CMIP3 models in the Asia region; the variability of precipitation, on the other hand, was relatively well represented compared with the variation across the CMIP3 models. Models that show similar reproducibility of the present climate can show different future climate changes, and we could not find clear relationships between present climate and future climate change in temperature and precipitation. We developed a new method to produce probabilistic climate change scenario information by weighting model ensemble experiments with a regression model (Krishnamurti et al., Science, 1999). The method is easily applicable to other regions and other physical quantities, and to downscaling to finer scales, depending on the availability of observational datasets. The prototype of probabilistic information for Japan quantifies the structural uncertainties of multi-model ensemble experiments of climate change scenarios. Acknowledgments: This study was supported by the SOUSEI Program, funded by the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan.
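    The regression-based weighting the authors adopt from Krishnamurti et al. (the "superensemble" idea) amounts to fitting observations as a linear combination of ensemble members over a training period, then applying those weights to the forecast period. A minimal sketch with synthetic data; the member biases below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(15.0, 2.0, size=200)          # "observed" temperatures

# Three hypothetical ensemble members with different biases and noise
members = np.column_stack([
    truth + 1.5 + rng.normal(0, 0.5, 200),       # warm bias
    truth - 2.0 + rng.normal(0, 0.5, 200),       # cold bias
    0.8 * truth + rng.normal(0, 0.5, 200),       # damped variability
])

train = slice(0, 150)                            # training period
X = np.column_stack([members[train], np.ones(150)])
w, *_ = np.linalg.lstsq(X, truth[train], rcond=None)

test = slice(150, 200)                           # "forecast" period
weighted = np.column_stack([members[test], np.ones(50)]) @ w
plain_mean = members[test].mean(axis=1)

print("superensemble MAE:", np.abs(weighted - truth[test]).mean())
print("simple-mean MAE:  ", np.abs(plain_mean - truth[test]).mean())
```

The regression absorbs each member's systematic bias, which is why the weighted combination typically beats the unweighted ensemble mean out of sample.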

  12. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
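    The system-identification step described here — estimating a Frequency Response Function (FRF) from a pseudorandom input (support-surface rotation) to an output (body sway) — is commonly done with cross- and auto-spectral densities. A toy sketch, using a known static gain in place of real sway data so the estimate can be checked:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 100.0                                  # sample rate, Hz
rng = np.random.default_rng(2)
u = rng.standard_normal(6000)               # pseudorandom "platform rotation"
y = 0.5 * u                                 # toy response with static gain 0.5

f, Puy = csd(u, y, fs=fs, nperseg=1024)     # cross-spectral density
f, Puu = welch(u, fs=fs, nperseg=1024)      # input auto-spectral density
H = Puy / Puu                               # FRF estimate H(f) = Puy / Puu

print(np.abs(H[:5]))                        # gain ≈ 0.5 at every frequency
```

With real sway data the magnitude and phase of `H` vary with frequency, and the IC model's parameters are then fitted to that curve; here the flat gain of 0.5 simply confirms the estimator.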


  14. Evaluating Productivity Predictions Under Elevated CO2 Conditions: Multi-Model Benchmarking Across FACE Experiments

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2016-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences, we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  15. A Study of the Nature of Students' Models of Microscopic Processes in the Context of Modern Physics Experiments.

    ERIC Educational Resources Information Center

    Thacker, Beth Ann

    2003-01-01

    Interviews university students in modern physics about their understanding of three fundamental experiments. Explores their development of models of microscopic processes. Uses interactive demonstrations to probe student understanding of modern physics experiments in two high school physics classes. Analyzes the nature of students' models and the…

  16. The Lived Experience of Counselor Education Doctoral Students in the Cohort Model at Duquesne University

    ERIC Educational Resources Information Center

    Devine, Shirley

    2012-01-01

    This was a phenomenologically-oriented inquiry of the lived experiences of counselor education doctoral students in a cohort model. This inquiry sought to explore, describe, and understand students' "everyday" lived experiences in a cohort model in the Executive Doctoral Program in Counselor Education and Supervision (ExCES) at Duquesne…

  17. Anatomical Knowledge Gain through a Clay-Modeling Exercise Compared to Live and Video Observations

    ERIC Educational Resources Information Center

    Kooloos, Jan G. M.; Schepens-Franke, Annelieke N.; Bergman, Esther M.; Donders, Rogier A. R. T.; Vorstenbosch, Marc A. T. M.

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments,…

  18. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
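    As a rough illustration of points (i)–(iii), a SED-ML file is plain XML and can be generated with any XML library. The element and attribute names below follow the general shape of SED-ML Level 1 documents (models, a uniform time course, and a task binding them), but this is a hand-written sketch, not output validated against the L1V3 schema:

```python
import xml.etree.ElementTree as ET

# Skeleton of a SED-ML-style document: which model, which simulation,
# and the task that applies the simulation to the model.
root = ET.Element("sedML", level="1", version="3",
                  xmlns="http://sed-ml.org/sed-ml/level1/version3")

models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="model1",
              language="urn:sedml:language:sbml", source="oscillator.xml")

sims = ET.SubElement(root, "listOfSimulations")
course = ET.SubElement(sims, "uniformTimeCourse", id="sim1",
                       initialTime="0", outputStartTime="0",
                       outputEndTime="100", numberOfPoints="1000")
ET.SubElement(course, "algorithm", kisaoID="KISAO:0000019")  # CVODE

tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="task1", modelReference="model1",
              simulationReference="sim1")

print(ET.tostring(root, encoding="unicode"))
```

Data generators and outputs (points iv and v) attach in the same way; the point of the format is that this one file, plus the referenced model, is enough for another tool to rerun the experiment.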

  19. An experimental method to verify soil conservation by check dams on the Loess Plateau, China.

    PubMed

    Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q

    2009-12-01

    A successful experiment with a physical model requires appropriate similarity conditions. This study presents an experimental method using a semi-scale physical model to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant for each rainfall event. Consequently, the experimental data can be used to verify soil erosion processes in the field and to predict soil loss in a model watershed with check dams. The study proposes four similarity criteria: similarity of watershed geometry, of grain size and bare land, of the Froude number (Fr) for rainfall events, and of soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype; the two small-scale models, D(a) and D(b), are the same size but have different erosion rates and simulate the hydraulic processes of the B-Model. The results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check-dam system on the Loess Plateau, China.

  20. Models and Measurements Intercomparison 2

    NASA Technical Reports Server (NTRS)

    Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)

    1999-01-01

    Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two-dimensional zonal-mean models while seventeen were three-dimensional models. This was an international effort, as seven were from outside the United States. Six transport experiments and five chemistry experiments were designed for the various models. Models participating in the transport experiments performed simulations of chemically inert tracers, providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace gases, including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences in the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results as well as between results from different models. A sizable effort went into preparation of the database of the observations, including a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind's perturbations of the stratosphere.

  1. Study on model design and dynamic similitude relations of vibro-acoustic experiment for elastic cavity

    NASA Astrophysics Data System (ADS)

    Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi

    2018-05-01

    Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation may produce severe random vibration of the structure and even lead to fatigue failure, threatening flight safety. Vibro-acoustic experiments on scaled-down models are an effective means of clarifying the effects of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic experiments on cavities in wind tunnels, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of a distorted model; the proposed similitude relations were then verified by experiments and numerical simulation. The research shows that the established similitude relations allow a scaled-down model to accurately reproduce the structural dynamic characteristics of the actual model, which provides theoretical guidance for the structural design of, and vibro-acoustic experiments on, scaled-down elastic cavity models.

  2. Fiber-optical sensor with intensity compensation model in college teaching of physics experiment

    NASA Astrophysics Data System (ADS)

    Su, Liping; Zhang, Yang; Li, Kun; Zhang, Yu

    2017-08-01

    Optical fiber sensor technology is one of the main components of modern information technology and holds an important position in modern science and technology. Fiber-optic sensor experiments can raise students' enthusiasm and broaden their horizons in the college physics laboratory. This paper introduces the main structure and working principle of a fiber-optical sensor with an intensity-compensation model. The sensor is then applied to micro-displacement measurement in two college physics experiments: measurement of Young's modulus and measurement of the linear expansion coefficient of a metal. Results indicate that the measurement accuracy for micro displacement is higher with the intensity-compensated fiber-optical sensor than with traditional methods. This approach also helps students understand optical fibers, sensors, and the nature of micro-displacement measurement; strengthens the connections and compatibility between experiments; and provides a new idea for the reform of experimental teaching.

  3. 'Chain pooling' model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment without replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model-deletion procedures in fitting the results of a single experiment. The investigation consisted of random-number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
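    The setting — a saturated 2^4 experiment with no replication — means every degree of freedom is used up by effect estimates, so there is no independent error estimate and small effects must be pooled to form one. A sketch of the effect-estimation step and the pooling idea (the pooling rule here is deliberately simplified; the paper's actual procedure applies nominal significance levels at each stage):

```python
import itertools
import math

def factorial_effects(y):
    """Effect estimates for a saturated 2^k full factorial, with y ordered
    to match itertools.product([-1, 1], repeat=k) (last factor fastest)."""
    k = int(math.log2(len(y)))
    rows = list(itertools.product([-1, 1], repeat=k))
    effects = {}
    for r in range(1, k + 1):
        for combo in itertools.combinations(range(k), r):
            contrast = sum(yi * math.prod(row[j] for j in combo)
                           for yi, row in zip(y, rows))
            effects[combo] = 2 * contrast / len(y)   # high mean minus low mean
    return effects

# Toy data: the response depends only on factor 0 (true effect = 6)
rows = list(itertools.product([-1, 1], repeat=4))
y = [10 + 3 * row[0] for row in rows]
effects = factorial_effects(y)

# "Chain pooling" idea: treat the smallest-magnitude effects as noise,
# pool them into an error mean square, then test the survivors against it
pooled = sorted(effects.items(), key=lambda kv: abs(kv[1]))[:-3]
error_ms = sum(e * e for _, e in pooled) / len(pooled)
print(effects[(0,)], error_ms)   # → 6.0 0.0 (noise-free toy data)
```

With noisy data the pooled mean square is nonzero, and the number of effects pooled at each stage is exactly the tuning decision the paper's random-number studies were designed to evaluate.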

  4. Computational modeling of joint U.S.-Russian experiments relevant to magnetic compression/magnetized target fusion (MAGO/MTF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.

    1997-12-31

    Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.

  5. A backwards glance at words: Using reversed-interior masked primes to test models of visual word identification

    PubMed Central

    Lupker, Stephen J.

    2017-01-01

    The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model. PMID:29244824

  6. Watching Faults Grow in Sand

    NASA Astrophysics Data System (ADS)

    Cooke, M. L.

    2015-12-01

    Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandboxes experiments where students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions they would change and what outcomes they would expect from these changes; hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment. The experiment is set up with spatially varying basal friction before the meeting and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.

  7. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
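    One common way to give that "instant experience" is a lineup protocol: the analyst's real residual plot is hidden among panels simulated from the fitted model, and only if the real panel stands out is there visual evidence against an assumption. A numpy-only sketch that builds the panel data (the actual plotting of each panel is left out):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 100)      # data meeting the assumptions

slope, intercept = np.polyfit(x, y, 1)           # fit the linear normal model
resid = y - (intercept + slope * x)
sigma = np.std(resid, ddof=2)                    # residual standard deviation

# 19 "null" panels drawn from the fitted model, plus the real residuals
# inserted at a random position known only to the protocol
panels = [rng.normal(0, sigma, 100) for _ in range(19)]
true_pos = int(rng.integers(0, 20))
panels.insert(true_pos, resid)

# Each (x, panels[i]) pair would be drawn as one residual-vs-x plot;
# a viewer who cannot pick out panel `true_pos` has no visual evidence
# that the model assumptions are violated.
print(len(panels), true_pos)
```

Because the decoy panels are generated under the null model, repeated exposure to such lineups trains exactly the calibration the abstract describes: knowing how much apparent pattern a valid model can produce by chance.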

  8. Looking beyond Lewis Structures: A General Chemistry Molecular Modeling Experiment Focusing on Physical Properties and Geometry

    ERIC Educational Resources Information Center

    Linenberger, Kimberly J.; Cole, Renee S.; Sarkar, Somnath

    2011-01-01

    We present a guided-inquiry experiment using Spartan Student Version, ready to be adapted and implemented into a general chemistry laboratory course. The experiment provides students an experience with Spartan Molecular Modeling software while discovering the relationships between the structure and properties of molecules. Topics discussed within…

  9. Model-Based Analysis of Biopharmaceutic Experiments To Improve Mechanistic Oral Absorption Modeling: An Integrated in Vitro in Vivo Extrapolation Perspective Using Ketoconazole as a Model Drug.

    PubMed

    Pathak, Shriram M; Ruff, Aaron; Kostewicz, Edmund S; Patel, Nikunjkumar; Turner, David B; Jamei, Masoud

    2017-12-04

    Mechanistic modeling of in vitro data generated from metabolic enzyme systems (viz., liver microsomes, hepatocytes, rCYP enzymes, etc.) facilitates in vitro-in vivo extrapolation (IVIV_E) of metabolic clearance, which plays a key role in the successful prediction of clearance in vivo within physiologically-based pharmacokinetic (PBPK) modeling. A similar concept can be applied to solubility and dissolution experiments whereby mechanistic modeling can be used to estimate intrinsic parameters required for mechanistic oral absorption simulation in vivo. However, this approach has not widely been applied within an integrated workflow. We present a stepwise modeling approach where relevant biopharmaceutics parameters for ketoconazole (KTZ) are determined and/or confirmed from the modeling of in vitro experiments before being directly used within a PBPK model. Modeling was applied to various in vitro experiments, namely: (a) aqueous solubility profiles to determine intrinsic solubility and salt limiting solubility factors and to verify pKa; (b) biorelevant solubility measurements to estimate bile-micelle partition coefficients; (c) fasted state simulated gastric fluid (FaSSGF) dissolution for formulation disintegration profiling; and (d) transfer experiments to estimate supersaturation and precipitation parameters. These parameters were then used within a PBPK model to predict the dissolved and total (i.e., including the precipitated fraction) concentrations of KTZ in the duodenum of a virtual population and compared against observed clinical data. The developed model well characterized the intraluminal dissolution, supersaturation, and precipitation behavior of KTZ. The mean simulated AUC(0-t) of the total and dissolved concentrations of KTZ were comparable to (within 2-fold of) the corresponding observed profile. Moreover, the developed PBPK model of KTZ successfully described the impact of supersaturation and precipitation on the systemic plasma concentration profiles of KTZ for 200, 300, and 400 mg doses. These results demonstrate that IVIV_E applied to biopharmaceutical experiments can be used to understand and build confidence in the quality of the input parameters and mechanistic models used for mechanistic oral absorption simulations in vivo, thereby improving the prediction performance of PBPK models. Moreover, this approach can inform the selection and design of in vitro experiments, potentially eliminating redundant experiments and thus helping to reduce the cost and time of drug product development.

  10. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
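The likelihood-based consistency metrics used in CSEP-style evaluations typically model the count in each space cell as an independent Poisson variable with the forecast rate as its mean. A minimal sketch of the resulting joint log-likelihood, with toy numbers rather than actual western-Pacific forecasts:

```python
import math

def poisson_loglik(forecast, observed):
    """Joint log-likelihood of observed earthquake counts per grid cell,
    assuming independent Poisson counts with the forecast rates as means."""
    ll = 0.0
    for lam, n in zip(forecast, observed):
        ll += -lam + n * math.log(lam) - math.log(math.factorial(n))
    return ll

# Toy 4-cell annual forecast (expected numbers of Mw >= 5.8 events) vs. outcome
forecast = [0.5, 1.2, 0.1, 0.3]
observed = [1, 1, 0, 0]
ll = poisson_loglik(forecast, observed)   # higher (less negative) is better
```

Comparative measures such as the t-test variant mentioned above then operate on per-earthquake information gains, i.e. differences of such log-likelihoods between two competing models.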

  11. On viewer motivation, unit of analysis, and the VIMAP. Comment on "Move me, astonish me ... delight my eyes and brain: The Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP) and corresponding affective, evaluative, and neurophysiological correlates" by Matthew Pelowski et al.

    NASA Astrophysics Data System (ADS)

    Tinio, Pablo P. L.

    2017-07-01

    The Vienna Integrated Model of Art Perception (VIMAP; [5]) is the most comprehensive model of the art experience today. The model incorporates bottom-up and top-down cognitive processes and accounts for different outcomes of the art experience, such as aesthetic evaluations, emotions, and physiological and neurological responses to art. In their presentation of the model, Pelowski et al. also present hypotheses that are amenable to empirical testing. These features make the VIMAP an ambitious model that attempts to explain how meaningful, complex, and profound aspects of the art experience come about, which is a significant extension of previous models of the art experience (e.g., [1-3,10]), and which gives the VIMAP good explanatory power.

  12. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  13. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE PAGES

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...

    2012-05-29

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  14. Shopper marketing: a new challenge for Spanish community pharmacies.

    PubMed

    Gavilan, Diana; Avello, Maria; Abril, Carmen

    2014-01-01

    Changes that have occurred over the past few decades in retailing and in the health care sector--namely, a drastic reduction in drug profit margins and a more critical use of health services by patients--have created a scenario characterized by rising competitiveness. This new context requires community pharmacies (hereafter, pharmacies) to improve their business model through new strategies. Shopper marketing has proven invaluable in other retail settings and could therefore be a critical element of new practices in pharmacies. The first objective was to analyze how shopping experiences in pharmacies based on new shopper marketing practices affect shopping behavior; the second was to study the mediating effect of customer satisfaction on the relationship between shopping experiences and shopping behavior. A self-reported questionnaire was developed to measure four concepts: hedonic experience (enjoyable), functional experience (goal-oriented), customer satisfaction, and shopping behavior. Data were collected from 28 different pharmacies dispersed throughout Spain. Structural equation modeling (SEM) was used to test the relationships in the theoretical model. First, the measurement model was estimated to assess model fit, reliability, and convergent and discriminant validity. Then, the parameters of the structural model were estimated and the mediation effects were tested. Functional experience and hedonic experience each correlate significantly and positively with customer satisfaction and with customer shopping behavior (purchases and loyalty). Moreover, the effects of each type of experience on shopping behavior are partially mediated by customer satisfaction. The results suggest that even in Spanish pharmacies, which have traditionally been considered strictly functional retailers, ensuring customer satisfaction and enhancing shopping behavior now demand more than just functional experiences.
Moreover, a customer's experience at a pharmacy can itself trigger a shopping cycle; therefore, pharmacists should consider prioritizing investments in hedonic experiences.
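The partial-mediation finding (experience → satisfaction → behavior) can be illustrated outside a full SEM with a simple product-of-coefficients calculation on synthetic data: regress the mediator on the predictor, regress the outcome on both, and multiply the two path coefficients. This is a simplified Baron-Kenny-style sketch, not the authors' fitted model, and every number below is invented.

```python
import random

def ols(X, y):
    """Least squares via the normal equations X'X b = X'y (Gauss-Jordan);
    adequate for the small, well-conditioned systems used here."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for c in range(p):
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        b[c] /= piv
        for r in range(p):
            if r != c:
                f = A[r][c]
                A[r] = [vr - f * vc for vr, vc in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return b

random.seed(7)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]               # shopping experience
m = [0.6 * xi + random.gauss(0, 0.3) for xi in x]        # satisfaction (mediator)
y = [0.5 * mi + 0.2 * xi + random.gauss(0, 0.3)          # shopping behavior
     for xi, mi in zip(x, m)]

a = ols([[1.0, xi] for xi in x], m)[1]                   # path x -> m
b_path = ols([[1.0, xi, mi] for xi, mi in zip(x, m)], y)[2]  # path m -> y | x
indirect = a * b_path                                    # mediated effect
```

With these synthetic coefficients the indirect effect recovers roughly 0.6 × 0.5 = 0.3; in practice its significance would be assessed with bootstrap confidence intervals.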

  15. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  16. Generative Modeling for Machine Learning on the D-Wave

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thulasidasan, Sunil

    These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
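The "learning parameters: RBM training" and contrastive-divergence items above can be made concrete with a tiny CPU sketch of one CD-1 update for a binary restricted Boltzmann machine (biases omitted for brevity). In the D-Wave setting described in the slides, samples from the annealer replace the negative-phase Gibbs step.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update of a binary RBM's weights.
    W[i][j] couples visible unit i to hidden unit j; biases are omitted."""
    nv, nh = len(W), len(W[0])
    # positive phase: hidden activations driven by the data vector v0
    ph0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    h0 = [1 if random.random() < p else 0 for p in ph0]
    # negative phase: one Gibbs step (hidden -> visible -> hidden)
    pv1 = [sigmoid(sum(h0[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    v1 = [1 if random.random() < p else 0 for p in pv1]
    ph1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # approximate gradient: <v h>_data - <v h>_model
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])

random.seed(1)
W = [[0.0] * 2 for _ in range(3)]        # 3 visible units, 2 hidden units
for _ in range(100):
    cd1_step(W, [1, 0, 1])               # train on a single toy pattern
```

Mapping the same machine onto Chimera-structured hardware additionally restricts which W[i][j] entries are allowed to be nonzero, which is why the slides discuss a "Chimera restricted RBM".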

  17. Implementation of an object oriented track reconstruction model into multiple LHC experiments*

    NASA Astrophysics Data System (ADS)

    Gaines, Irwin; Gonzalez, Saul; Qian, Sijin

    2001-10-01

    An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); the OO model's performance, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software framework of the experiments. In summary, our practice shows that OO technology makes software development and integration straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.

  18. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment designed for climate and chemistry models

    DOE PAGES

    Tilmes, S.; Mills, Mike; Niemeier, Ulrike; ...

    2015-01-15

    A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.

  19. Colloid-Facilitated Transport of 137Cs in Fracture-Fill Material. Experiments and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dittrich, Timothy M.; Reimus, Paul William

    2015-10-29

    In this study, we demonstrate how a combination of batch sorption/desorption experiments and column transport experiments was used to effectively parameterize a model describing the colloid-facilitated transport of Cs in the Grimsel granodiorite/FFM system. Cs partition coefficient estimates onto both the colloids and the stationary media obtained from the batch experiments were used as initial estimates of partition coefficients in the column experiments, and then the column experiment results were used to obtain refined estimates of the number of different sorption sites and the adsorption and desorption rate constants of the sites. The desorption portion of the column breakthrough curves highlighted the importance of accounting for adsorption-desorption hysteresis (or a very nonlinear adsorption isotherm) of the Cs on the FFM in the model, and this portion of the breakthrough curves also dictated that there be at least two different types of sorption sites on the FFM. In the end, the two-site model parameters estimated from the column experiments provided excellent matches to the batch adsorption/desorption data, which provided a measure of assurance in the validity of the model.
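The two-site behavior described above can be sketched as a batch kinetic model with one fast, reversible site and one slow, nearly irreversible site; slow desorption from the second site is what produces the adsorption-desorption hysteresis seen in the breakthrough tails. All rate constants below are hypothetical placeholders, not the fitted Grimsel/FFM values.

```python
def two_site_batch(c0, k=(0.05, 0.01, 0.02, 0.0005), t_end=500.0, dt=0.1):
    """Batch sorption kinetics with two site types (explicit Euler):
    site 1 is fast and reversible, site 2 sorbs slowly but barely desorbs.
    Rates are in 1/h and are illustrative only."""
    k1a, k1d, k2a, k2d = k
    c, s1, s2, t = c0, 0.0, 0.0, 0.0     # dissolved and sorbed fractions
    while t < t_end:
        r1 = k1a * c - k1d * s1          # net sorption rate, fast site
        r2 = k2a * c - k2d * s2          # net sorption rate, slow site
        c -= (r1 + r2) * dt
        s1 += r1 * dt
        s2 += r2 * dt
        t += dt
    return c, s1, s2

c, s1, s2 = two_site_batch(1.0)          # start fully dissolved
```

In a column version, c would additionally be advected and colloid-bound Cs tracked as a separate mobile phase; as in the study above, the batch fit supplies the initial rate-constant estimates that the column data then refine.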

  20. Psychiatry's next top model: cause for a re-think on drug models of psychosis and other psychiatric disorders.

    PubMed

    Carhart-Harris, R L; Brugger, S; Nutt, D J; Stone, J M

    2013-09-01

    Despite the widespread application of drug modelling in psychiatric research, the relative value of different models has never been formally compared in the same analysis. Here we compared the effects of five drugs (cannabis, psilocybin, amphetamine, ketamine and alcohol) in relation to psychiatric symptoms in a two-part subjective analysis. In the first part, mental health professionals associated statements referring to specific experiences, for example 'I don't bother to get out of bed', to one or more psychiatric symptom clusters, for example depression and negative psychotic symptoms. This measured the specificity of an experience for a particular disorder. In the second part, individuals with personal experience with each of the above-listed drugs were asked how reliably each drug produced the experiences listed in part 1, both acutely and sub-acutely. Part 1 failed to find any experiences that were specific for negative or cognitive psychotic symptoms over depression. The best model of positive symptoms was psilocybin and the best models overall were the acute alcohol and amphetamine models of mania. These results challenge current assumptions about drug models and motivate further research on this understudied area.

  1. Teaching a Student to Read through a Screen: Using SKYPE to Facilitate a Field Experience

    ERIC Educational Resources Information Center

    Gunther, Jeanne

    2016-01-01

    The Distance Clinical Connecting Candidates and Children (DC4) is an innovative new model for providing a clinical experience in a reading methods course. Pre-service teachers used this model to implement assessments and lessons via SKYPE with local elementary students. I designed this model to provide a clinical experience when faced with…

  2. The International Heat Stress Genotype Experiment for modeling wheat response to heat: field experiments and AgMIP-Wheat multi-model simulations

    USDA-ARS?s Scientific Manuscript database

    The data set contains a portion of the International Heat Stress Genotype Experiment (IHSGE) data used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat crop models and quantify the impact of heat on global wheat yield productivity. It includes two spring wheat cultivars grown during...

  3. Integration of remote sensing and hydrologic modeling through multi-disciplinary semiarid field campaigns: Monsoon '90, Walnut Gulch '92, and SALSA-MEX

    NASA Technical Reports Server (NTRS)

    Moran, M. S.; Goodrich, D. C.; Kustas, W. P.

    1994-01-01

    A research and modeling strategy is presented for the development of distributed hydrologic models driven by a combination of remotely sensed and ground-based data. In support of this strategy, two experiments, Monsoon '90 and Walnut Gulch '92, were conducted in a semiarid rangeland southeast of Tucson, Arizona (U.S.), and a third experiment, SALSA-MEX (Semi Arid Land Surface Atmospheric Mountain Experiment), was proposed. Results from the Monsoon '90 experiment substantially advanced the understanding of the hydrologic and atmospheric fluxes in an arid environment and provided insight into the use of remote sensing data for hydrologic modeling. The Walnut Gulch '92 experiment addressed the seasonal hydrologic dynamics of the region and the potential of combined optical-microwave remote sensing for hydrologic applications. SALSA-MEX will combine measurements and modeling to study hydrologic processes influenced by surrounding mountains, such as enhanced precipitation, snowmelt, and recharge to groundwater aquifers. The results from these experiments, along with the extensive experimental data bases, should aid the research community in large-scale modeling of mass and energy exchanges across the soil-plant-atmosphere interface.

  4. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    DOE PAGES

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  5. A Study of Experience Credit for Professional Engineering Licensure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, M.A.

    2003-08-11

    Oak Ridge National Laboratory performed a study of experience credit for professional engineering licensure for the Department of Energy's Industrial Assessment Center (IAC) Program. One of the study's goals was to determine how state licensure boards grant experience credit for engineering licensure, particularly in regard to IAC experience and experience prior to graduation. Another goal involved passing IAC information to state licensure boards to allow the boards to become familiar with the program and determine if they would grant credit to IAC graduates. The National Council of Examiners for Engineering and Surveying (NCEES) has adopted a document, the "Model Law". This document empowers states to create state engineering boards and oversee engineering licensure. The board can also interpret and adopt rules and regulations. The Model Law also gives a general "process" for engineering licensure, the "Model Law Engineer". The Model Law Engineer requires that an applicant for professional licensure, or professional engineering (PE) licensure, obtain a combination of formal education and professional experience and successfully complete the fundamentals of engineering (FE) and PE exams. The Model Law states that a PE applicant must obtain four years of "acceptable" engineering experience after graduation to be allowed to sit for the PE exam. Although the Model Law defines "acceptable experience," it is somewhat open to interpretation, and state boards decide whether applicants have accumulated the necessary amount of experience. The Model Law also allows applicants one year of credit for postgraduate degrees as well as experience credit for teaching courses in engineering. The Model Law grants states the power to adopt and amend the bylaws and rules of the Model Law licensure process. It allows state boards the freedom to modify the experience requirements for professional licensure.
This power has created variety in experience requirements, and licensure requirements can differ from state to state. Before this study began, six questions were developed to help document how state boards grant experience credit. Many of the questions were formulated to determine how states deal with teaching experience, postgraduate credit, experience prior to graduation, PE and FE waivers, and the licensure process in general. Data were collected from engineering licensure boards for each of the fifty states and the District of Columbia. Telephone interviews were the primary method of data collection, with email correspondence used to a lesser degree. Prior to contacting each board, the researchers attempted to review each state's licensure web site. Based on the data collected, several trends and patterns were identified. For example, there is a general trend away from offering credit for experience prior to graduation. The issue becomes a problem when a PE from one state attempts to gain a license in another state by comity or endorsement. Tennessee and Kansas have recently stopped offering this credit, and Mississippi cautions applicants that it could be difficult to obtain licensure in other states.

  6. Relationship between a solar drying model of red pepper and the kinetics of pure water evaporation (1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passamai, V.; Saravia, L.

    1997-05-01

    Drying of red pepper under solar radiation was investigated, and a simple model related to water evaporation was developed. Drying experiments at constant laboratory conditions were undertaken in which solar radiation was simulated by a 1,000 W lamp. In this first part of the work, water evaporation under radiation is studied and laboratory experiments are presented with two objectives: to verify Penman's model of evaporation under radiation, and to validate the laboratory experiments. Modifying Penman's model of evaporation by introducing two drying conductances as a function of water content allows the development of a drying model under solar radiation. In the second part of this paper, the model is validated by applying it to red pepper open-air solar drying experiments.

  7. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
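A common MBDoE formulation selects sampling times that maximize the determinant of the Fisher information matrix (D-optimality), since a near-singular determinant signals exactly the practical non-identifiability discussed above. A minimal sketch for a one-compartment IV-bolus model C(t) = (dose/V)·exp(-k·t) with parameters (V, k); every number is illustrative, and this is not the paper's case-study model.

```python
import math
from itertools import combinations

def fim_det(times, V=10.0, k=0.2, dose=100.0, sigma=0.1):
    """Determinant of the 2x2 Fisher information matrix for the model
    C(t) = (dose/V) * exp(-k*t) with i.i.d. Gaussian measurement error."""
    f11 = f12 = f22 = 0.0
    for t in times:
        c = dose / V * math.exp(-k * t)
        dV = -c / V                      # sensitivity dC/dV
        dk = -t * c                      # sensitivity dC/dk
        f11 += dV * dV / sigma ** 2
        f12 += dV * dk / sigma ** 2
        f22 += dk * dk / sigma ** 2
    return f11 * f22 - f12 * f12

# D-optimal choice of 3 sampling times (h) from a candidate grid
candidates = [0.5, 1, 2, 4, 8, 12, 24]
best = max(combinations(candidates, 3), key=fim_det)
```

Clustered sampling times make the two sensitivity profiles nearly proportional and shrink the determinant, reflecting poorer practical identifiability; spreading samples across the curve restores it.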

  8. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
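The second building method, stepwise regression, can be illustrated with a small forward-selection sketch: starting from an intercept-only model, repeatedly add the candidate math-model term (loads, squares, cross-products) that most reduces the residual sum of squares. The synthetic "calibration" data below are invented, not the simulated balance data from the study.

```python
import random

def ols_sse(cols, y):
    """Residual sum of squares of y ~ intercept + cols, via normal equations."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                 # Gauss-Jordan solve of X'X b = X'y
        piv = A[col][col]
        A[col] = [v / piv for v in A[col]]
        b[col] /= piv
        for r in range(p):
            if r != col:
                f = A[r][col]
                A[r] = [vr - f * vc for vr, vc in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return sum((y[i] - sum(b[j] * X[i][j] for j in range(p))) ** 2
               for i in range(n))

random.seed(3)
F1 = [random.uniform(-1, 1) for _ in range(150)]         # primary load
F2 = [random.uniform(-1, 1) for _ in range(150)]         # secondary load
y = [2 * a + 0.5 * a * b + random.gauss(0, 0.05)         # gage response
     for a, b in zip(F1, F2)]
terms = {'F1': F1, 'F2': F2, 'F1^2': [a * a for a in F1],
         'F1*F2': [a * b for a, b in zip(F1, F2)]}

selected = []
while len(selected) < 2:                 # grow the model two terms, greedily
    name = min((t for t in terms if t not in selected),
               key=lambda t: ols_sse([terms[s] for s in selected + [t]], y))
    selected.append(name)
```

Real stepwise procedures also test previously added terms for removal and use F-statistics or information criteria rather than raw SSE as the stopping rule.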

  9. Analysis of Time-Series Quasi-Experiments. Final Report.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Maguire, Thomas O.

    The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…

  10. Structuring research methods and data with the research object model: genomics workflows as a case study.

    PubMed

    Hettne, Kristina M; Dharuri, Harish; Zhao, Jun; Wolstencroft, Katherine; Belhajjame, Khalid; Soiland-Reyes, Stian; Mina, Eleni; Thompson, Mark; Cruickshank, Don; Verdes-Montenegro, Lourdes; Garrido, Julian; de Roure, David; Corcho, Oscar; Klyne, Graham; van Schouwen, Reinout; 't Hoen, Peter A C; Bechhofer, Sean; Goble, Carole; Roos, Marco

    2014-01-01

    One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide necessary meta-data for a scientist to understand and recreate the results of an experiment. To support this we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study where we analysed human metabolite variation by workflows. We present the application of the workflow-centric RO model for our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as "which particular data was input to a particular workflow to test a particular hypothesis?", and "which particular conclusions were drawn from a particular workflow?". Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment, allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well. The Research Object is available at http://www.myexperiment.org/packs/428 The Wf4Ever Research Object Model is available at http://wf4ever.github.io/ro.

  11. Designing Experiments to Discriminate Families of Logic Models.

    PubMed

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subjected to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with minimal number of experiments. The methods are applied on signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) 15% close to the ones obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.
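The core fitness computation in this kind of work — simulate a candidate Boolean logic model under each perturbation and score it against the measured readouts — can be sketched in a few lines. The wiring and data below are invented toy values, not the liver-cell pathway or phosphoproteomics data from the paper.

```python
def simulate(model, inputs, steps=10):
    """Synchronously update a Boolean network for a fixed number of steps.
    `model` maps regulated nodes to update functions of the current state;
    input nodes keep their clamped values."""
    state = dict(inputs)
    for node in model:
        state.setdefault(node, 0)        # unobserved nodes start off
    for _ in range(steps):
        state = {n: (model[n](state) if n in model else state[n])
                 for n in state}
    return state

# Toy wiring: raf is active when egf is present and the raf inhibitor is
# absent; erk simply follows raf (names are illustrative, not a fitted model)
model = {'raf': lambda s: s['egf'] and not s['raf_i'],
         'erk': lambda s: s['raf']}

# (perturbation, observed erk) pairs, as from a phosphoproteomics screen
data = [({'egf': 1, 'raf_i': 0}, 1),
        ({'egf': 1, 'raf_i': 1}, 0),
        ({'egf': 0, 'raf_i': 0}, 0)]

# mean squared error of model predictions against the observations
mse = sum((simulate(model, perturb)['erk'] - obs) ** 2
          for perturb, obs in data) / len(data)
```

A family of feasible models is then the set of wirings whose score stays within a tolerance of the best one; the paper's contribution is choosing new perturbations that discriminate within such families most effectively.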

  12. A simple analytical infiltration model for short-duration rainfall

    NASA Astrophysics Data System (ADS)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip, and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
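For reference, the classical Philip model used above as a comparison expresses the infiltration rate as the two-term series i(t) = S/(2√t) + K, whose integral gives cumulative infiltration I(t) = S√t + Kt. A small sketch with illustrative parameter values (not fitted to any of the 12 soils):

```python
import math

def philip_rate(t, S=0.02, K=1e-4):
    """Philip two-term infiltration rate i(t) = S/(2*sqrt(t)) + K, with
    sorptivity S (m/s^0.5) and a conductivity-like constant K (m/s)."""
    return S / (2.0 * math.sqrt(t)) + K

def philip_cumulative(t, S=0.02, K=1e-4):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t."""
    return S * math.sqrt(t) + K * t

# the rate is the time derivative of cumulative infiltration: check numerically
t, h = 600.0, 1e-3
numeric = (philip_cumulative(t + h) - philip_cumulative(t - h)) / (2 * h)
```

The t^(-1/2) term diverges at t = 0 and dominates early on, which is one reason short-duration events are sensitive to initial soil conditions and motivate SHR-specific formulations like SHIP.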

  13. Modeling of the jack rabbit series of experiments with a temperature based reactive burn model

    NASA Astrophysics Data System (ADS)

    Desbiens, Nicolas

    2017-01-01

    The Jack Rabbit experiments, performed by Lawrence Livermore National Laboratory, focus on detonation wave corner turning and shock desensitization. Indeed, while important for safety or charge design, the behaviour of explosives in these regimes is poorly understood. In this paper, our temperature based reactive burn model is calibrated for LX-17 and compared to the Jack Rabbit data. It is shown that our model can reproduce the corner turning and shock desensitization behaviour of four out of the five experiments.

  14. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investment and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management; predicted uses of the models include serving as a central tool and as an educational tool, improving consistency in decision making, improving communications, and supporting consensus decision making.

  15. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
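
    As a sketch of this kind of approach, the snippet below fits a small Gaussian process (NumPy only, squared-exponential kernel) to a few hypothetical runtime measurements and then selects the next experiment where the predictive variance is largest, the usual active-learning criterion. The parameter, data values, and kernel length scale are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and pointwise variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical runtimes vs. a single machine parameter (e.g. core count).
X = np.array([1.0, 2.0, 4.0])
y = np.array([10.0, 6.0, 4.0])
grid = np.linspace(1.0, 8.0, 50)
mean, var = gp_posterior(X, y, grid)

# Active learning: run the next experiment where the model is least certain.
next_x = grid[np.argmax(var)]
print(next_x)
```

The predictive variance collapses near measured configurations and grows away from them, so the rule naturally pushes new experiments into unexplored regions of the parameter space.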

  16. The Design of an Instructional Model Based on Connectivism and Constructivism to Create Innovation in Real World Experience

    ERIC Educational Resources Information Center

    Jirasatjanukul, Kanokrat; Jeerungsuwan, Namon

    2018-01-01

    The objectives of the research were to (1) design an instructional model based on Connectivism and Constructivism to create innovation in real world experience, (2) assess the model designed--the designed instructional model. The research involved 2 stages: (1) the instructional model design and (2) the instructional model rating. The sample…

  17. Teaching Human Genetics with Mustard: Rapid Cycling "Brassica rapa" (Fast Plants Type) as a Model for Human Genetics in the Classroom Laboratory

    ERIC Educational Resources Information Center

    Wendell, Douglas L.; Pickard, Dawn

    2007-01-01

    We have developed experiments and materials to model human genetics using rapid cycling "Brassica rapa", also known as Fast Plants. Because of their self-incompatibility for pollination and the genetic diversity within strains, "B. rapa" can serve as a relevant model for human genetics in teaching laboratory experiments. The experiment presented…

  18. NASA Experience with CMM and CMMI

    NASA Technical Reports Server (NTRS)

    Crumbley, Tim; Kelly, John C.

    2010-01-01

    This slide presentation reviews the experience NASA has had in using Capability Maturity Model (CMM) and Capability Maturity Model Integration (CMMI). In particular this presentation reviews the agency's experience within the software engineering discipline and the lessons learned and key impacts from using CMMI.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model.
    Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of ‘information gain’ through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to evaluate how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models. This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team’s work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters.
    This is an improvement to model validation and uncertainty quantification stemming beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has supported the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other is a post-doctoral fellow at Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.
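
    The coverage idea mentioned above can be sketched very simply: treat the design domain as a grid and measure the fraction of it that lies within some radius of at least one physical experiment's setting. The 1-D domain, radius, and experiment locations below are hypothetical; the project's actual metric is more general than this caricature.

```python
# Fraction of a gridded 1-D design domain [lo, hi] within distance r
# of at least one experiment setting (illustrative coverage metric).
def coverage(experiments, lo, hi, r, n=1000):
    hits = 0
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)        # grid point in the domain
        if any(abs(x - e) <= r for e in experiments):
            hits += 1
    return hits / n

# Two experiments at 0.2 and 0.8 each cover an interval of width 2r = 0.2,
# so about 40% of the unit domain is covered.
print(coverage([0.2, 0.8], 0.0, 1.0, r=0.1))
```

A batch sequential design could then, for example, prefer candidate experiments that most increase this coverage while also maximizing expected information gain.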

  20. Predictions of Cockpit Simulator Experimental Outcome Using System Models

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    This study involved predicting the outcome of a cockpit simulator experiment in which pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics, including the response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Predicted outcomes included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
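
    A minimal discrete-time sketch of in-trail string dynamics of this kind: each following pilot adjusts speed in proportion to the spacing error behind the aircraft ahead, and initial position errors decay down the string. The gain, speeds, and separation criterion below are illustrative values, not the study's calibrated pilot/aircraft/CDTI model.

```python
# Proportional-correction string-following sketch (illustrative values).
def simulate(n_aircraft=4, steps=200, dt=1.0, gain=0.05,
             v_lead=70.0, target_gap=1500.0):
    pos = [-2000.0 * i for i in range(n_aircraft)]   # initial gaps too wide
    vel = [v_lead] * n_aircraft
    for _ in range(steps):
        errs = [(pos[i - 1] - pos[i]) - target_gap   # spacing errors (m)
                for i in range(1, n_aircraft)]
        pos[0] += v_lead * dt                        # lead at constant speed
        for i in range(1, n_aircraft):
            vel[i] = v_lead + gain * errs[i - 1]     # proportional correction
            pos[i] += vel[i] * dt
    return [pos[i - 1] - pos[i] for i in range(1, n_aircraft)]

gaps = simulate()
print(gaps)  # gaps converge toward the 1500 m separation criterion
```

Sweeping `target_gap` or `gain` in such a simulation is the flavor of sensitivity analysis to separation criteria that the study's predictions addressed.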

  1. Assimilative modeling of low latitude ionosphere

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Wang, Chunining; Hajj, George A.; Rosen, I. Gary; Wilson, Brian D.; Mannucci, Anthony J.

    2004-01-01

    In this paper we present an observation system simulation experiment for modeling the low-latitude ionosphere using a 3-dimensional (3-D) global assimilative ionospheric model (GAIM). The experiment is conducted to test the effectiveness of GAIM with a 4-D variational approach (4DVAR) in estimating the ExB drift and thermospheric wind in the magnetic meridional planes simultaneously for all longitude or local time sectors. The operational Global Positioning System (GPS) satellites and the ground-based global GPS receiver network of the International GPS Service are used in the experiment as the data assimilation source. The optimization of the ionospheric state (electron density) modeling is performed through a nonlinear least-squares minimization process that adjusts the dynamical forces to reduce the difference between the modeled and observed slant total electron content in the entire modeled region. The present experiment for multiple force estimations reinforces our previous assessment made through single driver estimations conducted for the ExB drift only.
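
    The nonlinear least-squares minimization at the heart of a 4DVAR-style estimation can be sketched with a Gauss-Newton iteration that adjusts driving parameters until a forward model's output matches the observations. The exponential forward model below is a stand-in for GAIM's slant-TEC operator, and all values are hypothetical.

```python
import numpy as np

def forward(p, x):
    """Toy forward model standing in for modeled slant TEC."""
    return p[0] * np.exp(-p[1] * x)

def gauss_newton(p, x, obs, iters=50):
    """Iteratively adjust parameters p to minimize ||obs - forward(p)||."""
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        r = obs - forward(p, x)                       # residuals
        J = np.empty((len(x), len(p)))                # numeric Jacobian
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (forward(p + dp, x) - forward(p, x)) / 1e-6
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return p

x = np.linspace(0.0, 2.0, 20)
truth = np.array([50.0, 0.8])
obs = forward(truth, x)                 # noise-free synthetic observations
est = gauss_newton([30.0, 0.5], x, obs)
print(est)  # should recover approximately [50, 0.8]
```

In the actual assimilation the adjusted parameters would be physical drivers (ExB drift, thermospheric wind) and the residuals would span every slant TEC link in the modeled region.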

  2. Overview of experiment design and comparison of models participating in phase 1 of the SPARC Quasi-Biennial Oscillation initiative (QBOi)

    NASA Astrophysics Data System (ADS)

    Butchart, Neal; Anstey, James A.; Hamilton, Kevin; Osprey, Scott; McLandress, Charles; Bushell, Andrew C.; Kawatani, Yoshio; Kim, Young-Ha; Lott, Francois; Scinocca, John; Stockdale, Timothy N.; Andrews, Martin; Bellprat, Omar; Braesicke, Peter; Cagnazzo, Chiara; Chen, Chih-Chieh; Chun, Hye-Yeong; Dobrynin, Mikhail; Garcia, Rolando R.; Garcia-Serrano, Javier; Gray, Lesley J.; Holt, Laura; Kerzenmacher, Tobias; Naoe, Hiroaki; Pohlmann, Holger; Richter, Jadwiga H.; Scaife, Adam A.; Schenzinger, Verena; Serva, Federico; Versick, Stefan; Watanabe, Shingo; Yoshida, Kohei; Yukimoto, Seiji

    2018-03-01

    The Stratosphere-troposphere Processes And their Role in Climate (SPARC) Quasi-Biennial Oscillation initiative (QBOi) aims to improve the fidelity of tropical stratospheric variability in general circulation and Earth system models by conducting coordinated numerical experiments and analysis. In the equatorial stratosphere, the QBO is the most conspicuous mode of variability. Five coordinated experiments have therefore been designed to (i) evaluate and compare the verisimilitude of modelled QBOs under present-day conditions, (ii) identify robustness (or alternatively the spread and uncertainty) in the simulated QBO response to commonly imposed changes in model climate forcings (e.g. a doubling of CO2 amounts), and (iii) examine model dependence of QBO predictability. This paper documents these experiments and the recommended output diagnostics. The rationale behind the experimental design and choice of diagnostics is presented. To facilitate scientific interpretation of the results in other planned QBOi studies, consistent descriptions of the models performing each experiment set are given, with those aspects particularly relevant for simulating the QBO tabulated for easy comparison.

  3. Field warming experiments shed light on the wheat yield response to temperature in China

    PubMed Central

    Zhao, Chuang; Piao, Shilong; Huang, Yao; Wang, Xuhui; Ciais, Philippe; Huang, Mengtian; Zeng, Zhenzhong; Peng, Shushi

    2016-01-01

    Wheat growth is sensitive to temperature, but the effect of future warming on yield is uncertain. Here, focusing on China, we compiled 46 observations of the sensitivity of wheat yield to temperature change (SY,T, yield change per °C) from field warming experiments and 102 SY,T estimates from local process-based and statistical models. The average SY,T from field warming experiments, local process-based models and statistical models is −0.7±7.8 (±s.d.)% per °C, −5.7±6.5% per °C and 0.4±4.4% per °C, respectively. Moreover, SY,T differs across regions: warming experiments indicate positive SY,T values in regions where growing-season mean temperature is low and water supply is not limiting, and negative values elsewhere. Gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project appear to capture the spatial pattern of SY,T deduced from the warming observations. These results from local manipulative experiments could be used to improve crop models in the future. PMID:27853151

  4. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM.
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.« less
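
    One way to realize a deviation-based measure with an absolute benchmark drawn from the observational dataset, in the spirit described above, is to normalize a model's RMSE by the RMSE that a trivial predictor (the observed mean) would achieve. The regional land-allocation shares below are hypothetical numbers, not GCAM output.

```python
import math

def rmse(obs, sim):
    """Root-mean-square deviation between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def benchmark(obs):
    """Absolute benchmark from the observations themselves: the RMSE of
    always predicting the observed mean (i.e. the observations' s.d.)."""
    m = sum(obs) / len(obs)
    return math.sqrt(sum((o - m) ** 2 for o in obs) / len(obs))

# Hypothetical regional land-allocation shares (illustrative only).
obs = [0.30, 0.25, 0.20, 0.25]
sim = [0.28, 0.27, 0.22, 0.23]
skill = rmse(obs, sim) / benchmark(obs)   # < 1 beats the trivial predictor
print(skill)
```

Because the benchmark comes from the data's own statistics, a single model-experiment pair can be judged in absolute terms, without reference to other models, and the same ratio can be computed per region rather than only on global aggregates.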

  5. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE PAGES

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-29

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM.
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  6. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    NASA Astrophysics Data System (ADS)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  7. Understanding Coupled Earth-Surface Processes through Experiments and Models (Invited)

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kim, W.

    2013-12-01

    Traditionally, both numerical models and experiments have been purposefully designed to 'isolate' singular components or certain processes of a larger mountain-to-deep-ocean interconnected source-to-sink (S2S) transport system. Controlling factors driven by processes outside of the domain of immediate interest were treated and simplified as input or as boundary conditions. Increasingly, earth surface processes scientists appreciate feedbacks and explore these feedbacks with more dynamically coupled approaches to their experiments and models. Here, we discuss key concepts and recent advances made in coupled modeling and experimental setups. In addition, we emphasize challenges and new frontiers to coupled experiments. Experiments have highlighted the important role of self-organization; river and delta systems do not always need to be forced by external processes to change or develop characteristic morphologies. Similarly, modeling has shown, for example, that intricate networks in tidal deltas are stable because of the interplay between river avulsions and tidal current scouring, with both processes being important to develop and maintain the dendritic networks. Both models and experiments have demonstrated that seemingly stable systems can be perturbed slightly and show dramatic responses. Source-to-sink models were developed for both the Fly River system in Papua New Guinea and the Waipaoa River in New Zealand. These models pointed to the importance of upstream-downstream effects and reinforced our view of the S2S system as a signal transfer and dampening conveyor belt. Coupled modeling showed that deforestation had extreme effects on sediment fluxes draining from the catchment of the Waipaoa River in New Zealand, and that this increase in sediment production rapidly shifted the locus of offshore deposition. The challenge in designing coupled models and experiments is both technological and intellectual.
    Our community is advancing toward making numerical model coupling more straightforward through common interfaces and standardization of time-stepping, model domains and model parameters. At the same time, major steps forward require an interdisciplinary approach, wherein the source-to-sink system contains ecological feedbacks and human actors.

  8. Neutral null models for diversity in serial transfer evolution experiments.

    PubMed

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division, and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
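
    The serial transfer setting described above can be caricatured in a few lines: deterministic regrowth followed by a random dilution back to the bottleneck size, so that sampling at the dilution step is the only source of allele-frequency change (pure drift). The population size, transfer count, and starting frequency below are arbitrary illustrative choices, not the paper's coalescent model.

```python
import random

random.seed(1)

def serial_transfer(p0=0.5, bottleneck=1000, transfers=100):
    """Track the frequency of a neutral marker across serial transfers.
    Each dilution resamples the bottleneck binomially at frequency p."""
    p = p0
    traj = [p]
    for _ in range(transfers):
        # Growth is treated as deterministic; dilution does the sampling.
        k = sum(random.random() < p for _ in range(bottleneck))
        p = k / bottleneck
        traj.append(p)
    return traj

traj = serial_transfer()
print(traj[-1])  # frequency wanders from 0.5 with no directional trend
```

Repeating many such trajectories shows the drift-driven spread of diversity summaries that a neutral null model predicts, against which real (possibly selected) experimental data can be compared.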

  9. The not-so-sterile 4th neutrino: constraints on new gauge interactions from neutrino oscillation experiments

    NASA Astrophysics Data System (ADS)

    Kopp, Joachim; Welter, Johannes

    2014-12-01

    Sterile neutrino models with new gauge interactions in the sterile sector are phenomenologically interesting since they can lead to novel effects in neutrino oscillation experiments, in cosmology and in dark matter detectors, possibly even explaining some of the observed anomalies in these experiments. Here, we use data from neutrino oscillation experiments, in particular from MiniBooNE, MINOS and solar neutrino experiments, to constrain such models. We focus in particular on the case where the sterile sector gauge boson A′ couples also to Standard Model particles (for instance to the baryon number current) and thus induces a large Mikheyev-Smirnov-Wolfenstein potential. For eV-scale sterile neutrinos, we obtain strong constraints especially from MINOS, which restricts the strength of the new interaction to be less than ~10 times that of the Standard Model weak interaction unless active-sterile neutrino mixing is very small (sin²θ₂₄ ≲ 10⁻³). This rules out gauge forces large enough to affect short-baseline experiments like MiniBooNE and it imposes nontrivial constraints on signals from sterile neutrino scattering in dark matter experiments.
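
    The matter-potential effect underlying such constraints can be illustrated with the standard two-flavor relation for the effective mixing amplitude, sin²2θₘ = sin²2θ / (sin²2θ + (cos2θ − A)²), where A = 2EV/Δm² parameterizes the potential (here, the one induced by the A′ boson). The numerical values below are illustrative, not fits to the experiments discussed.

```python
import math

def sin2_2theta_matter(sin2_2theta, A):
    """Effective two-flavor mixing amplitude in matter.
    A = 2*E*V/dm2 is the dimensionless matter-potential parameter."""
    cos2t = math.sqrt(1.0 - sin2_2theta)
    return sin2_2theta / (sin2_2theta + (cos2t - A) ** 2)

vacuum = 4.0e-3                      # small vacuum mixing (illustrative)
print(sin2_2theta_matter(vacuum, 0.0))  # A = 0 recovers the vacuum value
print(sin2_2theta_matter(vacuum, 1.0))  # near resonance, mixing is enhanced
```

This is why a large induced potential is so constraining: even a tiny vacuum mixing can be resonantly amplified in matter, producing oscillation effects that long-baseline data such as MINOS would have seen.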

  10. Mineral solubility and free energy controls on microbial reaction kinetics: Application to contaminant transport in the subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taillefert, Martial; Van Cappellen, Philippe

    Recent developments in the theoretical treatment of geomicrobial reaction processes have resulted in the formulation of kinetic models that directly link the rates of microbial respiration and growth to the corresponding thermodynamic driving forces. The overall objective of this project was to verify and calibrate these kinetic models for the microbial reduction of uranium(VI) under geochemical conditions that mimic field conditions as closely as possible. The approach combined modeling of bacterial processes using new bioenergetic rate laws, laboratory experiments to determine the bioavailability of uranium during uranium bioreduction, evaluation of microbial growth yield under energy-limited conditions using bioreactor experiments, competition experiments between metabolic processes in environmentally relevant conditions, and model applications at the field scale. The new kinetic descriptions of microbial U(VI) and Fe(III) reduction should replace those currently used in reactive transport models that couple catabolic energy generation and growth of microbial populations to the rates of biogeochemical redox processes. The above work was carried out in collaboration between the groups of Taillefert (batch reactor experiments and reaction modeling) at Georgia Tech and Van Cappellen (retentostat experiments and reactive transport modeling) at the University of Waterloo (Canada).

  11. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  12. Using the Bifocal Modeling Framework to Resolve "Discrepant Events" between Physical Experiments and Virtual Models in Biology

    ERIC Educational Resources Information Center

    Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima

    2016-01-01

    In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this…

  13. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in 1°×1° cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: (i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; (ii) the spatial distribution seems to be the most important feature characterizing the different forecasting performances of the models; (iii) the interpretation of consistency tests may be misleading because some good models may be rejected while trivial models pass; (iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best performing model for practical purposes.
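
    A minimal sketch of likelihood-based evaluation and ensemble construction of this kind: score each gridded Poisson forecast by its joint log-likelihood against the observed counts, then form an ensemble as a weighted combination of the individual rates. The cell rates, counts, and likelihood-proportional weighting below are all illustrative assumptions, not the paper's exact method, and no claim is made that the toy ensemble outperforms its members.

```python
import math

def poisson_loglike(rates, counts):
    """Joint log-likelihood of observed counts under a gridded Poisson
    forecast (one expected rate and one observed count per cell)."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

# Two hypothetical forecasts over 4 cells, and the observed counts.
model_1 = [0.5, 1.0, 2.0, 0.5]
model_2 = [1.0, 1.0, 1.0, 1.0]
observed = [0, 1, 2, 1]

# Ensemble: convex combination of the individual forecasts, with weights
# here chosen proportional to likelihood (one simple possibility).
w1 = math.exp(poisson_loglike(model_1, observed))
w2 = math.exp(poisson_loglike(model_2, observed))
ensemble = [(w1 * a + w2 * b) / (w1 + w2) for a, b in zip(model_1, model_2)]
print(poisson_loglike(ensemble, observed))
```

In a real experiment the weights would be fit on past rounds and the ensemble re-scored prospectively, which is how the weighted combinations above were evaluated against the individual models.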

  14. CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2011-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. The testing center provides open access to researchers contributing earthquake forecast models applicable to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment starting from 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area of Japan including the surrounding sea, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. The experiments of the 1-day, 3-month, 1-year and 3-year forecasting classes have been implemented for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, observations are hardly consistent with the spatial distributions of most models in cases where many earthquakes occurred at the same spot. Throughout the experiment, it has become clear that some of CSEP's evaluation tests, such as the L-test, show strong correlation with the N-test. We are now developing our (cyber-)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event.
So, we provide Japanese seismicity catalog maintained by JMA for modelers to study how seismicity changes in Japan. (2) Now we prepare the 3-D forecasting experiment with a depth range of 0 to 100 km in Kanto region. (3) The testing center improved an evaluation system for 1-day class experiment because this testing class required fast calculation ability to finish forecasting and testing results within one day. This development will make a real-time forecasting system come true. (4) The special issue of 1st part titled Earthquake Forecast Testing Experiment in Japan was published on the Earth, Planets and Space Vol. 63, No.3, 2011 on March, 2011. This issue includes papers of algorithm of statistical models participating our experiment and outline of the experiment in Japan. The 2nd part of this issue, which is now on line, will be published soon. In this presentation, we will overview CSEP-Japan and results of the experiments, and discuss direction of our activity. An outline of the experiment and activities of the Japanese Testing Center are published on our WEB site;

  15. U.S. perspective on technology demonstration experiments for adaptive structures

    NASA Technical Reports Server (NTRS)

    Aswani, Mohan; Wada, Ben K.; Garba, John A.

    1991-01-01

    Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).

  16. Inborn and experience-dependent models of categorical brain organization. A position paper

    PubMed Central

    Gainotti, Guido

    2015-01-01

    The present review aims to summarize the debate in contemporary neuroscience between inborn and experience-dependent models of conceptual representations, which goes back to the description of category-specific semantic disorders for biological and artifact categories. Experience-dependent models suggest that categorical disorders are the by-product of the differential weighting of different sources of knowledge in the representation of biological and artifact categories. These models maintain that semantic disorders are not really category-specific, because they do not respect the boundaries between different categories. They also argue that the brain structures which are disrupted in a given type of category-specific semantic disorder should correspond to the areas of convergence of the sensory-motor information which plays a major role in the construction of that category. Furthermore, they provide a simple interpretation of gender-related categorical effects and are supported by studies assessing the importance of prior experience in the cortical representation of objects. On the other hand, inborn models maintain that category-specific semantic disorders reflect the disruption of innate brain networks, which are shaped by natural selection to allow rapid identification of objects that are very relevant for survival. From the empirical point of view, these models are mainly supported by observations of blind subjects, which suggest that visual experience is not necessary for the emergence of category-specificity in the ventral stream of visual processing. The weight of the data supporting experience-dependent and inborn models is thoroughly discussed, stressing that observations made in blind subjects are still the subject of intense debate. It is concluded that at the present state of knowledge it is not possible to choose between experience-dependent and inborn models of conceptual representations. PMID:25667570

  17. Narrative Constructions for the Organization of Self Experience: Proof of Concept via Embodied Robotics.

    PubMed

    Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F

    2017-01-01

    It has been proposed that starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience-a situation model-and a mechanism that allows information to be extracted from sentences and mapped onto the situation model that has been derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation which allows the system to learn about the intentionality of mental states, and argument substitution which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world-interaction required for representing meaning in a situation model. In this context proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage based effects: that is, that the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. 
Both of these are validated in the experimental literature.

  18. Systematic approach to verification and validation: High explosive burn models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. 
Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equation of state models and material strength models.

  19. What else are psychotherapy trainees learning? A qualitative model of students' personal experiences based on two populations.

    PubMed

    Pascual-Leone, Antonio; Rodriguez-Rubio, Beatriz; Metler, Samantha

    2013-01-01

    After an introductory course in experiential-integrative psychotherapy, 21 graduate students provided personal narratives of their experiences, which were analyzed using the grounded theory method. Results produced 37 hierarchically organized experiences, revealing that students perceived multiple changes in both professional (i.e., skill acquisition and learning related to the therapeutic process) and personal (i.e., self-growth in a more private sphere) domains. Analysis also highlighted key areas of difficulty in training. By adding the personal accounts of graduate trainees, this study enriches and extends Pascual-Leone et al.'s (2012) findings on undergraduates' experiences, raising the number of cases represented in the model to 45. Findings confirm the model of novice trainee experiences while highlighting the unique experiences of undergraduate vs. graduate trainees.

  20. A "Uses and Gratification Expectancy Model" to Predict Students' "Perceived e-Learning Experience"

    ERIC Educational Resources Information Center

    Mondi, Makingu; Woods, Peter; Rafi, Ahmad

    2008-01-01

    This study investigates "how and why" students' "Uses and Gratification Expectancy" (UGE) for e-learning resources influences their "Perceived e-Learning Experience." A "Uses and Gratification Expectancy Model" (UGEM) framework is proposed to predict students' "Perceived e-Learning Experience," and…

  1. Cyber situation awareness: modeling detection of cyber attacks with instance-based learning theory.

    PubMed

    Dutt, Varun; Ahn, Young-Suk; Gonzalez, Cleotilde

    2013-06-01

    To determine the effects of an adversary's behavior on the defender's accurate and timely detection of network threats. Cyber attacks cause major work disruption. It is important to understand how a defender's behavior (experience and tolerance to threats), as well as adversarial behavior (attack strategy), might impact the detection of threats. In this article, we use cognitive modeling to make predictions regarding these factors. Different model types representing a defender, based on Instance-Based Learning Theory (IBLT), faced different adversarial behaviors. A defender's model was defined by experience of threats: threat-prone (90% threats and 10% nonthreats) and nonthreat-prone (10% threats and 90% nonthreats); and different tolerance levels to threats: risk-averse (model declares a cyber attack after perceiving one threat out of eight total) and risk-seeking (model declares a cyber attack after perceiving seven threats out of eight total). Adversarial behavior is simulated by considering different attack strategies: patient (threats occur late) and impatient (threats occur early). For an impatient strategy, risk-averse models with threat-prone experiences show improved detection compared with risk-seeking models with nonthreat-prone experiences; however, the same is not true for a patient strategy. Based upon model predictions, a defender's prior threat experiences and his or her tolerance to threats are likely to predict detection accuracy; but considering the nature of adversarial behavior is also important. Decision-support tools that consider the role of a defender's experience and tolerance to threats along with the nature of adversarial behavior are likely to improve a defender's overall threat detection.
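    The tolerance-threshold and attack-strategy setup described in this abstract can be sketched as a toy simulation. This is only an illustration of the declare-after-k-of-8-threats logic and the patient/impatient sequences, not the authors' IBLT implementation (which involves instance-based memory and blending); all names are ours.

    ```python
    # Toy sketch of the defender tolerance / adversary strategy setup
    # (not the full IBLT cognitive model from the article).

    def simulate(sequence, tolerance):
        """Return the 1-based step at which the defender declares a cyber
        attack, or None if the tolerance threshold is never reached."""
        seen = 0
        for step, is_threat in enumerate(sequence, start=1):
            if is_threat:
                seen += 1
            if seen >= tolerance:
                return step
        return None

    # Impatient adversary: threats occur early in the 8-event sequence.
    impatient = [True] * 7 + [False]
    # Patient adversary: threats occur late.
    patient = [False] + [True] * 7

    # Risk-averse defender (tolerance 1) vs risk-seeking (tolerance 7):
    # against the impatient strategy the risk-averse defender detects at
    # step 1, while the risk-seeking defender waits until step 7.
    ```

    Against the patient strategy both defenders detect later, which mirrors the abstract's point that adversarial behavior, not only defender tolerance, shapes detection timeliness.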

  2. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down; requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output and propose sensitivity analyses using the notion of `modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
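    The likelihood-scaling idea (down-weighting a correlated functional output by an effective sample size rather than modeling the full autocorrelation) can be sketched as follows. The lag-1 ESS formula used here is a generic stand-in, not the paper's actual estimator, and all function names are ours.

    ```python
    import numpy as np

    def effective_sample_size(residuals):
        """Crude effective sample size from the lag-1 autocorrelation of a
        residual trace: n_eff = n * (1 - rho) / (1 + rho)."""
        r = np.asarray(residuals, dtype=float)
        r = r - r.mean()
        rho = np.dot(r[:-1], r[1:]) / np.dot(r, r)
        rho = float(np.clip(rho, 0.0, 0.99))  # ignore negative / near-unit lags
        return len(r) * (1.0 - rho) / (1.0 + rho)

    def scaled_loglik(sim, obs, sigma):
        """Gaussian log-likelihood of a functional (velocity-trace) output,
        down-weighted by n_eff / n so that autocorrelated time samples are
        not counted as independent observations."""
        resid = np.asarray(obs, dtype=float) - np.asarray(sim, dtype=float)
        n = len(resid)
        ll = -0.5 * np.sum((resid / sigma) ** 2) - n * np.log(sigma)
        return (effective_sample_size(resid) / n) * ll
    ```

    For nearly white residuals the scaling factor is close to 1; for a strongly autocorrelated trace it shrinks the likelihood contribution substantially, which is the qualitative behavior the abstract's approach relies on.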

  3. Bounds on quantum collapse models from matter-wave interferometry: calculational details

    NASA Astrophysics Data System (ADS)

    Toroš, Marko; Bassi, Angelo

    2018-03-01

    We present a simple derivation of the interference pattern in matter-wave interferometry predicted by a class of quantum master equations. We apply the obtained formulae to the following collapse models: the Ghirardi-Rimini-Weber (GRW) model, the continuous spontaneous localization (CSL) model together with its dissipative (dCSL) and non-Markovian generalizations (cCSL), the quantum mechanics with universal position localization (QMUPL), and the Diósi-Penrose (DP) model. We discuss the separability of the dynamics of the collapse models along the three spatial directions, the validity of the paraxial approximation, and the amplification mechanism. We obtain analytical expressions both in the far field and near field limits. These results agree with those already derived in the Wigner function formalism. We compare the theoretical predictions with the experimental data from two recent matter-wave experiments: the 2012 far-field experiment of Juffmann T et al (2012 Nat. Nanotechnol. 7 297-300) and the 2013 Kapitza-Dirac-Talbot-Lau (KDTL) near-field experiment of Eibenberger et al (2013 Phys. Chem. Chem. Phys. 15 14696-700). We show the region of the parameter space for each collapse model that is excluded by these experiments. We show that matter-wave experiments provide model-insensitive bounds that are valid for a wide family of dissipative and non-Markovian generalizations.

  4. User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach.

    PubMed

    Nasir, Jauwairia; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

    Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition-related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and retrieved more easily. In order to achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates. Over a period of time, memories undergo a consolidation-like process at a rate proportional to the user preference at the time of encoding and the frequency of recall of a particular memory. Consolidated memories are easier to recall and are more stable. This dual-memory neural model generates distinct episodic memories and a flexible semantic-like memory component. This leads to an enhanced retrieval mechanism of experiences through two routes. The simulation results are presented to evaluate the proposed memory model based on various kinds of cues over a number of trials. The experimental results on Mybot are also presented. The results verify that not only are distinct experiences learned correctly but also that experiences associated with higher user preference and recall frequency are consolidated earlier. Thus, these experiences are recalled more easily relative to the unconsolidated experiences.

  5. Concepts for 18/30 GHz satellite communication system, volume 1A: Appendix

    NASA Technical Reports Server (NTRS)

    Jorasch, R.; Baker, M.; Davies, R.; Cuccia, L.; Mitchell, C.

    1979-01-01

    The following are appended: (1) Propagation phenomena and attenuation models; (2) Models and measurements of rainfall patterns in the U.S.; (3) Millimeter wave propagation experiments; (4) Comparison of theory and experiment; (5) A practical rain attenuation model for CONUS; (6) Space diversity; (7) Values of attenuation for selected U.S. cities; and (8) Additional considerations.

  6. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
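    The optimization loop described above (a simulated annealing search over input values, with a cross-entropy between prediction and observation distributions as the utility) can be sketched on a toy one-dimensional problem. The Gaussian cross-entropy formula is standard; the toy model, bounds, and parameter values are our assumptions, not the paper's.

    ```python
    import math
    import random

    def cross_entropy_gauss(mu_p, s_p, mu_q, s_q):
        """Cross entropy H(p, q) between two Gaussians, in nats."""
        return 0.5 * math.log(2 * math.pi * s_q ** 2) \
            + (s_p ** 2 + (mu_p - mu_q) ** 2) / (2 * s_q ** 2)

    def anneal(utility, x0, lo, hi, steps=5000, t0=1.0, seed=1):
        """Minimise utility(x) over [lo, hi] by simulated annealing."""
        rng = random.Random(seed)
        x = best = x0
        fx = fbest = utility(x0)
        for k in range(steps):
            t = t0 * (1.0 - k / steps) + 1e-9  # linear cooling schedule
            cand = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
            fc = utility(cand)
            # Accept downhill moves always, uphill moves with Boltzmann prob.
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        return best

    # Toy design problem: the model predicts mean x**2 with unit spread;
    # the observation distribution has mean 4.  The input that makes the
    # prediction distribution most similar to the observation distribution
    # minimises the cross entropy (here, x near 2).
    u = lambda x: cross_entropy_gauss(x ** 2, 1.0, 4.0, 1.0)
    x_opt = anneal(u, x0=0.5, lo=0.0, hi=3.0)
    ```

    In the paper's setting the utility would be evaluated against distributions of model prediction and experimental output, and the loop would be repeated with Bayesian updates after each test; this sketch shows only the inner annealing step.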

  7. Revisiting the horizontal redistribution of water in soils: Experiments and numerical modeling.

    PubMed

    Zhuang, L; Hassanizadeh, S M; Kleingeld, P J; van Genuchten, M Th

    2017-09-01

    A series of experiments and related numerical simulations were carried out to study one-dimensional water redistribution processes in an unsaturated soil. A long horizontal Plexiglas box was packed as homogeneously as possible with sand. The sandbox was divided into two sections using a very thin metal plate, with one section initially fully saturated and the other section only partially saturated. Initial saturation in the dry section was set to 0.2, 0.4, or 0.6 in three different experiments. Redistribution between the wet and dry sections started as soon as the metal plate was removed. Changes in water saturation at various locations along the sandbox were measured as a function of time using a dual-energy gamma system. Also, air and water pressures were measured using two different kinds of tensiometers at various locations as a function of time. The saturation discontinuity was found to persist throughout the experiments, while observed water pressures were found to become continuous immediately after the experiments started. Two models, the standard Richards equation and an interfacial area model, were used to simulate the experiments. Both models showed some deviations between the simulated water pressures and the measured data at early times during redistribution. The standard model could only simulate the observed saturation distributions reasonably well for the experiment with the lowest initial water saturation in the dry section. The interfacial area model could reproduce observed saturation distributions of all three experiments, albeit by fitting one of the parameters in the surface area production term.

  8. Accounting for immunoprecipitation efficiencies in the statistical analysis of ChIP-seq data.

    PubMed

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    2013-05-30

    ImmunoPrecipitation (IP) efficiencies may vary greatly between different antibodies and between repeated experiments with the same antibody. These differences have a large impact on the quality of ChIP-seq data: a more efficient experiment will necessarily lead to a higher signal to background ratio, and therefore to an apparent larger number of enriched regions, compared to a less efficient experiment. In this paper, we show how IP efficiencies can be explicitly accounted for in the joint statistical modelling of ChIP-seq data. We fit a latent mixture model to eight experiments on two proteins, from two laboratories where different antibodies are used for the two proteins. We use the model parameters to estimate the efficiencies of individual experiments, and find that these are clearly different for the different laboratories, and amongst technical replicates from the same lab. When we account for ChIP efficiency, we find more regions bound in the more efficient experiments than in the less efficient ones, at the same false discovery rate. A priori knowledge of the same number of binding sites across experiments can also be included in the model for a more robust detection of differentially bound regions among two different proteins. We propose a statistical model for the detection of enriched and differentially bound regions from multiple ChIP-seq data sets. The framework that we present accounts explicitly for IP efficiencies in ChIP-seq data, and makes it possible to model replicates and experiments from different proteins jointly, rather than individually, leading to more robust biological conclusions.
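    The latent-mixture idea can be illustrated with a much-reduced sketch: a two-component Poisson mixture fitted by EM to per-region read counts, where the ratio of the enriched rate to the background rate acts as a crude proxy for signal-to-background (and hence IP efficiency). This is our simplification for illustration only; the paper's model is a joint model over multiple experiments, and all names and parameter values here are ours.

    ```python
    import math

    def poisson_logpmf(k, lam):
        """Log PMF of a Poisson(lam) distribution at integer count k."""
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    def em_two_poisson(counts, lam_bg=1.0, lam_sig=10.0, pi=0.5, iters=200):
        """EM fit of a two-component Poisson mixture (background vs.
        enriched) to per-region read counts.  Returns the two rates and
        the mixing weight; lam_sig / lam_bg is a crude signal-to-background
        proxy."""
        for _ in range(iters):
            # E-step: posterior probability that each region is enriched.
            resp = []
            for k in counts:
                a = math.log(pi) + poisson_logpmf(k, lam_sig)
                b = math.log(1 - pi) + poisson_logpmf(k, lam_bg)
                m = max(a, b)
                pa, pb = math.exp(a - m), math.exp(b - m)
                resp.append(pa / (pa + pb))
            # M-step: update mixing weight and component rates.
            n_sig = sum(resp)
            pi = n_sig / len(counts)
            lam_sig = sum(r * k for r, k in zip(resp, counts)) / n_sig
            lam_bg = sum((1 - r) * k for r, k in zip(resp, counts)) \
                / (len(counts) - n_sig)
        return lam_bg, lam_sig, pi
    ```

    A more efficient experiment would show a larger fitted ratio lam_sig / lam_bg for the same binding sites, which is the qualitative effect the abstract describes.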

  9. Numerical modeling of the simulated gas hydrate production test at Mallik 2L-38 in the pilot scale pressure reservoir LARS - Applying the "foamy oil" model

    NASA Astrophysics Data System (ADS)

    Abendroth, Sven; Thaler, Jan; Klump, Jens; Schicks, Judith; Uddin, Mafiz

    2014-05-01

    In the context of the German joint project SUGAR (Submarine Gas Hydrate Reservoirs: exploration, extraction and transport) we conducted a series of experiments in the LArge Reservoir Simulator (LARS) at the German Research Centre for Geosciences (GFZ) Potsdam. These experiments allow us to investigate the formation and dissociation of hydrates under large-scale laboratory conditions. We performed an experiment similar to the field-test conditions of the production test in the Mallik gas hydrate field (Mallik 2L-38) in the Beaufort Mackenzie Delta of the Canadian Arctic. The aim of this experiment was to study the transport behavior of fluids in gas hydrate reservoirs during depressurization (see also Heeschen et al. and Priegnitz et al., this volume). The experimental results from LARS are used to provide details about processes inside the pressure vessel, to validate the models through history matching, and to feed back into the design of future experiments. In the experiments in LARS, the amount of methane produced from gas hydrates was much lower than expected. Previously published models predict a methane production rate higher than the one observed in experiments and field studies (Uddin et al. 2010; Wright et al. 2011). The authors of the aforementioned studies point out that the current modeling approach overestimates the gas production rate when modeling gas production by depressurization. They suggest that trapping of gas bubbles inside the porous medium is responsible for the reduced gas production rate. They point out that this behavior of multi-phase flow is not well explained by a "residual oil" model, but rather resembles a "foamy oil" model. Our study applies Uddin's (2010) "foamy oil" model and combines it with history matches of our experiments in LARS. Our results indicate a better agreement between experimental and model results when using the "foamy oil" model instead of conventional models of gas flow in water. References: Uddin M., Wright J.F. 
and Coombe D. (2010) - Numerical Study of gas evolution and transport behaviors in natural gas hydrate reservoirs; CSUG/SPE 137439. Wright J.F., Uddin M., Dallimore S.R. and Coombe D. (2011) - Mechanisms of gas evolution and transport in a producing gas hydrate reservoir: an unconventional basis for successful history matching of observed production flow data; International Conference on Gas Hydrates (ICGH 2011).

  10. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appears to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using an example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include fractions of the total surface site density of the two sites and surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinities of the strong and weak sites. Therefore, the experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined. 
The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov Chain Monte Carlo simulation using the DREAM algorithm. This study provides insights to model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
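    The central point (informative data constrain parameters, so the posterior obtained by MCMC is narrower) can be sketched with a toy one-dimensional Metropolis sampler. This is our illustration only: the study used the multi-chain DREAM algorithm on a multi-parameter surface complexation model, whereas the Gaussian "likelihoods" and parameter values below are assumed.

    ```python
    import math
    import random

    def metropolis(loglik, n=20000, step=0.5, x0=0.0, seed=2):
        """Random-walk Metropolis sampler for a single parameter."""
        rng = random.Random(seed)
        x, lx = x0, loglik(x0)
        chain = []
        for _ in range(n):
            cand = x + rng.gauss(0, step)
            lc = loglik(cand)
            # Accept with the usual Metropolis ratio.
            if lc >= lx or rng.random() < math.exp(lc - lx):
                x, lx = cand, lc
            chain.append(x)
        return chain

    # Two toy "experiments" measuring the same parameter (true value 1.0):
    # a low-information one (noisy, sigma = 2) and a high-information one
    # (sigma = 0.2).  The posterior from the informative data is narrower.
    def make_loglik(sigma):
        return lambda th: -0.5 * ((th - 1.0) / sigma) ** 2

    wide = metropolis(make_loglik(2.0))
    narrow = metropolis(make_loglik(0.2))
    ```

    Comparing the spreads of the two chains reproduces, in miniature, the abstract's contrast between the poorly constraining low-concentration experiments and the well constraining high-concentration ones.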

  11. Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments

    NASA Astrophysics Data System (ADS)

    Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker

    2016-08-01

    Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and a (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair show distinct differences, including temperature/salinity properties, volume transport (Q), and large scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments, there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences on the response of the AMOC, hence their correct representations are of vital importance in global ocean and climate modelling.

  12. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  13. Lack of blinding of outcome assessors in animal model experiments implies risk of observer bias.

    PubMed

    Bello, Segun; Krogsbøll, Lasse T; Gruber, Jan; Zhao, Zhizhuang J; Fischer, Doris; Hróbjartsson, Asbjørn

    2014-09-01

To examine the impact of not blinding outcome assessors on estimates of intervention effects in animal experiments modeling human clinical conditions. We searched PubMed, Biosis, Google Scholar, and HighWire Press and included animal model experiments with both blinded and nonblinded outcome assessors. For each experiment, we calculated the ratio of odds ratios (ROR), that is, the odds ratio (OR) from nonblinded assessments relative to the corresponding OR from blinded assessments. We standardized the ORs according to the experimental hypothesis, such that an ROR <1 indicates that nonblinded assessors exaggerated the intervention effect, that is, exaggerated benefit in experiments investigating possible benefit or exaggerated harm in experiments investigating possible harm. We pooled RORs with inverse-variance random-effects meta-analysis. We included 10 experiments (2,450 animals) in the main meta-analysis. Outcomes were subjective in most experiments. The pooled ROR was 0.41 (95% confidence interval [CI], 0.20, 0.82; I² = 75%; P < 0.001), indicating that the nonblinded ORs exaggerated effects by 59% on average. The heterogeneity was quantitative and caused by three pesticide experiments with very large observer bias: their pooled ROR was 0.20 (95% CI, 0.07, 0.59), in contrast to the pooled ROR of 0.82 (95% CI, 0.57, 1.17) in the other seven experiments. Lack of blinding of outcome assessors in animal model experiments with subjective outcomes implies a considerable risk of observer bias. Copyright © 2014 Elsevier Inc. All rights reserved.
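The ROR computation and inverse-variance pooling described in this abstract can be sketched as follows. The 2x2 counts are entirely hypothetical, and a fixed-effect pool is used in place of the paper's random-effects model for brevity.

```python
import math

# Hypothetical per-experiment 2x2 counts under nonblinded and blinded assessment;
# these numbers are illustrative, not taken from the study.
experiments = [
    {"nonblinded": (30, 20, 15, 35), "blinded": (25, 25, 20, 30)},
    {"nonblinded": (40, 10, 20, 30), "blinded": (32, 18, 24, 26)},
    {"nonblinded": (22, 28, 18, 32), "blinded": (20, 30, 19, 31)},
]

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance (Woolf's formula)."""
    return math.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Per-experiment log-ROR = log(OR_nonblinded) - log(OR_blinded);
# the variances add because the two estimates are treated as independent.
logs, variances = [], []
for exp in experiments:
    ln_or_nb, v_nb = log_or_and_var(*exp["nonblinded"])
    ln_or_b, v_b = log_or_and_var(*exp["blinded"])
    logs.append(ln_or_nb - ln_or_b)
    variances.append(v_nb + v_b)

# Fixed-effect inverse-variance pooling; a random-effects pool (as in the paper)
# would add a between-study variance term to each denominator.
weights = [1 / v for v in variances]
pooled_log_ror = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
print(f"pooled ROR = {math.exp(pooled_log_ror):.2f}")
```

Note that the paper standardizes the direction of each OR to the experimental hypothesis before forming the ratio, so an ROR below 1 signals exaggeration; the toy counts here are not standardized that way.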

  14. Prospects for testing Lorentz and CPT symmetry with antiprotons

    NASA Astrophysics Data System (ADS)

    Vargas, Arnaldo J.

    2018-03-01

    A brief overview of the prospects of testing Lorentz and CPT symmetry with antimatter experiments is presented. The models discussed are applicable to atomic spectroscopy experiments, Penning-trap experiments and gravitational tests. Comments about the sensitivity of the most recent antimatter experiments to the models reviewed here are included. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.

  15. Development and application of a 3-D geometry/mass model for LDEF satellite ionizing radiation assessments

    NASA Technical Reports Server (NTRS)

Colborn, B. L.; Armstrong, T. W.

    1993-01-01

    A three-dimensional geometry and mass model of the Long Duration Exposure Facility (LDEF) spacecraft and experiment trays was developed for use in predictions and data interpretation related to ionizing radiation measurements. The modeling approach, level of detail incorporated, example models for specific experiments and radiation dosimeters, and example applications of the model are described.

  16. Stratospheric General Circulation with Chemistry Model (SGCCM)

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.

    1990-01-01

    In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).

  17. A review of active learning approaches to experimental design for uncovering biological networks

    PubMed Central

    2017-01-01

    Various types of biological knowledge describe networks of interactions among elementary entities. For example, transcriptional regulatory networks consist of interactions among proteins and genes. Current knowledge about the exact structure of such networks is highly incomplete, and laboratory experiments that manipulate the entities involved are conducted to test hypotheses about these networks. In recent years, various automated approaches to experiment selection have been proposed. Many of these approaches can be characterized as active machine learning algorithms. Active learning is an iterative process in which a model is learned from data, hypotheses are generated from the model to propose informative experiments, and the experiments yield new data that is used to update the model. This review describes the various models, experiment selection strategies, validation techniques, and successful applications described in the literature; highlights common themes and notable distinctions among methods; and identifies likely directions of future research and open problems in the area. PMID:28570593
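The fit, select, experiment, update loop that this review describes can be sketched generically. The edge probabilities, the uncertainty-sampling selection rule, and the stand-in run_experiment oracle below are all illustrative assumptions, not any specific published method.

```python
# Generic active-learning loop for experiment selection over a candidate
# interaction network. A real system would replace fit_model, most_informative,
# and run_experiment with a network-inference model and actual lab assays.

CANDIDATE_EDGES = [("geneA", "geneB"), ("geneA", "geneC"), ("geneB", "geneC")]
TRUE_EDGES = {("geneA", "geneB")}   # hidden ground truth for the toy oracle

def fit_model(data):
    """Toy 'model': estimate P(edge exists) with Laplace smoothing."""
    probs = {}
    for edge in CANDIDATE_EDGES:
        outcomes = [o for e, o in data if e == edge]
        probs[edge] = (sum(outcomes) + 1) / (len(outcomes) + 2)
    return probs

def most_informative(probs, tested):
    """Uncertainty sampling: pick the untested edge with P closest to 0.5."""
    untested = [e for e in CANDIDATE_EDGES if e not in tested]
    return min(untested, key=lambda e: abs(probs[e] - 0.5))

def run_experiment(edge):
    """Stand-in for a lab experiment: returns 1 if the interaction is real."""
    return 1 if edge in TRUE_EDGES else 0

data, tested = [], set()
for _ in range(3):                  # iterate: fit -> select -> experiment -> update
    probs = fit_model(data)
    edge = most_informative(probs, tested)
    data.append((edge, run_experiment(edge)))
    tested.add(edge)

final = fit_model(data)
print(final)
```

The loop structure (model, hypothesis-driven selection, new data, updated model) is the common skeleton the review identifies; the selection strategies surveyed differ mainly in how `most_informative` is scored.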

  18. Verification of the modified model of the drying process of a polymer liquid film on a flat substrate by experiment (2): through more accurate experiment

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2006-05-01

We have proposed and refined a model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication, and have presented the results at Photomask Japan 2002, 2003, and 2004. Numerical simulation of the model qualitatively reproduces a typical thickness profile of the polymer film formed after drying, namely a profile in which the edge of the film is thicker and the region just inside the edge's bump is thinner. We have also clarified, through analysis of many numerical simulations, how the distribution of polymer molecules on a flat substrate depends on various parameters. We then performed several kinds of experiments to verify the modified model and reported initial results at Photomask Japan 2005. Those initial results partly supported the modified model, but we could not observe the characteristic valley next to the edge's bump of the polymer film after drying, because the shape of the solution film coated on the substrate in that experiment differed from the shape assumed in the modified model and typical of resist coating and drying processes. In this study, we removed this discrepancy and repeated the verification experiments, with the shape of the coated solution film matching the one assumed in the modified model and using molar concentration. As a result, some predictions were verified more strongly, while others need to be examined again. That is, as in the previous experiment, we confirmed that the smaller the average molecular weight of the Metolose was, the larger the gradient of the thickness profile of the polymer thin film was. However, we again could not observe a depression just inside the edge of the thin film in this improved experiment; the use of an aqueous rather than an organic solution may be the cause of the non-formation of the depression.

  19. Diagnostic Analysis of the Three-Dimensional Sulfur Distributions over the Eastern United States Using the CMAQ Model and Measurements from the ICARTT Field Experiment

    EPA Science Inventory

    Previous comparisons of air quality modeling results from various forecast models with aircraft measurements of sulfate aerosol collected during the ICARTT field experiment indicated that models that included detailed treatment of gas- and aqueous-phase atmospheric sulfate format...

  20. Learning Together: The Role of the Online Community in Army Professional Education

    DTIC Science & Technology

    2005-05-26

Kolb, Experiential Learning: Experience as the Source of Learning and Development... Experiential Learning. One model frequently discussed is experiential learning.15 Kolb develops this model through analysis of older models. One of the... observations about the experience. Kolb develops several characteristics of adult learning. Kolb discusses his model of experiential learning

  1. An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.

    ERIC Educational Resources Information Center

    Chen, I-Min A.; Markowitz, Victor M.

    1995-01-01

    Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…

  2. How consistent are precipitation patterns predicted by GCMs in the absence of cloud radiative effects?

    NASA Astrophysics Data System (ADS)

    Popke, Dagmar; Bony, Sandrine; Mauritsen, Thorsten; Stevens, Bjorn

    2015-04-01

Model simulations with state-of-the-art general circulation models reveal strong disagreement in the simulated regional precipitation patterns and their changes with warming. The divergent precipitation responses persist even when the complexity of the model experiment is reduced to aquaplanet simulations with prescribed sea surface temperatures (Stevens and Bony, 2013). To assess feedbacks between clouds and radiation on precipitation responses, we analyze data from five models performing the aquaplanet simulations of the Clouds On Off Klima Intercomparison Experiment (COOKIE), in which the interaction of clouds and radiation is inhibited. Although cloud radiative effects are disabled, the precipitation patterns among models are as diverse as with cloud radiative effects switched on. Disentangling the differing model responses in such simplified experiments thus appears to be key to better understanding the simulated regional precipitation in more standard configurations. By analyzing the local moisture and moist static energy budgets in the COOKIE experiments, we investigate likely causes of the disagreement among models. References: Stevens, B. & Bony, S.: What Are Climate Models Missing?, Science, 2013, 340, 1053-1054.

  3. Medical Student Self-Efficacy with Family-Centered Care during Bedside Rounds

    PubMed Central

    Young, Henry N.; Schumacher, Jayna B.; Moreno, Megan A.; Brown, Roger L.; Sigrest, Ted D.; McIntosh, Gwen K.; Schumacher, Daniel J.; Kelly, Michelle M.; Cox, Elizabeth D.

    2012-01-01

    Purpose Factors that support self-efficacy must be understood in order to foster family-centered care (FCC) during rounds. Based on social cognitive theory, this study examined (1) how 3 supportive experiences (observing role models, having mastery experiences, and receiving feedback) influence self-efficacy with FCC during rounds and (2) whether the influence of these supportive experiences was mediated by self-efficacy with 3 key FCC tasks (relationship building, exchanging information, and decision making). Method Researchers surveyed 184 students during pediatric clerkship rotations during the 2008–2011 academic years. Surveys assessed supportive experiences and students’ self-efficacy with FCC during rounds and with key FCC tasks. Measurement models were constructed via exploratory and confirmatory factor analyses. Composite indicator structural equation (CISE) models evaluated whether supportive experiences influenced self-efficacy with FCC during rounds and whether self-efficacy with key FCC tasks mediated any such influences. Results Researchers obtained surveys from 172 eligible students who were 76% (130) White and 53% (91) female. Observing role models and having mastery experiences supported self-efficacy with FCC during rounds (each p<0.01), while receiving feedback did not. Self-efficacy with two specific FCC tasks, relationship building and decision making (each p < 0.05), mediated the effects of these two supportive experiences on self-efficacy with FCC during rounds. Conclusions Observing role models and having mastery experiences foster students’ self-efficacy with FCC during rounds, operating through self-efficacy with key FCC tasks. Results suggest the importance of helping students gain self-efficacy in key FCC tasks before the rounds experience and helping educators implement supportive experiences during rounds. PMID:22534602

  4. Probing eukaryotic cell mechanics via mesoscopic simulations

    NASA Astrophysics Data System (ADS)

    Pivkin, Igor V.; Lykov, Kirill; Nematbakhsh, Yasaman; Shang, Menglin; Lim, Chwee Teck

    2017-11-01

We developed a new mesoscopic particle-based eukaryotic cell model that takes into account the cell membrane, cytoskeleton, and nucleus. Breast epithelial cells were used in our studies. To estimate the viscoelastic properties of cells and to calibrate the computational model, we performed micropipette aspiration experiments. The model was then validated using data from microfluidic experiments. Using the validated model, we probed the contributions of sub-cellular components to whole-cell mechanics in micropipette aspiration and microfluidics experiments. We believe that the new model will allow numerous problems in cell biomechanics to be studied in silico for flows in complex domains, such as capillary networks and microfluidic devices.

  5. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

The error variance of the process, the prior multivariate normal distributions of the model parameters, and the prior probabilities of each model being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
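A discrete sketch of this selection criterion, assuming two illustrative linear models with known Gaussian noise: each candidate design point is scored by the expected Kullback-Leibler divergence from prior to posterior model probabilities (the expected information gain), and the point where the models' predictions separate most wins. All model coefficients, the noise level, and the candidate points are assumptions made for the example.

```python
import math

# Two competing toy linear models for the mean response at design point x.
models = {"M1": lambda x: 1.0 + 2.0 * x, "M2": lambda x: 0.5 + 2.5 * x}
sigma = 1.0                         # known observation noise
prior = {"M1": 0.5, "M2": 0.5}      # prior model probabilities

def likelihood(y, mean):
    return math.exp(-((y - mean) ** 2) / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def posterior(prior, x, y):
    """Posterior model probabilities after observing y at design point x."""
    unnorm = {m: prior[m] * likelihood(y, f(x)) for m, f in models.items()}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

def expected_kl(prior, x, grid):
    """Expected KL divergence from prior to posterior, averaged over the
    prior-predictive distribution of y on a coarse grid."""
    dy = grid[1] - grid[0]
    total = 0.0
    for y in grid:
        pred = sum(prior[m] * likelihood(y, f(x)) for m, f in models.items())
        post = posterior(prior, x, y)
        kl = sum(post[m] * math.log(post[m] / prior[m]) for m in models if post[m] > 0)
        total += pred * kl * dy
    return total

grid = [i * 0.1 for i in range(-60, 121)]
candidates = [0.0, 1.0, 2.0, 3.0]
best = max(candidates, key=lambda x: expected_kl(prior, x, grid))
print(best)   # the design point where the two models' predictions differ most
```

Here the models' means differ by |0.5 - 0.5x|, so the expected information gain is smallest at x = 1 (where they coincide) and largest at x = 3.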

  6. Using models to guide field experiments: a priori predictions for the CO2 response of a nutrient- and water-limited native Eucalypt woodland.

    PubMed

    Medlyn, Belinda E; De Kauwe, Martin G; Zaehle, Sönke; Walker, Anthony P; Duursma, Remko A; Luus, Kristina; Mishurov, Mikhail; Pak, Bernard; Smith, Benjamin; Wang, Ying-Ping; Yang, Xiaojuan; Crous, Kristine Y; Drake, John E; Gimeno, Teresa E; Macdonald, Catriona A; Norby, Richard J; Power, Sally A; Tjoelker, Mark G; Ellsworth, David S

    2016-08-01

    The response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca ), particularly under nutrient-limited conditions, is a major uncertainty in Earth System models. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements. © 2016 John Wiley & Sons Ltd.

  7. Using models to guide field experiments: a priori predictions for the CO2 response of a nutrient- and water-limited native Eucalypt woodland

    DOE PAGES

    Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke; ...

    2016-05-09

One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.

  8. Using models to guide field experiments: a priori predictions for the CO2 response of a nutrient- and water-limited native Eucalypt woodland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke

One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.

  9. Experience, Reflect, Critique: The End of the "Learning Cycles" Era

    ERIC Educational Resources Information Center

    Seaman, Jayson

    2008-01-01

    According to prevailing models, experiential learning is by definition a stepwise process beginning with direct experience, followed by reflection, followed by learning. It has been argued, however, that stepwise models inadequately explain the holistic learning processes that are central to learning from experience, and that they lack scientific…

  10. Guided-Inquiry Experiments for Physical Chemistry: The POGIL-PCL Model

    ERIC Educational Resources Information Center

    Hunnicutt, Sally S.; Grushow, Alexander; Whitnell, Robert

    2015-01-01

    The POGIL-PCL project implements the principles of process-oriented, guided-inquiry learning (POGIL) in order to improve student learning in the physical chemistry laboratory (PCL) course. The inquiry-based physical chemistry experiments being developed emphasize modeling of chemical phenomena. In each experiment, students work through at least…

  11. Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.

    PubMed

    Şener, Duygu Dede; Oğul, Hasan

    2016-06-01

    Understanding time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods to model the gene behaviour or its networked interactions with the others by a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of query experiment with the findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.

  12. Eurodelta-Trends, a Multi-Model Experiment of Air Quality Hindcast in Europe over 1990-2010. Experiment Design and Key Findings

    NASA Astrophysics Data System (ADS)

    Colette, A.; Ciarelli, G.; Otero, N.; Theobald, M.; Solberg, S.; Andersson, C.; Couvidat, F.; Manders-Groot, A.; Mar, K. A.; Mircea, M.; Pay, M. T.; Raffort, V.; Tsyro, S.; Cuvelier, K.; Adani, M.; Bessagnet, B.; Bergstrom, R.; Briganti, G.; Cappelletti, A.; D'isidoro, M.; Fagerli, H.; Ojha, N.; Roustan, Y.; Vivanco, M. G.

    2017-12-01

The Eurodelta-Trends multi-model chemistry-transport experiment has been designed to better understand the evolution of air pollution and its drivers over the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emission mitigation measures in improving regional-scale air quality. The experiment is designed in three tiers of increasing computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000 and 2010, complemented by sensitivity analyses for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions and (iii) meteorology. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for the corresponding years or constant emissions. Eight chemistry-transport models have contributed calculation results to at least one experiment tier, and six models have completed the 21-year trend simulations. The modelling results are publicly available for further use by the scientific community. We assess the skill of the models in capturing observed air pollution trends for the 1990-2010 time period. The average relative particulate matter trends are well captured by the models, even if they display the usual low bias in reproducing absolute levels. Ozone trends are also well reproduced, though slightly overestimated in the 1990s. The attribution study emphasizes the efficiency of mitigation measures in reducing air pollution over Europe, although a strong impact of long-range transport on ozone trends is pointed out. Meteorological variability is also an important factor in some regions of Europe. The results of the first health and ecosystem impact studies, building upon a regional-scale multi-model ensemble over a 20-year period, will also be presented.

  13. Experiment and simulation for CSI: What are the missing links?

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    Viewgraphs on experiment and simulation for control structure interaction (CSI) are presented. Topics covered include: control structure interaction; typical control/structure interaction system; CSI problem classification; actuator/sensor models; modeling uncertainty; noise models; real-time computations; and discrete versus continuous.

  14. Does trust promote more teamwork? Modeling online game players' teamwork using team experience as a moderator.

    PubMed

    Lee, Chun-Chia; Chang, Jen-Wei

    2013-11-01

The need for teamwork has grown significantly in today's organizations. For online game communities especially, teamwork is an important means of engaging players. This study investigates the impacts of trust on players' teamwork, with affective commitment and normative commitment as mediators, and includes team experience as a moderator to compare different player groups. A model was proposed and tested on data from 296 online game players using structural equation modeling. Findings revealed that team experience moderated the relationship between trust and teamwork: trust promotes more teamwork, through affective commitment, only for players with high team experience, not for those with low experience. Implications of the findings are discussed.

  15. Systems-Oriented Workplace Learning Experiences for Early Learners: Three Models.

    PubMed

    O'Brien, Bridget C; Bachhuber, Melissa R; Teherani, Arianne; Iker, Theresa M; Batt, Joanne; O'Sullivan, Patricia S

    2017-05-01

    Early workplace learning experiences may be effective for learning systems-based practice. This study explores systems-oriented workplace learning experiences (SOWLEs) for early learners to suggest a framework for their development. The authors used a two-phase qualitative case study design. In Phase 1 (spring 2014), they prepared case write-ups based on transcribed interviews from 10 SOWLE leaders at the authors' institution and, through comparative analysis of cases, identified three SOWLE models. In Phase 2 (summer 2014), studying seven 8-week SOWLE pilots, the authors used interview and observational data collected from the seven participating medical students, two pharmacy students, and site leaders to construct case write-ups of each pilot and to verify and elaborate the models. In Model 1, students performed specific patient care activities that addressed a system gap. Some site leaders helped students connect the activities to larger systems problems and potential improvements. In Model 2, students participated in predetermined systems improvement (SI) projects, gaining experience in the improvement process. Site leaders had experience in SI and often had significant roles in the projects. In Model 3, students worked with key stakeholders to develop a project and conduct a small test of change. They experienced most elements of an improvement cycle. Site leaders often had experience with SI and knew how to guide and support students' learning. Each model could offer systems-oriented learning opportunities provided that key elements are in place including site leaders facile in SI concepts and able to guide students in SOWLE activities.

  16. The Utility of Gas Gun Experiments in Developing Equations of State

    NASA Astrophysics Data System (ADS)

    Pittman, Emily; Hagelberg, Carl; Ramsey, Scott

    2016-11-01

Gas gun experiments can probe material properties under well-defined shock conditions, making them a valuable research tool for developing equations of state (EOS) and characterizing material response under shock loading. Gas guns can create shock loading at pressures ranging from MPa to GPa, and a variety of diagnostic techniques can be used to gather data from the experiments; the resulting data are applicable to many fields of study. The focus of this set of experiments is the development of Hugoniot data for the overdriven products EOS of PBX 9501, to extend the data from which current computational EOS models draw. This series of shots was conducted by M-9 using the two-stage gas guns at LANL and aimed to gather data within the 30-120 GPa pressure regime. The experiment was replicated in FLAG, a Lagrangian multiphysics code, using a one-dimensional setup that employs the Wescott-Stewart-Davis (WSD) reactive burn model. Prior to this series, data did not extend into this higher range, so the new data allowed the model to be re-evaluated. A comparison of the simulation results to the experimental data reveals that the model fits the data well below 40 GPa; above this region, however, the model did not fall within the error bars, an indication that the material models or the burn model could be modified to better match the data.
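The reduction of gas-gun shock data to Hugoniot states rests on the Rankine-Hugoniot jump conditions: measuring the shock velocity Us and particle velocity up fixes the pressure and density behind the shock. A minimal sketch, with illustrative coefficients (not fitted PBX 9501 values):

```python
# Rankine-Hugoniot relations used to reduce gas-gun shock data.
# Units: density in g/cm^3, velocities in km/s, so P comes out in GPa.
rho0 = 1.84        # initial density (illustrative)
c0, s = 2.5, 2.0   # illustrative linear Hugoniot fit: Us = c0 + s * up

def hugoniot_state(up):
    us = c0 + s * up              # shock velocity from the linear Us-up fit
    p = rho0 * us * up            # momentum jump: P = rho0 * Us * up
    rho = rho0 * us / (us - up)   # mass conservation across the shock
    return us, p, rho

for up in (1.0, 2.0, 3.0):
    us, p, rho = hugoniot_state(up)
    print(f"up={up} km/s -> Us={us:.2f} km/s, P={p:.1f} GPa, rho={rho:.2f} g/cm^3")
```

With these toy coefficients, particle velocities of a few km/s already land in the tens-of-GPa range probed by two-stage guns, which is why such experiments can extend Hugoniot coverage into the regime the abstract describes.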

  17. Mental models of adherence: parallels in perceptions, values, and expectations in adherence to prescribed home exercise programs and other personal regimens.

    PubMed

    Rizzo, Jon; Bell, Alexandra

    2018-05-09

A mental model is the collection of an individual's perceptions, values, and expectations about a particular aspect of their life, which strongly influences behaviors. This study explored orthopedic outpatients' mental models of adherence to prescribed home exercise programs and how they related to mental models of adherence to other types of personal regimens. The study followed an interpretive description qualitative design. Data were collected via two semi-structured interviews. Interview One focused on participants' prior experiences adhering to personal regimens; Interview Two focused on experiences adhering to their current prescribed home exercise program. Data analysis followed a constant comparative method. Findings revealed similarity in the perceptions, values, and expectations that informed individuals' mental models of adherence to personal regimens and to prescribed home exercise programs. Perceived realized results, expected results, perceived social supports, and the value of convenience characterized mental models of adherence. Parallels between mental models of adherence for prescribed home exercise and other personal regimens suggest that patients' adherence to prescribed routines may be influenced by adherence experiences in other aspects of their lives. By gaining insight into patients' adherence experiences, values, and expectations across life domains, clinicians may tailor supports that enhance home exercise adherence. Implications for Rehabilitation: A mental model is the collection of an individual's perceptions, values, and expectations about a particular aspect of their life, which is based on prior experiences and strongly influences behaviors. This study demonstrated similarity in orthopedic outpatients' mental models of adherence to prescribed home exercise programs and adherence to personal regimens in other aspects of their lives.
Physical therapists should inquire about patients' non-medical adherence experiences, as strategies patients customarily use to adhere to other activities may inform strategies to promote prescribed home exercise adherence.

  18. Prospects for testing Lorentz and CPT symmetry with antiprotons.

    PubMed

    Vargas, Arnaldo J

    2018-03-28

    A brief overview of the prospects of testing Lorentz and CPT symmetry with antimatter experiments is presented. The models discussed are applicable to atomic spectroscopy experiments, Penning-trap experiments and gravitational tests. Comments about the sensitivity of the most recent antimatter experiments to the models reviewed here are included. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'. © 2018 The Author(s).

  19. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute-force search methods are slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
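    The selection criterion the abstract describes can be sketched in a few lines. This is only an illustrative brute-force baseline (the approach the authors improve on), with invented array shapes, random model predictions, and variable names; nested entropy sampling itself would replace the final argmax with a rising-threshold search over a maintained set of experiment samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (all shapes and values invented for illustration):
# each of a set of probable models predicts a probability distribution
# over discrete outcomes for every candidate experiment.
n_models, n_experiments, n_outcomes = 20, 50, 8
predictions = rng.dirichlet(np.ones(n_outcomes), size=(n_models, n_experiments))

# Relevance of an experiment = Shannon entropy of the predicted outcome
# distribution, averaged over the probable models.
outcome_dist = predictions.mean(axis=0)        # shape (n_experiments, n_outcomes)
entropy = -(outcome_dist * np.log(outcome_dist)).sum(axis=1)

best = int(np.argmax(entropy))                 # brute-force selection baseline
```

The brute-force step evaluates every candidate; the paper's contribution is avoiding exactly that full sweep when the experiment space is high-dimensional.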

  20. Negative evaluations of self and others, and peer victimization as mediators of the relationship between childhood adversity and psychotic experiences in adolescence: the moderating role of loneliness.

    PubMed

    Murphy, Siobhan; Murphy, Jamie; Shevlin, Mark

    2015-09-01

    Previous research has identified an association between traumatic experiences and psychotic symptoms. Few studies, however, have explored the underlying mechanisms and contingent nature of these associations in an integrated model. This study aimed to test a moderated mediation model of negative childhood experiences, associated cognitive processes, and psychotic experiences within a context of adolescent loneliness. Cross-sectional survey. A total of 785 Northern Irish secondary school adolescents completed the survey. A moderated mediation model was specified and tested. Childhood experiences of threat and subordination were directly associated with psychotic experiences. Analyses indicated that peer victimization was a mediator of this effect and that loneliness moderated this mediated effect. A new model is proposed to provide an alternative framework for assessing the association between trauma and psychotic experience in adolescence that recognizes loneliness as a significant contextual moderator that can potentially strengthen the trauma-psychosis relationship. Moderated mediation analyses pose an alternative framework for understanding trauma-psychosis associations. Adolescent loneliness is a vulnerability factor within this association. Data are based on a Northern Irish sample with relatively low levels of loneliness. Cross-sectional data cannot explore the developmental course of these experiences in adolescence. © 2015 The British Psychological Society.

  1. Estimates of effects of residual acceleration on USML-1 experiments

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J.

    1995-01-01

    The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus provide a definitive test of the analytical models, were the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects from acceleration disturbances.

  2. Statistical Models for the Analysis and Design of Digital Polymerase Chain Reaction (dPCR) Experiments.

    PubMed

    Dorazio, Robert M; Hunter, Margaret E

    2015-11-03

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary, log-log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model's parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
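    For a single sample, the model class described above reduces to a Poisson/binomial relation that can be computed directly: the complementary log-log link with a log-volume offset is equivalent to P(positive partition) = 1 − exp(−λv). The sketch below is a minimal numpy version of that relation, not the authors' software, and the partition counts and volume are made-up illustrative numbers.

```python
import numpy as np

def dpcr_concentration(k_positive, n_partitions, partition_volume):
    """Single-sample MLE of nucleic-acid concentration from dPCR counts under
    P(positive partition) = 1 - exp(-lam * v), i.e. a binomial response with
    a complementary log-log link and an offset of log(v)."""
    p = k_positive / n_partitions
    lam = -np.log1p(-p) / partition_volume              # MLE of concentration
    # Delta-method standard error propagated from the binomial variance of p.
    se = np.sqrt(p * (1.0 - p) / n_partitions) / ((1.0 - p) * partition_volume)
    return lam, se

# Illustrative numbers only (not from the paper): 620 positive partitions out
# of 20,000, each of volume 0.85 nL -> concentration in copies per nL.
lam, se = dpcr_concentration(620, 20000, 0.85)
```

In the paper's more general setting, covariates enter through the linear predictor of the binomial GLM; this closed form is just the intercept-only special case.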

  3. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

    We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could capture both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  4. Microwave scattering models and basic experiments

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.

    1989-01-01

    Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.

  5. Modeling Enclosure Design in Above-Grade Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lstiburek, J.; Ueno, K.; Musunuru, S.

    2016-03-01

    Building Science Corporation modeled typically well-performing wall assemblies using Wärme und Feuchte instationär (WUFI) Version 5.3 software and demonstrated that these models agree with historic experience when calibrated and modeled correctly. This technical report provides a library of WUFI modeling input data and results. Within the limits of existing experience, this information can be generalized for applications to a broad population of houses.

  6. Interim Service ISDN Satellite (ISIS) network model for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.; Hager, E. Paul

    1991-01-01

    The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Network Model for Advanced Satellite Designs and Experiments describes a model suitable for discrete event simulations. A top-down model design uses the Advanced Communications Technology Satellite (ACTS) as its basis. The ISDN modeling abstractions are added to permit the determination of performance for the NASA Satellite Communications Research (SCAR) Program.

  7. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, S. C.; Rojas, R.; Perevalov, A.; Lathrop, D. P.

    2017-12-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high-velocity flows with Reynolds numbers of order 10^8. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere [Schaeffer, N. G3 (2013)]. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared with the experimental results and used to forecast magnetic field changes. Data assimilation makes it possible to get estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain in experiments, or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.
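    As a rough illustration of what assimilating sparse measurements buys in this kind of semi-numerical setup, the toy sketch below nudges an imperfect forecast model toward occasional noisy observations of a "true" state. All dynamics, gains, and values are invented for the example and bear no relation to the actual spherical Couette code or data.

```python
import numpy as np

# Toy nudging-style data assimilation (illustrative only): a forecast model
# with a biased decay parameter is periodically pulled toward noisy
# observations of the true state, while an identical free-running model is not.
rng = np.random.default_rng(2)

true_decay, model_decay = 0.95, 0.90   # truth vs imperfect forecast model
gain, obs_every = 0.8, 5               # nudging gain, observation interval

x_true = x_assim = x_free = 1.0
err_assim, err_free = [], []
for step in range(1, 201):
    x_true *= true_decay
    x_assim *= model_decay             # forecast step
    x_free *= model_decay              # same biased model, never corrected
    if step % obs_every == 0:
        obs = x_true + rng.normal(0.0, 0.001)
        x_assim += gain * (obs - x_assim)   # analysis (nudging) step
    err_assim.append(abs(x_assim - x_true))
    err_free.append(abs(x_free - x_true))
```

The assimilated run tracks the truth far more closely than the free run, which is the basic mechanism that lets the semi-numerical model estimate internal motions that cannot be measured directly.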

  8. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, Sarah; Rojas, Ruben; Perevalov, Artur; Lathrop, Daniel; Ide, Kayo; Schaeffer, Nathanael

    2017-11-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high-velocity flows with Reynolds numbers of order 10^8. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared with the experimental results and used to forecast magnetic field changes. Data assimilation makes it possible to get estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain in experiments, or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.

  9. Buying and Selling Prices of Investments: Configural Weight Model of Interactions Predicts Violations of Joint Independence.

    PubMed

    Birnbaum; Zimmermann

    1998-05-01

    Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. The effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of other sources. There was also a configural effect in which the effect of a source's estimate was affected by the rank order of that source's estimate, in relation to other estimates of the same investment. These interactions were fit with a configural weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
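    A minimal sketch of the rank-dependent weighting idea follows. The expertise values, estimates, and rank weights are invented for illustration (they are not the fitted values from the study); the point is only that weighting low-ranked estimates more heavily in the buyer's viewpoint and high-ranked estimates more heavily in the seller's viewpoint produces a buying price below the selling price.

```python
def configural_average(estimates, expertise, rank_weights):
    """Configural-weight average: each source's weight is its expertise
    scaled by a weight that depends on the rank of its estimate."""
    order = sorted(range(len(estimates)), key=lambda i: estimates[i])
    w = [0.0] * len(estimates)
    for rank, i in enumerate(order):        # rank 0 = lowest estimate
        w[i] = expertise[i] * rank_weights[rank]
    total = sum(w)
    return sum(wi * e for wi, e in zip(w, estimates)) / total

estimates = [400, 500, 600]    # three advisors' estimates of future value
expertise = [1.0, 2.0, 2.0]    # weights proportional to advisor expertise

# Buyers overweight low-ranked estimates; sellers overweight high-ranked ones.
buyer_price = configural_average(estimates, expertise, [2.0, 1.0, 0.5])
seller_price = configural_average(estimates, expertise, [0.5, 1.0, 2.0])
```

Because the weights shift with rank rather than staying fixed per source, the model can produce the viewpoint-dependent violations of joint independence the abstract describes.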

  10. Stratification established by peeling detrainment from gravity currents: laboratory experiments and models

    NASA Astrophysics Data System (ADS)

    Hogg, Charlie; Dalziel, Stuart; Huppert, Herbert; Imberger, Jorg; Department of Applied Mathematics; Theoretical Physics Team; CentreWater Research Team

    2014-11-01

    Dense gravity currents feed fluid into confined basins in lakes, the oceans and many industrial applications. Existing models of the circulation and mixing in such basins are often based on the currents entraining ambient fluid. However, recent observations have suggested that uni-directional entrainment into a gravity current does not fully describe the mixing in such currents. Laboratory experiments were carried out which visualised peeling detrainment from the gravity current occurring when the ambient fluid was stratified. A theoretical model of the observed peeling detrainment was developed to predict the stratification in the basin. This new model gives a better approximation of the stratification observed in the experiments than the pre-existing entraining model. The model can now be developed such that it integrates into operational models of lakes.

  11. ISMIP6: Ice Sheet Model Intercomparison Project for CMIP6

    NASA Technical Reports Server (NTRS)

    Nowicki, S.

    2015-01-01

    ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6) targets the Cryosphere in a Changing Climate and the Future Sea Level Grand Challenges of the WCRP (World Climate Research Program). Its primary goal is to provide the future sea level contribution from the Greenland and Antarctic ice sheets, along with the associated uncertainty. A secondary goal is to investigate feedbacks due to dynamic ice sheet models. The experiment design uses and augments the existing CMIP6 (Coupled Model Intercomparison Project Phase 6) DECK (Diagnosis, Evaluation, and Characterization of Klima) experiments. Additional MIP (Model Intercomparison Project)-specific experiments will be designed for ISMs (Ice Sheet Models). The effort builds on the Ice2sea, SeaRISE (Sea-level Response to Ice Sheet Evolution) and COMBINE (Comprehensive Modelling of the Earth System for Better Climate Prediction and Projection) efforts.

  12. Evaluation of a genome-scale in silico metabolic model for Geobacter metallireducens by using proteomic data from a field biostimulation experiment.

    PubMed

    Fang, Yilin; Wilkins, Michael J; Yabusaki, Steven B; Lipton, Mary S; Long, Philip E

    2012-12-01

    Accurately predicting the interactions between microbial metabolism and the physical subsurface environment is necessary to enhance subsurface energy development, soil and groundwater cleanup, and carbon management. This study was an initial attempt to confirm the metabolic functional roles within an in silico model using environmental proteomic data collected during field experiments. Shotgun global proteomics data collected during a subsurface biostimulation experiment were used to validate a genome-scale metabolic model of Geobacter metallireducens-specifically, the ability of the metabolic model to predict metal reduction, biomass yield, and growth rate under dynamic field conditions. The constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low abundances of proteins associated with amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment.

  13. Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Langenhorst, A. R.; Balaji, V.; Yakovlev, A.

    2010-12-01

    A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow: ● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override. ● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments are verified with new code releases. ● FRE is flexible enough to support short runs with mere megabytes of data, to high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally in the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments. ● Experiments are entered into a Curator Database, which stores query-able metadata about the experiment and the experiment's output. 
● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to preparing graphical output for a scientific audience and publishing data on a public portal. ● Recent FRE development includes incorporating a distributed workflow to support remote computing.
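    The experiment-inheritance feature described above can be illustrated with a small resolver. FRE's actual XML schema and tag names are not reproduced here; the experiment names and parameters below are hypothetical, and only the override-the-parent semantics is the point.

```python
# Toy experiment registry (names and parameters invented for illustration):
# a child experiment references its parent and lists only the overrides.
experiments = {
    "control": {"parent": None,
                "params": {"resolution": "c48", "years": 10, "co2": 280}},
    "co2_doubling": {"parent": "control",
                     "params": {"co2": 560}},
}

def resolve(name):
    """Walk the inheritance chain; child values override the parent's."""
    exp = experiments[name]
    base = resolve(exp["parent"])["params"].copy() if exp["parent"] else {}
    base.update(exp["params"])
    return {"name": name, "params": base}

resolved = resolve("co2_doubling")
```

The child inherits `resolution` and `years` from its parent and overrides only `co2`, which is what makes perturbation experiments cheap to define and keeps them comparable to the parent run.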

  14. Sociality influences cultural complexity.

    PubMed

    Muthukrishna, Michael; Shulman, Ben W; Vasilescu, Vlad; Henrich, Joseph

    2014-01-07

    Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of populations can contribute to the complexity of its cultural repertoire. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models, integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution.
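    The one-versus-five-models design can be mimicked with a toy transmission-chain simulation. The skill dynamics, noise parameters, and population sizes below are invented for illustration (they are not the paper's data); the sketch only shows why access to more demonstrators supports skill accumulation while lossy copying from a single demonstrator does not.

```python
import random

random.seed(1)

def transmit(n_models, generations=10, learners_per_gen=5):
    """Toy transmission chain: each learner copies the best of n_models
    demonstrators from the previous generation, with noisy, on-average
    lossy copying. Returns the final generation's mean skill."""
    skills = [1.0] * learners_per_gen          # naive starting generation
    for _ in range(generations):
        new = []
        for _ in range(learners_per_gen):
            demonstrators = random.sample(skills, n_models)
            target = max(demonstrators)
            # Copying loses a little on average but occasionally improves.
            new.append(max(0.0, target + random.gauss(-0.05, 0.3)))
        skills = new
    return sum(skills) / len(skills)

one = transmit(1)
five = transmit(5)
```

With five demonstrators, learners can select and recombine from the best available performance each generation, so skill ratchets upward; with one demonstrator, lossy copying dominates and skill drifts down, mirroring the paper's qualitative prediction.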

  15. Sociality influences cultural complexity

    PubMed Central

    Muthukrishna, Michael; Shulman, Ben W.; Vasilescu, Vlad; Henrich, Joseph

    2014-01-01

    Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of populations can contribute to the complexity of its cultural repertoire. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models, integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution. PMID:24225461

  16. The structural invariance of the Temporal Experience of Pleasure Scale across time and culture.

    PubMed

    Li, Zhi; Shi, Hai-Song; Elis, Ori; Yang, Zhuo-Ya; Wang, Ya; Lui, Simon S Y; Cheung, Eric F C; Kring, Ann M; Chan, Raymond C K

    2018-06-01

    The Temporal Experience of Pleasure Scale (TEPS) is a self-report instrument that assesses pleasure experience. Initial scale development and validation in the United States yielded a two-factor solution comprising anticipatory and consummatory pleasure. However, a four-factor model that further parsed anticipatory and consummatory pleasure experience into abstract and contextual components was a better model fit in China. In this study, we tested both models using confirmatory factor analysis in an American and a Chinese sample and examined the configural measurement invariance of both models across culture. We also examined the temporal stability of the four-factor model in the Chinese sample. The results indicated that the four-factor model of the TEPS was a better fit than the two-factor model in the Chinese sample. In contrast, both models fit the American sample, which also included many Asian American participants. The four-factor model fit both the Asian American and Chinese samples equally well. Finally, the four-factor model demonstrated good measurement and structural invariance across culture and time, suggesting that this model may be applicable in both cross-cultural and longitudinal studies. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  17. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self-pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test, the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface, which in turn increases the pressure within the tank during the 5360-second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self-pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test, where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  18. A quantum probability account of order effects in inference.

    PubMed

    Trueblood, Jennifer S; Busemeyer, Jerome R

    2011-01-01

    Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account for order effects that was not possible before. Copyright © 2011 Cognitive Science Society, Inc.
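    The core mechanism, non-commuting evidence operators producing order-dependent probabilities, can be shown in a two-dimensional toy example. The projectors and rotation angle below are arbitrary illustrations, not the paper's fitted model.

```python
import numpy as np

# In quantum probability, taking evidence into account projects the belief
# state; when two projectors do not commute, the final probability depends
# on the order in which the evidence is presented.
psi = np.array([1.0, 0.0])                 # initial belief state

P_a = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector for evidence A
theta = np.pi / 5                          # basis rotated for evidence B
u = np.array([np.cos(theta), np.sin(theta)])
P_b = np.outer(u, u)                       # projector for evidence B

def prob_after(order, psi):
    """Probability of the event sequence: squared norm of the state after
    applying the projectors in the given order (unnormalized Lüders rule)."""
    v = psi.copy()
    for P in order:
        v = P @ v
    return float(v @ v)

p_ab = prob_after([P_a, P_b], psi)
p_ba = prob_after([P_b, P_a], psi)         # differs from p_ab: an order effect
```

A classical Bayesian update with commuting likelihoods would give the same answer for both orders, which is why order effects are awkward for classical models and natural in this framework.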

  19. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team are able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.

  20. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  1. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single-molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single-molecule data should be self-explanatory. However, data originating from single-molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single-molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of single-molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a preconceived model.

  2. NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment

    DTIC Science & Technology

    1990-08-31

    papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ...beam stability and channel evolution for an entire five pulse burst. The second paper describes a new air chemistry model used in the SARLAC...Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with

  3. Ensemble and Bias-Correction Techniques for Air-Quality Model Forecasts of Surface O3 and PM2.5 during the TEXAQS-II Experiment of 2006

    EPA Science Inventory

    Several air quality forecasting ensembles were created from seven models, running in real-time during the 2006 Texas Air Quality (TEXAQS-II) experiment. These multi-model ensembles incorporated a diverse set of meteorological models, chemical mechanisms, and emission inventories...

  4. Coupling Longitudinal Data and Multilevel Modeling to Examine the Antecedents and Consequences of Jealousy Experiences in Romantic Relationships: A Test of the Relational Turbulence Model

    ERIC Educational Resources Information Center

    Theiss, Jennifer A.; Solomon, Denise Haunani

    2006-01-01

    We used longitudinal data and multilevel modeling to examine how intimacy, relational uncertainty, and failed attempts at interdependence influence emotional, cognitive, and communicative responses to romantic jealousy, and how those experiences shape subsequent relationship characteristics. The relational turbulence model (Solomon & Knobloch,…

  5. Relationship between a solar drying model of red pepper and the kinetics of pure water evaporation (2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passamai, V.; Saravia, L.

    1997-05-01

    In part one, a simple drying model of red pepper related to water evaporation was developed. In this second part, the drying model is applied and validated by means of related experiments; both laboratory and open-air drying experiments were carried out, and simulation results are presented.

  6. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high-throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high-throughput computing for future physics experiments worldwide.

  7. Integrating Conceptual Knowledge Within and Across Representational Modalities

    PubMed Central

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-01-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual feature verification task. The pattern of decision latencies across Experiments 1 to 4 is consistent with a deep integration hierarchy. PMID:21093853

  8. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  9. The early universe as a probe of new physics

    NASA Astrophysics Data System (ADS)

    Bird, Christopher Shane

    The Standard Model of Particle Physics has been verified to unprecedented precision in the last few decades. However there are still phenomena in nature which cannot be explained, and as such new theories will be required. Since terrestrial experiments are limited in both the energy and precision that can be probed, new methods are required to search for signs of physics beyond the Standard Model. In this dissertation, I demonstrate how these theories can be probed by searching for remnants of their effects in the early Universe. In particular I focus on three possible extensions of the Standard Model: the addition of massive neutral particles as dark matter, the addition of charged massive particles, and the existence of higher dimensions. For each new model, I review the existing experimental bounds and the potential for discovering new physics in the next generation of experiments. For dark matter, I introduce six simple models which I have developed, and which involve a minimum amount of new physics, as well as reviewing one existing model of dark matter. For each model I calculate the latest constraints from astrophysics experiments, nuclear recoil experiments, and collider experiments. I also provide motivations for studying sub-GeV mass dark matter, and propose the possibility of searching for light WIMPs in the decay of B-mesons and other heavy particles. For charged massive relics, I introduce and review the recently proposed model of catalyzed Big Bang nucleosynthesis. In particular I review the production of 6Li by this mechanism, and calculate the abundance of 7Li after destruction of 7Be by charged relics. The result is that for certain natural relics CBBN is capable of removing tensions between the predicted and observed 6Li and 7Li abundances which are present in the standard model of BBN. For extra dimensions, I review the constraints on the ADD model from both astrophysics and collider experiments. 
I then calculate the constraints on this model from Big Bang nucleosynthesis in the early Universe. I also calculate the bounds on this model from Kaluza-Klein gravitons trapped in the galaxy which decay to electron-positron pairs, using the measured 511 keV gamma-ray flux. For each example of new physics, I find that remnants of the early Universe provide constraints on the models which are complementary to the existing constraints from colliders and other terrestrial experiments.

  10. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. 
We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, demonstrating its ability to substantially accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise: both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  12. Perceptual Tests of an Algorithm for Musical Key-Finding

    ERIC Educational Resources Information Center

    Schmuckler, Mark A.; Tomovski, Robert

    2005-01-01

    Perceiving the tonality of a musical passage is a fundamental aspect of the experience of hearing music. Models for determining tonality have thus occupied a central place in music cognition research. Three experiments investigated one well-known model of tonal determination: the Krumhansl-Schmuckler key-finding algorithm. In Experiment 1,…

  13. Naturalness, privacy, and restorative experiences in wilderness: An integrative model

    Treesearch

    William E. Hammitt

    2012-01-01

    It is suggested that the wilderness experience is a restorative experience that results from the interconnectivity between naturalness/remoteness and privacy/unconfinement and the four components essential for an environment to be restorative. A model-framework is offered to illustrate the linkages among the environmental, social, and restoration components of...

  14. Nuclear Experiments You Can Do...from Edison.

    ERIC Educational Resources Information Center

    Benrey, Ronald M.

    This booklet discusses some of the basic facts about nuclear energy and provides eight experiments related to these facts. The experiments (which include lists of materials needed and procedures used) involve: (1) an oil-drop model of a splitting atom; (2) a domino model of a chain reaction; (3) observing radioactivity with an electroscope; (4)…

  15. Feasibility study of ITS model experiment plan in Japan : Vehicle, Road and Traffic Intelligent Society (VERTIS)

    DOT National Transportation Integrated Search

    2000-11-09

    In 1997, the 5 ITS ministries and the agency committee laid out the ITS model experiment plan as a strategy for the Comprehensive ITS Plan, and decided to conduct a feasibility study (FS) of the experiment plan in the 3 years from 1997 to 1999. : The...

  16. The Impact of Sexual Orientation on Women's Midlife Experience: A Transition Model Approach

    ERIC Educational Resources Information Center

    Boyer, Carol Anderson

    2007-01-01

    Sexual orientation is an integral part of identity affecting every stage of an individual's development. This literature review examines women's cultural experiences based on sexual orientation and their effect on midlife experience. A developmental model is offered that incorporates sexual orientation as a contextual factor in this developmental…

  17. Modeling Valuations from Experience: A Comment on Ashby and Rakow (2014)

    ERIC Educational Resources Information Center

    Wulff, Dirk U.; Pachur, Thorsten

    2016-01-01

    What are the cognitive mechanisms underlying subjective valuations formed on the basis of sequential experiences of an option's possible outcomes? Ashby and Rakow (2014) have proposed a sliding window model (SWIM), according to which people's valuations represent the average of a limited sample of recent experiences (the size of which is estimated…

  18. Career Development of Women in Academia: Traversing the Leaky Pipeline

    ERIC Educational Resources Information Center

    Gasser, Courtney E.; Shaffer, Katharine S.

    2014-01-01

    Women's experiences in academia are laden with a fundamental set of issues pertaining to gender inequalities. A model reflecting women's career development and experiences around their academic pipeline (or career in academia) is presented. This model further conveys a new perspective on the experiences of women academicians before, during and…

  19. Inquiry Based-Computational Experiment, Acquisition of Threshold Concepts and Argumentation in Science and Mathematics Education

    ERIC Educational Resources Information Center

    Psycharis, Sarantos

    2016-01-01

    Computational experiment approach considers models as the fundamental instructional units of Inquiry Based Science and Mathematics Education (IBSE) and STEM Education, where the model take the place of the "classical" experimental set-up and simulation replaces the experiment. Argumentation in IBSE and STEM education is related to the…

  20. Self-expansion and flow in couples' momentary experiences: an experience sampling study.

    PubMed

    Graham, James M

    2008-09-01

    The self-expansion model of close relationships posits that when couples engage in exciting and activating conjoint activities, they feel connected with their partners and more satisfied with their relationships. In the present study, the experience sampling method was used to examine the predictions of the self-expansion model in couples' momentary experiences. In addition, the author generated several new hypotheses by integrating the self-expansion model with existing research on flow. Over the course of 1 week, 20 couples were signaled at quasi-random intervals to provide data on 1,265 unique experiences. The results suggest that the level of activation experienced during an activity was positively related to experience-level relationship quality. This relationship was consistent across free-time and nonfree-time contexts and was mediated by positive affect. Activation was not found to predict later affect unless the level of activation exceeded what was typical for the individual. Also examined was the influence of interpersonal context and activity type on self-expansion. The results support the self-expansion model and suggest that it could be considered under the broader umbrella of flow.

  1. Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji

    An implantable WBAN path-loss model for capsule endoscopy, which is used to examine digestive organs, is developed through simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation using a numerical human model. Second, we performed FDTD simulations on a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we propose the gradient and intercept parameters of a simple in-body path-loss propagation model.
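    The gradient-and-intercept parameterization described in this record is the standard log-distance path-loss form. A minimal sketch follows; the parameter values are purely illustrative placeholders, not the ones the authors derived:

    ```python
    import math

    def path_loss_db(distance, intercept_db, gradient):
        """Log-distance path-loss model: PL(d) = PL0 + 10 * n * log10(d / d0),
        with reference distance d0 = 1, intercept PL0 (dB), and gradient
        (path-loss exponent) n."""
        return intercept_db + 10.0 * gradient * math.log10(distance)

    # Illustrative values only: in-body channels are usually reported with a
    # much steeper gradient than the free-space exponent n = 2.
    print(path_loss_db(10.0, 30.0, 4.0))  # → 70.0
    ```

    Fitting a line to measured loss-versus-log-distance data yields exactly these two parameters, which is why such models are summarized by a gradient and an intercept.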

  2. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    PubMed

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

    The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model such that the interspecific interactions are more general than purely competitive. We estimated model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the polycultures experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified which are the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
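    A generalized Lotka-Volterra community of the kind this record describes can be sketched with a simple Euler integration; the growth rates, carrying capacities, and interaction coefficients below are illustrative placeholders, not the values estimated from the Iowa experiment:

    ```python
    def glv_step(biomass, r, K, A, dt):
        """One Euler step of a generalized Lotka-Volterra model:
        dB_i/dt = r_i * B_i * (1 - sum_j(a_ij * B_j) / K_i),
        where a_ii = 1 and off-diagonal a_ij may take either sign
        (competition, amensalism, or parasitism)."""
        n = len(biomass)
        return [
            b + dt * r[i] * b * (1.0 - sum(A[i][j] * biomass[j] for j in range(n)) / K[i])
            for i, b in enumerate(biomass)
        ]

    def simulate(biomass, r, K, A, dt=0.01, steps=5000):
        """Integrate the community forward and return final biomasses."""
        for _ in range(steps):
            biomass = glv_step(biomass, r, K, A, dt)
        return biomass

    # A monoculture (interaction matrix reduces to a_11 = 1) approaches its
    # carrying capacity, which is how the K_i can be read off monoculture data.
    final = simulate([0.1], r=[1.0], K=[10.0], A=[[1.0]])
    ```

    With parameters estimated from monocultures (K) and bicultures (A), running `simulate` for three or more species gives the model's prediction for unsown mixtures, which is the comparison the authors perform.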

  3. Determinants of conflict detection: a model of risk judgments in air traffic control.

    PubMed

    Stankovic, Stéphanie; Raufaste, Eric; Averty, Philippe

    2008-02-01

    A model of conflict judgments in air traffic control (ATC) is proposed. Three horizontal distances determine risk judgments about conflict between two aircraft: (a) Dt(o) is the distance between the crossing of the aircraft trajectories and the first aircraft to reach that point; (b) Dt(h) is the distance between the two aircraft when they are horizontally closest; and (c) Dt(v) is the horizontal distance between the two aircraft when their growing vertical distance reaches 1000 feet. Two experiments tested whether the variables in the model reflect what controllers do. In Experiment 1, 125 certified controllers provided risk judgments about situations in which the model variables were manipulated. Experiment 2 investigated the relationship between the model and expertise by comparing a population of certified controllers with a population of ATC students. Across both experiments, the model accounted for 44% to 50% of the variance in risk judgments by certified controllers (N=161) but only 20% in judgments by ATC students (N=88). There were major individual differences in the predictive power of the model as well as in the contributions of the three variables. In Experiment 2, the model described experts better than novices. The model provided a satisfying account of the data, albeit with substantial individual differences. It is argued that an individual-differences approach is required when investigating the strategies involved in conflict judgment in ATC. These findings should have implications for developing user-friendly interfaces with conflict detection devices and for devising ATC training programs.
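    The three-distance account above amounts to a linear model of judged risk. A hedged sketch follows; the weights are hypothetical (the paper reports variance explained, not coefficient values):

    ```python
    def risk_judgment(d_to, d_th, d_tv, weights=(100.0, 1.5, 2.0, 1.0)):
        """Hypothetical linear model of conflict risk judgment:
        risk = w0 - w_to*Dt(o) - w_th*Dt(h) - w_tv*Dt(v), clamped to [0, 100].
        Judged risk falls as any of the three separation distances grows."""
        w0, w_to, w_th, w_tv = weights
        return max(0.0, min(100.0, w0 - w_to * d_to - w_th * d_th - w_tv * d_tv))
    ```

    The individual differences the study reports would correspond, in this framing, to each controller weighting the three distances differently, which is why fitting one set of weights per participant rather than one global set is the natural analysis.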

  4. The landscape model: A model for exploring trade-offs between agricultural production and the environment.

    PubMed

    Coleman, Kevin; Muhammed, Shibu E; Milne, Alice E; Todman, Lindsay C; Dailey, A Gordon; Glendining, Margaret J; Whitmore, Andrew P

    2017-12-31

    We describe a model framework that simulates spatial and temporal interactions in agricultural landscapes and that can be used to explore trade-offs between production and environment, thereby helping to determine solutions to the problems of sustainable food production. Here we focus on models of agricultural production, water movement and nutrient flow in a landscape. We validate these models against data from two long-term experiments (the first a continuous wheat experiment and the other a permanent grassland experiment) and an experiment where water and nutrient flow are measured from isolated catchments. The model simulated wheat yield (RMSE 20.3-28.6%), grain N (RMSE 21.3-42.5%) and P (RMSE 20.2-29%, excluding the nil-N plots), and total soil organic carbon particularly well (RMSE 3.1-13.8%); the simulations of water flow were also reasonable (RMSE 180.36 and 226.02%). We illustrate the use of our model framework to explore trade-offs between production and nutrient losses. Copyright © 2017 Rothamsted Research. Published by Elsevier B.V. All rights reserved.
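    The RMSE figures quoted above are given as percentages, i.e. normalized by the mean of the observations; a minimal sketch of that metric (my reading of the convention, not code from the paper):

    ```python
    import math

    def relative_rmse(observed, simulated):
        """Root-mean-square error as a percentage of the observed mean;
        values above 100% mean the typical error exceeds the mean itself,
        as in the water-flow results quoted above."""
        n = len(observed)
        mse = sum((s - o) ** 2 for o, s in zip(observed, simulated)) / n
        return 100.0 * math.sqrt(mse) / (sum(observed) / n)
    ```

    Normalizing by the observed mean makes RMSE comparable across variables with very different magnitudes (yield in t/ha versus soil carbon in t C/ha), which is presumably why the paper reports it this way.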

  5. International Collaboration on Spent Fuel Disposition in Crystalline Media: FY17 Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yifeng; Hadgu, Teklu; Kainina, Elena

    Active participation in international R&D is crucial for achieving the Spent Fuel Waste Science & Technology (SFWST) long-term goals of conducting “experiments to fill data needs and confirm advanced modeling approaches” and of having a “robust modeling and experimental basis for evaluation of multiple disposal system options” (by 2020). DOE’s Office of Nuclear Energy (NE) has developed a strategic plan to advance cooperation with international partners. The international collaboration on the evaluation of crystalline disposal media at Sandia National Laboratories (SNL) in FY17 focused on the collaboration through the Development of Coupled Models and their Validation against Experiments (DECOVALEX-2019) project. The DECOVALEX project is an international research and model comparison collaboration, initiated in 1992, for advancing the understanding and modeling of coupled thermo-hydro-mechanical-chemical (THMC) processes in geological systems. SNL has been participating in three tasks of the DECOVALEX project: Task A, modeling gas injection experiments (ENGINEER); Task C, modeling the groundwater recovery experiment in tunnel (GREET); and Task F, fluid inclusion and movement in the tight rock (FINITO).

  6. A three-dimensional finite element model of near-field scanning microwave microscopy

    NASA Astrophysics Data System (ADS)

    Balusek, Curtis; Friedman, Barry; Luna, Darwin; Oetiker, Brian; Babajanyan, Arsen; Lee, Kiejin

    2012-10-01

    A three-dimensional finite element model of an experimental near-field scanning microwave microscope (NSMM) has been developed and compared to experiment on nonconducting samples. The microwave reflection coefficient S11 is calculated as a function of frequency with no adjustable parameters. There is qualitative agreement with experiment in that the resonant frequency can show a sizable increase with sample dielectric constant, a result that is not obtained with a two-dimensional model. The most realistic model shows semi-quantitative agreement with experiment. The effect of different sample thicknesses and varying tip-sample distances is investigated numerically and shown to affect NSMM performance in a way consistent with experiment. Visualization of the electric field indicates that the field is primarily determined by the shape of the coupling hooks.

  7. Moisture balance over the Iberian Peninsula computed using a high resolution regional climate model. The impact of 3DVAR data assimilation.

    NASA Astrophysics Data System (ADS)

    González-Rojí, Santos J.; Sáenz, Jon; Ibarra-Berastegi, Gabriel

    2016-04-01

    A numerical downscaling exercise over the Iberian Peninsula has been run nesting the WRF model inside ERA Interim. The Iberian Peninsula has been covered by a 15 km x 15 km grid with 51 vertical levels. Two model configurations have been tested in two experiments spanning the period 2010-2014 after a one-year spin-up (2009). In both cases, the model uses high-resolution daily-varying SST fields and the Noah land surface model. In the first experiment (N), after the model is initialised, boundary conditions drive the model, as usual in numerical downscaling experiments. The second experiment (D) is configured the same way as the N case, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) using observations obtained from the PREPBUFR dataset (NCEP ADP Global Upper Air and Surface Weather Observations) within a 120-minute window around analysis times. For the data assimilation experiment (D), seasonally (monthly) varying background error covariance matrices have been prepared according to the parameterisations used and the mesoscale model domain. For both N and D runs, the moisture balance of the model runs has been evaluated over the Iberian Peninsula, both internally according to the model results (moisture balance in the model) and in terms of observed moisture fields from observational datasets (particularly precipitable water and precipitation). Verification has been performed at both the daily and monthly time scales, and also for ERA Interim, the coarse-scale dataset used to drive the regional model. Results show that the leading terms that must be considered over the area are the tendency in the precipitable water column, the divergence of the moisture flux, evaporation (computed from the latent heat flux at the surface) and precipitation. In the case of ERA Interim, the divergence of Qc is also relevant, although still a minor player in the moisture balance.
Both mesoscale model runs are more effective at closing the moisture balance over the whole Iberian Peninsula than ERA Interim. The N experiment (no data assimilation) shows a better closure than the D case, as could be expected from the lack of analysis increments in it. This result is robust both at the daily and monthly time scales. Both ERA Interim and the D experiment produce a negative residual in the balance equation (compatible with excess evaporation or increased convergence of moisture over the Iberian Peninsula). This is a result of the data assimilation process in the D dataset, since in the N experiment the residual is mainly positive. The seasonal cycle of evaporation is much closer in the D experiment to the one in ERA Interim than in the N case, with a higher evaporation during summer months. However, both regional climate model runs show a lower evaporation rate than ERA Interim, particularly during summer months.
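    The balance the abstract describes can be written as dW/dt = E - P - div(Q) + R, where W is precipitable water, E evaporation, P precipitation, Q the vertically integrated moisture flux, and R the residual that measures failure to close the budget. A small sketch (illustrative variable names; units assumed consistent):

    ```python
    def moisture_residual(dW_dt, div_Q, evaporation, precipitation):
        """Residual R of the regional moisture balance
        dW/dt = E - P - div(Q) + R, rearranged as
        R = dW/dt + div(Q) - E + P.  A closed budget gives R ~ 0;
        a negative R is compatible with excess evaporation or extra
        moisture convergence, as reported for the D run and ERA Interim."""
        return dW_dt + div_Q - evaporation + precipitation
    ```

    Evaluating this residual term by term, at daily and monthly scales, is exactly the closure check the abstract reports for the N run, the D run, and ERA Interim.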

  8. Multidimensional analysis of data obtained in experiments with X-ray emulsion chambers and extensive air showers

    NASA Technical Reports Server (NTRS)

    Chilingaryan, A. A.; Galfayan, S. K.; Zazyan, M. Z.; Dunaevsky, A. M.

    1985-01-01

    Nonparametric statistical methods are used to carry out a quantitative comparison of the model and the experimental data. The same methods make it possible to select the events initiated by heavy nuclei and to calculate the proportion of such events. For this purpose it is necessary to have data on artificial events that describe the experiment sufficiently well. At present, the model with a small scaling violation in the fragmentation region is the closest to the experiments. Therefore, the treatment of gamma families obtained in the Pamir experiment is currently being carried out with these models.

  9. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

    A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.

  10. A sensor fusion field experiment in forest ecosystem dynamics

    NASA Technical Reports Server (NTRS)

    Smith, James A.; Ranson, K. Jon; Williams, Darrel L.; Levine, Elissa R.; Goltz, Stewart M.

    1990-01-01

    The background of the Forest Ecosystem Dynamics field campaign is presented, a progress report on the analysis of the collected data and related modeling activities is provided, and plans for future experiments at different points in the phenological cycle are outlined. The ecological overview of the study site is presented, and attention is focused on forest stands, needles, and atmospheric measurements. Sensor deployment and thermal and microwave observations are discussed, along with two examples of the optical radiation measurements obtained during the experiment in support of radiative transfer modeling. Future activities pertaining to an archival system, synthetic aperture radar, carbon acquisition modeling, and upcoming field experiments are considered.

  11. Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.

    PubMed

    Morsnowski, André; Maune, Steffen

    2016-10-01

    Two approaches to modeling the test-retest statistics of a localization experiment, one based on Gaussian distributions and one on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing-impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-randomly from all 12 directions, with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate-data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa are not considered), although they are common among hearing-impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since their applications to the investigated measures perform differently, no general recommendation can be provided. The presented test-retest statistics enable pair-test comparisons for localization experiments.
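The first (Gaussian) model lends itself to a short sketch. Everything below is a hypothetical illustration of the idea, not the authors' implementation: a perfect direction estimator is perturbed with Gaussian error, the result is snapped to the nearest of 12 loudspeakers spaced 30° apart, and an RMS angular error is computed with wrap-around.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SPEAKERS = 12
SPACING = 360.0 / N_SPEAKERS  # 30 degrees between adjacent loudspeakers

def simulate_responses(sigma, repetitions=5, rng=rng):
    """Perfect estimator plus Gaussian error, snapped to the nearest loudspeaker."""
    targets = np.repeat(np.arange(N_SPEAKERS) * SPACING, repetitions)
    perceived = targets + rng.normal(0.0, sigma, size=targets.size)
    snapped = (np.round(perceived / SPACING) * SPACING) % 360.0
    return targets, snapped

def rms_angular_error(targets, responses):
    # Wrap differences into (-180, 180] before averaging, so 350° vs 10° counts as 20°.
    diff = (responses - targets + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(diff ** 2)))

targets, responses = simulate_responses(sigma=20.0)
err = rms_angular_error(targets, responses)
```

Note that this sketch, like the first model in the abstract, ignores front-back inversions; modeling those would require an extra step that maps a fraction of responses to the mirrored direction.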

  12. Experimente ueber den Einflusse von Metaboliten und Antimetaboliten am Modell von Trichomonas Vaginalis. IX. Mitteilung: Experimente mit Enzymen (Experiments on the Influence of Metabolites and Antimetabolites on the Model of Trichomonas Vaginalis. IX. Communication: Experiments on Enzymes),

    DTIC Science & Technology

    some enzymes to trichomonas. The following enzymes were used for the experiment: pepsin, trypsin, diastase, urease and lysozyme. Tests were performed...obtained in the experiments with urease. Trichomonas growth under addition of lysozyme was within the range of the control cultures. (Modified author abstract)

  13. Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana

    USGS Publications Warehouse

    Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.

    1997-01-01

    At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.

  14. Hypersonic Wind Tunnel Calibration Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Rhode, Matthew N.; DeLoach, Richard

    2005-01-01

    A calibration of a hypersonic wind tunnel has been conducted using formal experiment design techniques and response surface modeling. Data from a compact, highly efficient experiment were used to create a regression model of the pitot pressure as a function of the facility operating conditions as well as the longitudinal location within the test section. The new calibration utilized far fewer design points than prior experiments, but covered a wider range of the facility's operating envelope while revealing interactions between factors not captured in previous calibrations. A series of points chosen randomly within the design space was used to verify the accuracy of the response model. The development of the experiment design is discussed along with tactics used in the execution of the experiment to defend against systematic variation in the results. Trends in the data are illustrated, and comparisons are made to earlier findings.
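The response-surface idea can be sketched as follows (a hypothetical illustration with synthetic data; the factor names and coefficients are assumptions, not the facility's calibration): a second-order polynomial in two coded factors, including an interaction term, is fitted by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coded factors: stagnation pressure p0 and axial station x.
p0 = rng.uniform(-1, 1, 40)
x = rng.uniform(-1, 1, 40)
# Synthetic "truth" with an interaction and a quadratic term, plus noise.
pitot = 5.0 + 2.0 * p0 + 0.5 * x + 0.8 * p0 * x + 0.3 * p0**2 + rng.normal(0, 0.05, 40)

# Second-order response surface: terms 1, p0, x, p0*x, p0^2, x^2.
X = np.column_stack([np.ones_like(p0), p0, x, p0 * x, p0**2, x**2])
coef, *_ = np.linalg.lstsq(X, pitot, rcond=None)
```

Randomly chosen check points, as in the abstract, can then be predicted with `X_new @ coef` and compared against held-out measurements to verify the model.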

  15. Modeling of detachment experiments at DIII-D

    DOE PAGES

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...

    2014-11-26

    Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  16. Model fitting data from syllogistic reasoning experiments.

    PubMed

    Hattori, Masasi

    2016-12-01

    The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents the data predicted by three signature probabilistic models of syllogistic reasoning and the model-fitting results for each of a total of 12 experiments (N = 404) in the literature. The models are implemented in R, and their source code is also provided.

  17. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model; rather, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26] that different constituencies have different objectives for the validation process, and therefore their acceptance criteria differ as well.

  18. Strawman payload data for science and applications space platforms

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The need for a free-flying science and applications space platform to host compatible long-duration experiment groupings in Earth orbit is discussed. Experiment-level information on strawman payload models is presented which serves to identify and quantify the requirements for the space platform system. A descriptive data base on the strawman payload model is presented along with experiment-level and group-level summaries. Payloads identified in the strawman model include the disciplines of resources observations and environmental observations.

  19. The use of experimental design to find the operating maximum power point of PEM fuel cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria

    2015-03-10

    Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a mathematical model of PEM fuel cells based on the Design of Experiments methodology is described. The Design of Experiments provides a very efficient methodology for obtaining a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
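A minimal sketch of the Design of Experiments idea (the factors and responses below are hypothetical, not the authors' data): a two-level full factorial in two coded factors, with the mean, main effects, and interaction estimated by least squares from just four runs.

```python
import itertools
import numpy as np

# Two-level full factorial in coded units (-1/+1) for two hypothetical
# factors: cell temperature and membrane humidity.
levels = [-1.0, 1.0]
design = np.array(list(itertools.product(levels, levels)))  # 4 runs

# Hypothetical measured stack power (W) for each run, in design order.
power = np.array([30.0, 34.0, 38.0, 46.0])

T, H = design[:, 0], design[:, 1]
X = np.column_stack([np.ones(4), T, H, T * H])
beta = np.linalg.solve(X.T @ X, X.T @ power)
# beta = [mean, temperature half-effect, humidity half-effect, interaction]
```

Because the factorial design matrix is orthogonal, each coefficient is estimated independently, which is what makes the method so economical in experiments.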

  20. Controlled experiments for dense gas diffusion: Experimental design and execution, model comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egami, R.; Bowen, J.; Coulombe, W.

    1995-07-01

    A baseline CO2 release experiment at the DOE Spill Test Facility on the Nevada Test Site in southern Nevada is described. This experiment was unique in its use of CO2 as a surrogate gas representative of a variety of specific chemicals. An introductory discussion places the experiment in historical perspective. CO2 was selected as a surrogate gas to provide a data base suitable for evaluation of model scenarios involving a variety of specific dense gases. The experiment design and setup are described, including the design rationale and the quality assurance methods employed. The resulting experimental data are summarized. Data usefulness is examined through a preliminary comparison of experimental results with simulations performed using the SLAB and DEGADIS dense gas models.

  1. Research on Equivalent Tests of Dynamics of On-orbit Soft Contact Technology Based on On-Orbit Experiment Data

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.; Ye, X.

    2018-05-01

    Currently, space robots have become a very important means of on-orbit maintenance and support, and many countries are conducting in-depth research and experiments on them. Because the attitude of space operations is very complicated, it is difficult to model them in a research lab. This paper builds a complete equivalent experiment framework according to the requirements of the proposed space soft-contact technology. It also carries out flexible multi-body dynamics parameter verification for the on-orbit soft-contact mechanism, combining on-orbit experiment data, the soft-contact mechanism equivalent model, and a flexible multi-body dynamics equivalent model based on the KANE equation. The experiment results confirm the correctness of the built on-orbit soft-contact flexible multi-body dynamics.

  2. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    NASA Astrophysics Data System (ADS)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism of underground explosions is of great importance to underground explosion monitoring. To this end we perform full-waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data with locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  3. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  4. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1973-01-01

    Studies are reported of the long term responses of the model atmosphere to anomalies in snow cover and sea surface temperature. An abstract of a previously issued report on the computed response to surface anomalies in a global atmospheric model is presented, and the experiments on the effects of transient sea surface temperature on the Mintz-Arakawa atmospheric model are reported.

  5. Educational and training aspects of new surgical techniques: experience with the endoscopic–laparoscopic interdisciplinary training entity (ELITE) model in training for a natural orifice translumenal endoscopic surgery (NOTES) approach to appendectomy.

    PubMed

    Gillen, Sonja; Gröne, Jörn; Knödgen, Fritz; Wolf, Petra; Meyer, Michael; Friess, Helmut; Buhr, Heinz-Johannes; Ritz, Jörg-Peter; Feussner, Hubertus; Lehmann, Kai S

    2012-08-01

    Natural orifice translumenal endoscopic surgery (NOTES) is a new surgical concept that requires training before it is introduced into clinical practice. The endoscopic–laparoscopic interdisciplinary training entity (ELITE) is a training model for NOTES interventions. The latest research has concentrated on new materials for organs with realistic optical and haptic characteristics and the possibility of high-frequency dissection. This study aimed to assess both the ELITE model in a surgical training course and the construct validity of a newly developed NOTES appendectomy scenario. The 70 attendees of the 2010 Practical Course for Visceral Surgery (Warnemuende, Germany) took part in the study and performed a NOTES appendectomy via a transsigmoidal access. The primary end point was the total time required for the appendectomy, including retrieval of the appendix. Subjective evaluation of the model was performed using a questionnaire. Subgroups were analyzed according to laparoscopic and endoscopic experience. The participants with endoscopic or laparoscopic experience completed the task significantly faster than the inexperienced participants (p = 0.009 and 0.019, respectively). Endoscopic experience was the strongest influencing factor, whereas laparoscopic experience had limited impact on the participants with previous endoscopic experience. As shown by the findings, 87.3% of the participants stated that the ELITE model was suitable for the NOTES training scenario, and 88.7% found the newly developed model anatomically realistic. This study was able to establish face and construct validity for the ELITE model with a large group of surgeons. The ELITE model seems to be well suited for the training of NOTES as a new surgical technique in an established gastrointestinal surgery skills course.

  6. NEW DEVELOPMENT IN DISPERSION EXPERIMENTS AND MODELS FOR THE CONVECTIVE BOUNDARY LAYER

    EPA Science Inventory

    We present recent experiments and modeling studies of dispersion in the convective boundary layer (CBL) with focus on highly-buoyant plumes that "loft" near the CBL top and resist downward mixing. Such plumes have been a significant problem in earlier dispersion models; they a...

  7. The College Mathematics Experience and Changes in Majors: A Structural Model Analysis.

    ERIC Educational Resources Information Center

    Whiteley, Meredith A.; Fenske, Robert H.

    1990-01-01

    Testing of a structural equation model with college mathematics experience as the focal variable in 745 students' final decisions concerning major or dropping out over 4 years of college yielded separate model estimates for 3 fields: scientific/technical, quantitative business, and business management majors. (Author/MSE)

  8. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps in planning simulation experiments oriented toward the processes and events researched. Animation experiments realized on multimedia computers can aid easier…

  9. Designing Informal Learning Experiences for Early Career Academics Using a Knowledge Ecosystem Model

    ERIC Educational Resources Information Center

    Miller, Faye; Partridge, Helen; Bruce, Christine; Hemmings, Brian

    2017-01-01

    This article presents a "knowledge ecosystem" model of how early career academics experience using information to learn while building their social networks for developmental purposes. Developed using grounded theory methodology, the model offers a way of conceptualising how to empower early career academics through (1) agency…

  10. A Classroom-Field Model of Inter-Ethnic Communication.

    ERIC Educational Resources Information Center

    Nielsen, Keith E.

    The BLBC (bilingual-bicultural) model of inter-ethnic communication is an effective method for bridging the instructional "gap" between classroom education and field experiences. These two learning experiences are distinct; yet each should complement the other. The BLBC model of inter-ethnic communication attempts to interface the student's…

  11. Student Teachers' Team Teaching during Field Experiences: An Evaluation by Their Mentors

    ERIC Educational Resources Information Center

    Simons, Mathea; Baeten, Marlies

    2016-01-01

    Since collaboration within schools gains importance, teacher educators are looking for alternative models of field experience inspired by collaborative learning. Team teaching is such a model. This study explores two team teaching models (parallel and sequential teaching) by investigating the mentors' perspective. Semi-structured interviews were…

  12. Toward a general psychological model of tension and suspense

    PubMed Central

    Lehne, Moritz; Koelsch, Stefan

    2015-01-01

    Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension and suspense suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying tension and suspense. The model provides a theoretical framework that can inform future empirical research on tension phenomena. PMID:25717309

  13. Toward a general psychological model of tension and suspense.

    PubMed

    Lehne, Moritz; Koelsch, Stefan

    2015-01-01

    Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension and suspense suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying tension and suspense. The model provides a theoretical framework that can inform future empirical research on tension phenomena.

  14. A model to forecast data centre infrastructure costs.

    NASA Astrophysics Data System (ADS)

    Vernet, R.

    2015-12-01

    The computing needs of the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence, experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like LSST, CTA or Euclid. The financial cost of accommodating all these experiments is substantial and has to be planned well in advance for funding and strategic reasons. In that perspective, leveraging the infrastructure expenses, electric power costs and hardware performance observed at our site over recent years, we have built a model that integrates these data and provides estimates of the investments that would be required to cater to the experiments in the mid-term future. We present how our model is built and the expenditure forecast it produces, taking into account the experiment roadmaps. We also examine the resource growth predicted by our model over the next years assuming a flat-budget scenario.
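The flat-budget reasoning can be sketched numerically (all figures and names below are hypothetical illustrations, not CC-IN2P3's actual model): with a fixed yearly hardware budget and an assumed annual price/performance improvement, the capacity that can be purchased each year grows geometrically.

```python
# Flat-budget capacity projection: a fixed yearly hardware budget combined
# with a cost per unit of computing that falls at an assumed annual rate.
# All numbers are hypothetical.

BUDGET = 1_000_000.0   # fixed yearly hardware budget (EUR)
COST_PER_UNIT = 10.0   # cost per unit of capacity in year 0 (EUR)
IMPROVEMENT = 0.15     # assumed yearly price/performance improvement

def capacity_per_year(years):
    """Capacity purchasable each year under a flat budget."""
    caps = []
    cost = COST_PER_UNIT
    for _ in range(years):
        caps.append(BUDGET / cost)
        cost *= (1 - IMPROVEMENT)  # hardware gets cheaper per unit each year
    return caps

caps = capacity_per_year(5)
```

A real model of this kind would also fold in electric power costs, hardware lifetime, and each experiment's roadmap, as the abstract describes.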

  15. Modeling of grain size strengthening in tantalum at high pressures and strain rates

    DOE PAGES

    Rudd, Robert E.; Park, H. -S.; Cavallo, R. M.; ...

    2017-01-01

    Laser-driven ramp wave compression experiments have been used to investigate the strength (flow stress) of tantalum and other metals at high pressures and high strain rates. Recently this kind of experiment has been used to assess the dependence of the strength on the average grain size of the material, finding no detectable variation with grain size. The insensitivity to grain size has been understood theoretically to result from the dominant effect of the high dislocation density generated at the extremely high strain rates of the experiment. Here we review the experiments and describe in detail the multiscale strength model used to simulate them. The multiscale strength model has been extended to include the effect of geometrically necessary dislocations generated at the grain boundaries during compatible plastic flow in the polycrystalline metal. Lastly, we use the extended model to make predictions of the threshold strain rates and grain sizes below which grain size strengthening would be observed in the laser-driven Rayleigh-Taylor experiments.

  16. Technical note: Coordination and harmonization of the multi-scale, multi-model activities HTAP2, AQMEII3, and MICS-Asia3: simulations, emission inventories, boundary conditions, and model output formats.

    PubMed

    Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank

    2017-01-31

    We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.

  17. Technical note: Coordination and harmonization of the multi-scale, multi-model activities HTAP2, AQMEII3, and MICS-Asia3: simulations, emission inventories, boundary conditions, and model output formats

    NASA Astrophysics Data System (ADS)

    Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank

    2017-01-01

    We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.

  18. Technical note: Coordination and harmonization of the multi-scale, multi-model activities HTAP2, AQMEII3, and MICS-Asia3: simulations, emission inventories, boundary conditions, and model output formats

    PubMed Central

    Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank

    2018-01-01

    We present an overview of the coordinated global numerical modelling experiments performed during 2012–2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue. PMID:29541091

  19. Context and the leadership experiences and perceptions of professionals: a review of the nursing profession.

    PubMed

    Jefferson, Therese; Klass, Des; Lord, Linley; Nowak, Margaret; Thomas, Gail

    2014-01-01

    Leadership studies which focus on categorising leadership styles have been critiqued for failing to consider the lived experience of leadership. The purpose of this paper is to use the framework of Jepson's model of contextual dynamics to explore whether this framework assists understanding of the "how and why" of lived leadership experience within the nursing profession. Themes for a purposeful literature search and review, having regard to the Jepson model, are drawn from the contemporary and dynamic context of nursing. Government reports, coupled with preliminary interviews with a nurse leadership team, guided selection of contextual issues. The contextual interactions arising from managerialism, existing hierarchical models of leadership and increasing knowledge work provided insights into leadership experience in nursing, in the contexts of professional identity and changing educational and generational profiles of nurses. The authors conclude that employing a contextual frame provides insights in studying leadership experience. The authors propose additions to the cultural and institutional dimensions of Jepson's model. The findings have implications for structuring and communicating key roles and policies relevant to nursing leadership. These include the need to: address perceptions around the legitimacy of current nursing leaders to provide clinical leadership; modify hierarchical models of nursing leadership; and address the implications of the role of knowledge workers. Observing nursing leadership through the lens of Jepson's model of contextual dynamics confirms that this is an important way of exploring how leadership is enacted. The authors found, however, that the model also provided a useful frame for considering the experience and understanding of leadership by those to be led.

  20. Role Models and Teachers: medical students' perceptions of teaching-learning methods in clinical settings, a qualitative study from Sri Lanka.

    PubMed

    Jayasuriya-Illesinghe, Vathsala; Nazeer, Ishra; Athauda, Lathika; Perera, Jennifer

    2016-02-09

    Medical education research in general, and research focusing on clinical settings in particular, has been a low priority in South Asia. This explorative study from 3 medical schools in Sri Lanka, a South Asian country, describes undergraduate medical students' experiences during their final year clinical training with the aim of understanding the teaching-learning experiences. Using qualitative methods, we conducted an exploratory study. Twenty-eight graduates from 3 medical schools participated in individual interviews. Interview recordings were transcribed verbatim and analyzed using the qualitative content analysis method. Emergent themes revealed 2 types of teaching-learning experiences: role modeling and purposive teaching. In role modelling, students were expected to observe teachers while they conducted their clinical work; however, this method failed to create positive learning experiences. The clinical teachers who predominantly used this method appeared to be 'figurative' role models and were not perceived as modelling professional behaviors. In contrast, purposeful teaching allowed dedicated time for teacher-student interactions, and teachers who created these learning experiences were more likely to be seen as 'true' role models. Students' responses and reciprocations to these interactions were influenced by their perception of teachers' behaviors, attitudes, and the type of teaching-learning situations created for them. Making a distinction between role modeling and purposeful teaching is important for students in clinical training settings. Clinical teachers' awareness of their own manifest professional characteristics, attitudes, and behaviors could help create better teaching-learning experiences. Moreover, broader systemic reforms are needed to address the prevailing culture of teaching by humiliation and subordination.

  1. The Development of a Conceptual Model of Student Satisfaction with Their Experience in Higher Education

    ERIC Educational Resources Information Center

    Douglas, Jacqueline; McClelland, Robert; Davies, John

    2008-01-01

    Purpose: The purpose of this paper is to introduce a conceptual model of student satisfaction with their higher education (HE) experience, based on the identification of the variable determinants of student perceived quality and the impact of those variables on student satisfaction and/or dissatisfaction with the overall student experience. The…

  2. The impact of parametrized convection on cloud feedback.

    PubMed

    Webb, Mark J; Lock, Adrian P; Bretherton, Christopher S; Bony, Sandrine; Cole, Jason N S; Idelkadi, Abderrahmane; Kang, Sarah M; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D; Zhao, Ming

    2015-11-13

    We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that 'ConvOff' models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. © 2015 The Authors.

  3. Experimental design for three interrelated marine ice sheet and ocean model intercomparison projects: MISMIP v. 3 (MISMIP +), ISOMIP v. 2 (ISOMIP +) and MISOMIP v. 1 (MISOMIP1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asay-Davis, Xylar S.; Cornford, Stephen L.; Durand, Gaël

    Coupled ice sheet-ocean models capable of simulating moving grounding lines are just becoming available. Such models have a broad range of potential applications in studying the dynamics of marine ice sheets and tidewater glaciers, from process studies to future projections of ice mass loss and sea level rise. The Marine Ice Sheet-Ocean Model Intercomparison Project (MISOMIP) is a community effort aimed at designing and coordinating a series of model intercomparison projects (MIPs) for model evaluation in idealized setups, model verification based on observations, and future projections for key regions of the West Antarctic Ice Sheet (WAIS). Here we describe computational experiments constituting three interrelated MIPs for marine ice sheet models and regional ocean circulation models incorporating ice shelf cavities. These consist of ice sheet experiments under the Marine Ice Sheet MIP third phase (MISMIP+), ocean experiments under the Ice Shelf-Ocean MIP second phase (ISOMIP+) and coupled ice sheet-ocean experiments under the MISOMIP first phase (MISOMIP1). All three MIPs use a shared domain with idealized bedrock topography and forcing, allowing the coupled simulations (MISOMIP1) to be compared directly to the individual component simulations (MISMIP+ and ISOMIP+). The experiments, which have qualitative similarities to Pine Island Glacier Ice Shelf and the adjacent region of the Amundsen Sea, are designed to explore the effects of changes in ocean conditions, specifically the temperature at depth, on basal melting and ice dynamics. In future work, differences between model results will form the basis for the evaluation of the participating models.

  4. A multi-scale cardiovascular system model can account for the load-dependence of the end-systolic pressure-volume relationship

    PubMed Central

    2013-01-01

    Background The end-systolic pressure-volume relationship is often considered as a load-independent property of the heart and, for this reason, is widely used as an index of ventricular contractility. However, many criticisms have been expressed against this index and the underlying time-varying elastance theory: first, it does not consider the phenomena underlying contraction and second, the end-systolic pressure volume relationship has been experimentally shown to be load-dependent. Methods In place of the time-varying elastance theory, a microscopic model of sarcomere contraction is used to infer the pressure generated by the contraction of the left ventricle, considered as a spherical assembling of sarcomere units. The left ventricle model is inserted into a closed-loop model of the cardiovascular system. Finally, parameters of the modified cardiovascular system model are identified to reproduce the hemodynamics of a normal dog. Results Experiments that have proven the limitations of the time-varying elastance theory are reproduced with our model: (1) preload reductions, (2) afterload increases, (3) the same experiments with increased ventricular contractility, (4) isovolumic contractions and (5) flow-clamps. All experiments simulated with the model generate different end-systolic pressure-volume relationships, showing that this relationship is actually load-dependent. Furthermore, we show that the results of our simulations are in good agreement with experiments. Conclusions We implemented a multi-scale model of the cardiovascular system, in which ventricular contraction is described by a detailed sarcomere model. Using this model, we successfully reproduced a number of experiments that have shown the failing points of the time-varying elastance theory. In particular, the developed multi-scale model of the cardiovascular system can capture the load-dependence of the end-systolic pressure-volume relationship. PMID:23363818
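
    For context, the time-varying elastance theory that this record critiques can be written down compactly: ventricular pressure is modelled as P(t) = E(t)·(V(t) − V0), with end-systole at the peak elastance Emax. The sketch below is a minimal illustration of that classical model; all parameter values (Emax, Emin, timing, V0) are hypothetical and not taken from the study.

```python
import math

def elastance(t, emax=2.0, emin=0.06, t_sys=0.3, period=0.8):
    """Illustrative time-varying elastance E(t) in mmHg/mL.

    A half-sine rise and fall during systole (duration t_sys), with
    elastance at its minimum emin during diastole.
    """
    tc = t % period
    if tc < t_sys:
        return emin + (emax - emin) * math.sin(math.pi * tc / t_sys)
    return emin

def pressure(t, volume, v0=10.0):
    """Ventricular pressure predicted by the elastance model (mmHg)."""
    return elastance(t) * (volume - v0)
```

    By construction the end-systolic point always lies on the single line P = Emax·(V − V0) regardless of loading conditions, which is exactly the load-independence that the sarcomere-based multi-scale model in this study calls into question.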

  5. The impact of parametrized convection on cloud feedback

    PubMed Central

    Webb, Mark J.; Lock, Adrian P.; Bretherton, Christopher S.; Bony, Sandrine; Cole, Jason N. S.; Idelkadi, Abderrahmane; Kang, Sarah M.; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C.; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D.; Zhao, Ming

    2015-01-01

    We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that ‘ConvOff’ models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. PMID:26438278

  6. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for the 1-day class, 6 rounds for the 3-month class, and 3 rounds for the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is rarely consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing centre is improving its evaluation system so that forecasting and testing for the 1-day class can be completed within one day. The first part of the special issue, titled "Earthquake Forecast Testing Experiment in Japan," was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. The second part of this issue, now online, will be published soon. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site: http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi
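
    As a rough illustration of the kind of consistency testing described above, the sketch below implements a Poisson number (N) test of the sort used in CSEP evaluation suites: it asks whether the observed earthquake count in a testing round is consistent with a forecast's expected count. This is a hedged sketch of the general technique, not the Japanese Testing Centre's exact implementation, and the significance threshold is an illustrative choice.

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for a Poisson-distributed count with mean mu."""
    return sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k + 1))

def n_test(n_obs, n_forecast, alpha=0.05):
    """Sketch of a CSEP-style number (N) consistency test.

    delta1 is the probability of observing at least n_obs events,
    delta2 the probability of observing at most n_obs events, under a
    Poisson assumption with the forecast's expected count as the mean.
    The forecast "passes" if the observation is in neither extreme tail.
    """
    delta1 = 1.0 - (poisson_cdf(n_obs - 1, n_forecast) if n_obs > 0 else 0.0)
    delta2 = poisson_cdf(n_obs, n_forecast)
    return delta1 > alpha / 2 and delta2 > alpha / 2
```

    A model forecasting 10 events passes a round in which 10 are observed, but fails a round with 0 or 60 observed events.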

  7. Probabilistic cross-link analysis and experiment planning for high-throughput elucidation of protein structure.

    PubMed

    Ye, Xiaoduan; O'Neil, Patrick K; Foster, Adrienne N; Gajda, Michal J; Kosinski, Jan; Kurowski, Michal A; Bujnicki, Janusz M; Friedman, Alan M; Bailey-Kellogg, Chris

    2004-12-01

    Emerging high-throughput techniques for the characterization of protein and protein-complex structures yield noisy data with sparse information content, placing a significant burden on computation to properly interpret the experimental data. One such technique uses cross-linking (chemical or by cysteine oxidation) to confirm or select among proposed structural models (e.g., from fold recognition, ab initio prediction, or docking) by testing the consistency between cross-linking data and model geometry. This paper develops a probabilistic framework for analyzing the information content in cross-linking experiments, accounting for anticipated experimental error. This framework supports a mechanism for planning experiments to optimize the information gained. We evaluate potential experiment plans using explicit trade-offs among key properties of practical importance: discriminability, coverage, balance, ambiguity, and cost. We devise a greedy algorithm that considers those properties and, from a large number of combinatorial possibilities, rapidly selects sets of experiments expected to discriminate pairs of models efficiently. In an application to residue-specific chemical cross-linking, we demonstrate the ability of our approach to plan experiments effectively involving combinations of cross-linkers and introduced mutations. We also describe an experiment plan for the bacteriophage lambda Tfa chaperone protein in which we plan dicysteine mutants for discriminating threading models by disulfide formation. Preliminary results from a subset of the planned experiments are consistent and demonstrate the practicality of planning. Our methods provide the experimenter with a valuable tool (available from the authors) for understanding and optimizing cross-linking experiments.
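
    The greedy experiment-selection strategy described above can be sketched in a few lines. The scoring used here (newly discriminated model pairs per unit cost) and the data layout are illustrative assumptions; the authors' actual algorithm balances several additional properties (coverage, balance, ambiguity) beyond this single ratio.

```python
def greedy_plan(experiments, budget):
    """Greedily pick experiments that discriminate the most model pairs per unit cost.

    experiments: list of (name, cost, set_of_discriminated_model_pairs).
    Returns the chosen experiment names and the set of pairs they discriminate.
    An illustrative sketch of the greedy strategy, not the authors' algorithm.
    """
    chosen, covered, spent = [], set(), 0.0
    remaining = list(experiments)
    while remaining:
        # Score each candidate by newly discriminated pairs per unit cost.
        best = max(remaining, key=lambda e: len(e[2] - covered) / e[1])
        if spent + best[1] > budget or not (best[2] - covered):
            break  # over budget, or nothing new to gain
        chosen.append(best[0])
        covered |= best[2]
        spent += best[1]
        remaining.remove(best)
    return chosen, covered
```

    With three hypothetical cross-linking experiments and a budget of 3, the plan picks the cheap, highly discriminating experiment first and then fills in the remaining pair.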

  8. An optoelectric professional's training model based on Unity of Knowing and Doing theory

    NASA Astrophysics Data System (ADS)

    Qin, Shiqiao; Wu, Wei; Zheng, Jiaxing; Wang, Xingshu; Zhao, Yingwei

    2017-08-01

    The "Unity of Knowing and Doing" (UKD) theory was proposed by the Chinese philosopher Wang Shouren in 1508 and explains how to unify knowledge and practice. Unlike the traditional Chinese UKD theory, international higher education usually treats knowledge and practice as independent and puts more emphasis on knowledge. Guided by the UKD theory, the College of Opto-electric Science and Engineering (COESE) at the National University of Defense Technology (NUDT) has explored a novel training model for cultivating opto-electric professionals comprising classroom teaching, practice experiment, system experiment, design experiment, research experiment and innovation experiment (CPSDRI). This model aims at promoting the unity of knowledge and practice, takes how to improve students' capability as its main concern, and tries to advance students from cognition to professional action competence. It contains two hierarchies: cognition (CPS) and action competence (DRI). In the cognition hierarchy, students focus on learning and mastering the professional knowledge of optics, opto-electric technology, lasers, computers, electronics and machinery through classroom teaching, practice experiments and system experiments (CPS). Great attention is paid to case teaching, which links knowledge with practice. In the action competence hierarchy, emphasis is placed on promoting students' capability to use knowledge to solve practical problems through design experiments, research experiments and innovation experiments (DRI). In this model, knowledge is divided into different modules and capability is cultivated on different levels. The model combines classroom teaching and experimental teaching in a synergetic way and unifies cognition and practice, making it a valuable reference for the cultivation of opto-electric undergraduates.

  9. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE PAGES

    Garitte, B.; Shao, H.; Wang, X. R.; ...

    2017-01-09

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay of the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  10. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garitte, B.; Shao, H.; Wang, X. R.

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay of the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, derived from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  11. Transport Experiments

    NASA Technical Reports Server (NTRS)

    Hall, Timothy M.; Wuebbles, Donald J.; Boering, Kristie A.; Eckman, Richard S.; Lerner, Jean; Plumb, R. Alan; Rind, David H.; Rinsland, Curtis P.; Waugh, Darryn W.; Wei, Chu-Feng

    1999-01-01

    MM II defined a series of experiments to better understand and characterize model transport and to assess the realism of this transport by comparison to observations. Measurements from aircraft, balloons, and satellites, not yet available at the time of MM I [Prather and Remsberg, 1993], provide new and stringent constraints on model transport and address the limits of our transport modeling abilities. Simulations of the idealized tracers (the age spectrum, propagating boundary conditions, and conserved HSCT-like emissions) probe the relative roles of different model transport mechanisms, while simulations of SF6 and CO2 make the connection to observations. Some of the tracers are related, and transport diagnostics such as the mean age can be derived from more than one of the experiments for comparison to observations. The goals of the transport experiments are: (1) to isolate the effects of transport in models from other processes; (2) to assess model transport for realistic tracers (such as SF6 and CO2) for comparison to observations; (3) to use certain idealized tracers to isolate model mechanisms and relationships to atmospheric chemical perturbations; (4) to identify strengths and weaknesses of the treatment of transport processes in the models; and (5) to relate evaluated shortcomings to aspects of model formulation. The following sections are included: Executive Summary, Introduction, Age Spectrum, Observations, Tropical Transport in Models, Global Mean Age in Models, Source-Transport Covariance, HSCT "ANOY" Tracer Distributions, and Summary and Conclusions.
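
    The mean-age diagnostic mentioned above is simply the first moment of the age spectrum, the distribution G(t) of transit times from the tracer entry region to a given location. The sketch below uses an inverse-Gaussian spectrum, a commonly used idealized shape for this purpose; the mean-age and width values are hypothetical, chosen only to illustrate the computation.

```python
import math

def inverse_gaussian(t, mean_age, width):
    """Illustrative inverse-Gaussian age spectrum G(t) (per year).

    Parameterized so that its first moment equals mean_age and its
    spread is controlled by width (both in years).
    """
    return (math.sqrt(mean_age ** 3 / (4.0 * math.pi * width ** 2 * t ** 3))
            * math.exp(-mean_age * (t - mean_age) ** 2 / (4.0 * width ** 2 * t)))

def mean_age(spectrum, t_max=50.0, dt=0.001):
    """First moment of an age spectrum: Gamma = integral of t * G(t) dt."""
    ts = [dt * (i + 0.5) for i in range(int(t_max / dt))]  # midpoint rule
    return sum(t * spectrum(t) * dt for t in ts)
```

    Numerically integrating t·G(t) for a spectrum built with mean_age = 4 years recovers a mean age of 4 years, which is the kind of single-number transport diagnostic the experiments compare across models and against SF6 and CO2 observations.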

  12. Scintillation light from cosmic-ray muons in liquid argon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whittington, Denver Wade; Mufson, S.; Howard, B.

    2016-05-01

    This paper reports the results of an experiment to directly measure the time-resolved scintillation signal from the passage of cosmic-ray muons through liquid argon. Scintillation light from these muons is of value to studies of weakly interacting particles in neutrino experiments and dark matter searches. The experiment was carried out at the TallBo dewar facility at Fermilab using prototype light guide detectors and electronics developed for the Deep Underground Neutrino Experiment. Two models are presented for the time structure of the scintillation light, a phenomenological model and a physically motivated model. Both models find τT = 1.52 μs for the decay time constant of the Ar2 triplet state. These models also show that identifying the "early" light fraction in the phenomenological model, FE ≈ 25% of the signal, with the total light from singlet decays is an underestimate. The total fraction of singlet light is FS ≈ 36%, where the increase over FE comes from singlet light emitted by the wavelength shifter through processes with long decay constants. The models were further used to compute the experimental particle identification parameter Fprompt, the fraction of light arriving in a short time window after the trigger compared with the light in the total recorded waveform. The models reproduce quite well the typical experimental value of ≈ 0.3 found by dark matter and double β-decay experiments, which suggests this parameter provides a robust metric for discriminating electrons and muons from more heavily ionizing particles.
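
    The two-component time structure underlying such measurements can be written down directly: a fast singlet exponential plus a slow triplet exponential. In the sketch below, the triplet constant τT = 1.52 μs and singlet fraction FS ≈ 0.36 follow the values reported in the abstract, while the singlet decay constant (a few nanoseconds) and the 90 ns prompt window are typical literature values assumed for illustration only.

```python
import math

def intensity(t_us, f_s=0.36, tau_s=0.006, tau_t=1.52):
    """Normalized photon arrival-time density (per microsecond).

    Fast singlet component (fraction f_s, decay tau_s) plus slow
    triplet component (fraction 1 - f_s, decay tau_t), times in us.
    """
    return (f_s / tau_s * math.exp(-t_us / tau_s)
            + (1 - f_s) / tau_t * math.exp(-t_us / tau_t))

def f_prompt(window_us=0.09, f_s=0.36, tau_s=0.006, tau_t=1.52):
    """Fraction of total light arriving within a short prompt window.

    This is the particle-identification parameter Fprompt discussed in
    the abstract; the total time integral of intensity() is 1, so the
    prompt fraction is just the window integral of each component.
    """
    return (f_s * (1.0 - math.exp(-window_us / tau_s))
            + (1 - f_s) * (1.0 - math.exp(-window_us / tau_t)))
```

    With these assumed parameters a 90 ns window captures essentially all of the singlet light plus a small slice of the triplet tail, giving an Fprompt of roughly 0.4 for this source; heavier ionizers have larger singlet fractions and hence larger Fprompt, which is what makes the parameter discriminating.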

  13. Constraints on the rheology of the partially molten mantle from numerical models of laboratory experiments

    NASA Astrophysics Data System (ADS)

    Rudge, J. F.; Alisic Jewell, L.; Rhebergen, S.; Katz, R. F.; Wells, G. N.

    2015-12-01

    One of the fundamental components in any dynamical model of melt transport is the rheology of partially molten rock. This rheology is poorly understood, and one way in which a better understanding can be obtained is by comparing the results of laboratory deformation experiments to numerical models. Here we present a comparison between numerical models and the laboratory setup of Qi et al. 2013 (EPSL), where a cylinder of partially molten rock containing rigid spherical inclusions was placed under torsion. We have replicated this setup in a finite element model which solves the partial differential equations describing the mechanical process of compaction. These computationally-demanding 3D simulations are only possible due to the recent development of a new preconditioning method for the equations of magma dynamics. The experiments show a distinct pattern of melt-rich and melt-depleted regions around the inclusions. In our numerical models, the pattern of melt varies with key rheological parameters, such as the ratio of bulk to shear viscosity, and the porosity- and strain-rate-dependence of the shear viscosity. These observed melt patterns therefore have the potential to constrain rheological properties. While there are many similarities between the experiments and the numerical models, there are also important differences, which highlight the need for better models of the physics of two-phase mantle/magma dynamics. In particular, the laboratory experiments display more pervasive melt-rich bands than is seen in our numerics.

  14. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  15. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    PubMed

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How should the relevance of experiences be determined; in other words, should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer these questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  16. Statistical models for the analysis and design of digital polymerase chain (dPCR) experiments

    USGS Publications Warehouse

    Dorazio, Robert; Hunter, Margaret

    2015-01-01

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary log-log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model’s parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.

  17. Corvid caching: Insights from a cognitive model.

    PubMed

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2011-07-01

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our model's behavior to that of real birds in previously published experiments. Our model successfully replicated the outcomes of two experiments on recovery behavior and two experiments on cache site choice. Our "virtual birds" reproduced declines in recovery accuracy across sessions, revisits to previously emptied cache sites, a lack of correlation between caching and recovery order, and a preference for caching in safe locations. The model also produced two new explanations. First, that Clark's nutcrackers may become less accurate as recovery progresses not because of differential memory for different cache sites, as was once assumed, but because of chance effects. And second, that Western scrub jays may choose their cache sites not on the basis of negative recovery experiences only, as was previously thought, but on the basis of positive recovery experiences instead. Alternatively, both "punishment" and "reward" may be playing a role. We conclude with a set of new insights, a testable prediction, and directions for future work.
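
    The model's core assumption, that memory strength depends on frequency and recency of use, can be sketched with an ACT-R-style base-level activation. The power-law form and the decay parameter below are illustrative assumptions, not the paper's exact equations:

```python
def memory_strength(use_times, now, decay=0.5):
    """Strength of a cache-site memory: each past use contributes a
    power-law-decaying trace, so strength grows with frequency of use
    and shrinks with time since last use (recency)."""
    return sum((now - t) ** (-decay) for t in use_times if t < now)

# A site cached and revisited recently is remembered better than one
# visited only once, long ago.
recent_frequent = memory_strength([1.0, 5.0, 9.0], now=10.0)
old_single = memory_strength([1.0], now=10.0)
print(recent_frequent, old_single)
```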

  18. Validation of Individual-Based Markov-Like Stochastic Process Model of Insect Behavior and a “Virtual Farm” Concept for Enhancement of Site-Specific IPM

    PubMed Central

    Lux, Slawomir A.; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin

    2016-01-01

    The paper reports application of a Markov-like stochastic process agent-based model and a “virtual farm” concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a “bottom-up ethological” approach and emulates behavior of the “primary IPM actors”—large cohorts of individual insects—within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect the behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the “virtual farm” approach were discussed. PMID:27602000

  20. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortexes triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, discrepancies between the simulations and the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  1. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area covering Japan including the surrounding sea, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study for constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from the shallow crust down to a depth of 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to determine whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use the CSEP-Japan experiments as a starting model with a single, undivided depth column. In the presentation, we will discuss the performance of the models, comparing results for the Kanto district with those obtained for all of Japan by CSEP-Japan, and will also discuss the results of the 3-month experiments after the 2011 Tohoku earthquake to understand the learning ability of the models with respect to recent seismicity of the area.

  2. The role of first impression in operant learning.

    PubMed

    Shteingart, Hanan; Neiman, Tal; Loewenstein, Yonatan

    2013-05-01

    We quantified the effect of first experience on behavior in operant learning and studied its underlying computational principles. To that goal, we analyzed more than 200,000 choices in a repeated-choice experiment. We found that the outcome of the first experience has a substantial and lasting effect on participants' subsequent behavior, which we term outcome primacy. We found that this outcome primacy can account for much of the underweighting of rare events, where participants apparently underestimate small probabilities. We modeled behavior in this task using a standard, model-free reinforcement learning algorithm. In this model, the values of the different actions are learned over time and are used to determine the next action according to a predefined action-selection rule. We used a novel nonparametric method to characterize this action-selection rule and showed that the substantial effect of first experience on behavior is consistent with the reinforcement learning model if we assume that the outcome of first experience resets the values of the experienced actions, but not if we assume arbitrary initial conditions. Moreover, the predictive power of our resetting model outperforms previously published models regarding the aggregate choice behavior. These findings suggest that first experience has a disproportionately large effect on subsequent actions, similar to primacy effects in other fields of cognitive psychology. The mechanism of resetting of the initial conditions that underlies outcome primacy may thus also account for other forms of primacy.
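
    The resetting mechanism described above can be illustrated with a toy simulation. This is a minimal sketch under assumed values: the softmax action-selection rule, learning rate, temperature, and payoff scheme are all hypothetical, not taken from the paper:

```python
import math
import random

def simulate_choices(reward_fns, n_trials=1000, alpha=0.1, beta=3.0, seed=0):
    """Model-free RL with 'outcome primacy': the first outcome experienced
    for an action overwrites (resets) that action's value; later outcomes
    are blended in with the standard delta rule."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_fns)        # action values
    seen = [False] * len(reward_fns)   # has the action been tried yet?
    choices = []
    for _ in range(n_trials):
        weights = [math.exp(beta * v) for v in q]          # softmax selection
        a = rng.choices(range(len(q)), weights=weights)[0]
        r = reward_fns[a](rng)
        if not seen[a]:
            q[a], seen[a] = r, True                        # reset on first outcome
        else:
            q[a] += alpha * (r - q[a])                     # standard update
        choices.append(a)
    return choices

# A sure 3-point option vs. a rare 32-point option with slightly higher
# expected value: because the first draw from the risky option is usually 0,
# resetting tends to produce underweighting of the rare event.
safe = lambda rng: 3.0
risky = lambda rng: 32.0 if rng.random() < 0.1 else 0.0
choices = simulate_choices([safe, risky])
print("fraction of risky choices:", sum(choices) / len(choices))
```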

  3. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for two reasons. First, the existing experimental procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing the challenges in those experimental scenarios, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy for the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  4. Progress on the FabrIc for Frontier Experiments project at Fermilab

    DOE PAGES

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...

    2015-12-23

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion to additional projects are discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  5. Double Stimulation in the Waiting Experiment with Collectives: Testing a Vygotskian Model of the Emergence of Volitional Action.

    PubMed

    Sannino, Annalisa

    2016-03-01

    This study explores what human conduct looks like when research embraces uncertainty and distance itself from the dominant methodological demands of control and predictability. The context is the waiting experiment originally designed in Kurt Lewin's research group, discussed by Vygotsky as an instance among a range of experiments related to his notion of double stimulation. Little attention has been paid to this experiment, despite its great heuristic potential for charting the terrain of uncertainty and agency in experimental settings. Behind the notion of double stimulation lays Vygotsky's distinctive view of human beings' ability to intentionally shape their actions. Accordingly, human beings in situations of uncertainty and cognitive incongruity can rely on artifacts which serve the function of auxiliary motives and which help them undertake volitional actions. A double stimulation model depicting how such actions emerge is tested in a waiting experiment conducted with collectives, in contrast with a previous waiting experiment conducted with individuals. The model, validated in the waiting experiment with individual participants, applies only to a limited extent to the collectives. The analysis shows the extent to which double stimulation takes place in the waiting experiment with collectives, the differences between the two experiments, and what implications can be drawn for an expanded view on experiments.

  6. Chain Pooling modeling selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent; (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less; and (3) a security regret strategy to be used in the absence of such prior knowledge.

  7. Estimation of Crop Gross Primary Production (GPP): I. Impact of MODIS Observation Footprint and Impact of Vegetation BRDF Characteristics

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Xiao, Xiangming; Suyker, Andrew; Verma, Shashi; Tan, Bin; Middleton, Elizabeth M.

    2014-01-01

    Accurate estimation of gross primary production (GPP) is essential for carbon cycle and climate change studies. Three AmeriFlux crop sites of maize and soybean were selected for this study. Two of the sites were irrigated and the other one was rainfed. The normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), the green band chlorophyll index (CIgreen), and the green band wide dynamic range vegetation index (WDRVIgreen) were computed from the moderate resolution imaging spectroradiometer (MODIS) surface reflectance data. We examined the impacts of the MODIS observation footprint and the vegetation bidirectional reflectance distribution function (BRDF) on crop daily GPP estimation with the four spectral vegetation indices (VIs - NDVI, EVI, WDRVIgreen and CIgreen), where GPP was predicted with two linear models, with and without offset: GPP = a × VI × PAR and GPP = a × VI × PAR + b. Model performance was evaluated with the coefficient of determination (R2), root mean square error (RMSE), and coefficient of variation (CV). The MODIS data were filtered into four categories and four experiments were conducted to assess the impacts. The first experiment included all observations. The second experiment only included observations with view zenith angle (VZA) ≤ 35° to constrain growth of the footprint size, which achieved a better grid cell match with the agricultural fields. The third experiment included only forward scatter observations with VZA ≤ 35°. The fourth experiment included only backscatter observations with VZA ≤ 35°. Overall, the EVI yielded the most consistently strong relationships to daily GPP under all examined conditions. The model GPP = a × VI × PAR + b had better performance than the model GPP = a × VI × PAR, and the offset was significant for most cases. Better performance was obtained for the irrigated field than for its counterpart rainfed field. Comparison of experiment 2 vs. 
experiment 1 was used to examine the observation footprint impact, whereas comparison of experiment 4 vs. experiment 3 was used to examine the BRDF impact. Changes in R2, RMSE, CV and changes in model coefficients "a" and "b" (experiment 2 vs. experiment 1; and experiment 4 vs. experiment 3) were indicators of the impacts. The second experiment produced better performance than the first experiment, increasing R2 (≤0.13) and reducing RMSE (≤0.68 g C m-2 d-1) and CV (≤9%). For each VI, the slope of GPP = a × VI × PAR in the second experiment for each crop type changed little, while the slope and intercept of GPP = a × VI × PAR + b varied field by field. The CIgreen was least affected by the MODIS observation footprint in estimating crop daily GPP (R2, ≤0.08; RMSE, ≤0.42 g C m-2 d-1; and CV, ≤7%). Footprint most affected the NDVI (R2, ≤0.15; CV, ≤10%) and the EVI (RMSE, ≤0.84 g C m-2 d-1). The vegetation BRDF impact also caused variation of model performance and change of model coefficients. Significantly different slopes were obtained for forward vs. backscatter observations, especially for the CIgreen and the NDVI. Both the footprint impact and the BRDF impact varied with crop types, irrigation options, model options and VI options.
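
    The two regression forms compared in the abstract, with and without an offset, reduce to ordinary least squares. The sketch below uses synthetic stand-in data (all coefficients and values are invented, not AmeriFlux measurements) to show how the offset model is fit and compared:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: daily vegetation index (VI), PAR, and GPP.
vi = rng.uniform(0.3, 0.9, 200)
par = rng.uniform(20, 60, 200)                   # mol m-2 d-1 (assumed units)
gpp = 1.2 * vi * par + 2.0 + rng.normal(0, 1.5, 200)

x = vi * par
# Model 1 (no offset): GPP = a * VI * PAR  -> regression through the origin
a1 = (x @ gpp) / (x @ x)
# Model 2 (with offset): GPP = a * VI * PAR + b -> ordinary least squares
A = np.column_stack([x, np.ones_like(x)])
(a2, b2), *_ = np.linalg.lstsq(A, gpp, rcond=None)

def stats(pred, obs):
    """R2 and RMSE, the evaluation metrics named in the abstract."""
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    r2 = float(1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))
    return r2, rmse

print("no offset (R2, RMSE):", stats(a1 * x, gpp))
print("offset    (R2, RMSE):", stats(a2 * x + b2, gpp))
```

    Because the through-origin model is a special case of the offset model, the offset fit can never have a larger in-sample error, which is consistent with the abstract's finding that GPP = a × VI × PAR + b performed better.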

  8. Perspectives on Teaching the International Classification of Functioning, Disability, and Health Model to Physical Therapy Students.

    PubMed

    Peters-Brinkerhoff, Cheryl

    2016-01-01

    During a reaccreditation visit, deficiencies were discovered in the clinical education curriculum regarding patient-centered care in a Doctor of Physical Therapy program. To understand the problem and address those deficiencies, the clinical internship experience was examined using the International Classification of Functioning, Disability, and Health (ICF) model as a conceptual framework for clinical reasoning. This qualitative case study examined (1) the perceptions of physical therapy (PT) students regarding their knowledge and learning experiences during clinical affiliations and the knowledge of the ICF as applied to patient-centered care that they acquired during their internships, and (2) the perceptions of clinical instructors (CIs) of their knowledge of the ICF model, its integration into their practice, barriers to its use, and the learning experiences the CIs provided to students regarding the ICF model. Data were collected using questionnaires sent to 42 CIs and at focus groups of 22 PT students conducted at the study site. Data were also collected from student evaluations on the Clinical Performance Instrument. Data were analyzed using coding techniques and themes based on the use of the ICF model in the clinical setting by students and CIs. Most CIs reported a poor understanding of the ICF model or how it relates to patient-centered care; both CIs and students reported no or minimal learning experiences related to the ICF model. Document analysis of the student evaluations revealed that no assessment of the ICF model was mentioned. Learning experiences covering all domains of the ICF model are generally not being presented to PT students during their clinical affiliations.

  9. Summary of the SeaRISE Project's Experiments on Modeled Ice-Sheet Contributions to Future Sea Level: Linearities and Non-linearities

    NASA Astrophysics Data System (ADS)

    Bindschadler, Robert

    2013-04-01

    The SeaRISE (Sea-level Response to Ice Sheet Evolution) project achieved ice-sheet model ensemble responses to a variety of prescribed changes to surface mass balance, basal sliding and ocean boundary melting. Greenland ice sheet models are more sensitive than Antarctic ice sheet models to likely atmospheric changes in surface mass balance, while Antarctic models are most sensitive to basal melting of its ice shelves. An experiment approximating the IPCC's RCP8.5 scenario produces first century contributions to sea level of 22.3 and 7.3 cm from Greenland and Antarctica, respectively, with a range among models of 62 and 17 cm, respectively. By 200 years, these projections increase to 53.2 and 23.4 cm, respectively, with ranges of 79 and 57 cm. The considerable range among models was not only in the magnitude of ice lost, but also in the spatial pattern of response to identical forcing. Despite this variation, the response of any single model to a large range in the forcing intensity was remarkably linear in most cases. Additionally, the results of sensitivity experiments to single types of forcing (i.e., only one of the surface mass balance, or basal sliding, or ocean boundary melting) could be summed to accurately predict any model's result for an experiment when multiple forcings were applied simultaneously. This suggests a limited amount of feedback through the ice sheet's internal dynamics between these types of forcing over the time scale of a few centuries (SeaRISE experiments lasted 500 years).

  10. Dynamic model of target charging by short laser pulse interactions

    NASA Astrophysics Data System (ADS)

    Poyé, A.; Dubois, J.-L.; Lubrano-Lavaderci, F.; D'Humières, E.; Bardon, M.; Hulin, S.; Bailly-Grandvaux, M.; Ribolzi, J.; Raffestin, D.; Santos, J. J.; Nicolaï, Ph.; Tikhonchuk, V.

    2015-10-01

    A model providing an accurate estimate of the charge accumulation on the surface of a metallic target irradiated by a high-intensity laser pulse of fs-ps duration is proposed. The model is confirmed by detailed comparisons with specially designed experiments. Such a model is useful for understanding the electromagnetic pulse emission and the quasistatic magnetic field generation in laser-plasma interaction experiments.

  12. Virtual experiments: a new approach for improving process conceptualization in hillslope hydrology

    NASA Astrophysics Data System (ADS)

    Weiler, Markus; McDonnell, Jeff

    2004-01-01

    We present an approach for process conceptualization in hillslope hydrology. We develop and implement a series of virtual experiments, whereby the interaction between water flow pathways, source and mixing at the hillslope scale is examined within a virtual experiment framework. We define these virtual experiments as 'numerical experiments with a model driven by collective field intelligence'. The virtual experiments explore the first-order controls in hillslope hydrology, where the experimentalist and modeler work together to cooperatively develop and analyze the results. Our hillslope model for the virtual experiments (HillVi) in this paper is based on conceptualizing the water balance within the saturated and unsaturated zone in relation to soil physical properties in a spatially explicit manner at the hillslope scale. We argue that a virtual experiment model needs to be able to capture all major controls on subsurface flow processes that the experimentalist might deem important, while at the same time being simple with few 'tunable parameters'. This combination makes the approach, and the dialog between experimentalist and modeler, a useful hypothesis testing tool. HillVi simulates mass flux for different initial conditions under the same flow conditions. We analyze our results in terms of an artificial line source and isotopic hydrograph separation of water and subsurface flow. Our results for this first set of virtual experiments showed how drainable porosity and soil depth variability exert a first-order control on flow and transport at the hillslope scale. We found that high drainable porosity soils resulted in a restricted water table rise, resulting in more pronounced channeling of lateral subsurface flow along the soil-bedrock interface. This in turn resulted in a more anastomosing network of tracer movement across the slope. The virtual isotope hydrograph separation showed higher proportions of event water with increasing drainable porosity. 
When combined with previous experimental findings and conceptualizations, virtual experiments can be an effective way to isolate certain controls and examine their influence over a range of rainfall and antecedent wetness conditions.
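    The isotopic hydrograph separation mentioned above is, in its standard two-component form, a simple mixing calculation. A minimal sketch (this is not the HillVi implementation, and the delta-18O end-member values below are invented for illustration):

```python
def event_water_fraction(delta_stream, delta_pre, delta_event):
    """Two-component isotope hydrograph separation.

    Solves the mixing equation
        delta_stream = f * delta_event + (1 - f) * delta_pre
    for f, the fraction of event ("new") water in streamflow.
    """
    if delta_event == delta_pre:
        raise ValueError("end members must be isotopically distinct")
    return (delta_stream - delta_pre) / (delta_event - delta_pre)

# Hypothetical delta-18O values (permil), chosen for illustration only.
f = event_water_fraction(delta_stream=-9.0, delta_pre=-10.0, delta_event=-6.0)
print(f)  # 0.25: a quarter of streamflow is event water
```

    Higher drainable porosity shifting this fraction upward is exactly the trend the abstract reports.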

  13. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked under similar conditions. The results of the benchmarking are important, on the one hand, as a preliminary experimental validation of the model and, on the other, as a demonstration of the reliability of the ray approach for seismo-acoustic applications.

  14. Three-Dimensional Multiscale Modeling of Dendritic Spacing Selection During Al-Si Directional Solidification

    NASA Astrophysics Data System (ADS)

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-08-01

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  15. Tissue Modeling and Analyzing with Finite Element Method: A Review for Cranium Brain Imaging

    PubMed Central

    Yue, Xianfang; Wang, Li; Wang, Ruonan

    2013-01-01

    For the structural mechanics of the human body, it is almost impossible to conduct mechanical experiments directly, so finite element models that simulate such experiments have become an effective tool. By introducing several common methods for constructing a 3D model of the cranial cavity, this paper systematically investigates how the cranial cavity deforms. Using these concepts and theory to develop a 3D cranial cavity model with the finite element method, the deformation process under changing intracranial pressure (ICP) can be properly described and reasonably explained. This can serve as a reference for building cranium biomechanical models quickly and efficiently, and it lays the foundation for further biomechanical experiments and clinical applications. PMID:23476630

  16. Influence of atomic kinetics in the simulation of plasma microscopic properties and thermal instabilities for radiative bow shock experiments.

    PubMed

    Espinosa, G; Rodríguez, R; Gil, J M; Suzuki-Vidal, F; Lebedev, S V; Ciardi, A; Rubiano, J G; Martel, P

    2017-03-01

    Numerical simulations of laboratory astrophysics experiments on plasma flows require plasma microscopic properties that are obtained by means of an atomic kinetic model. This fact implies a careful choice of the most suitable model for the experiment under analysis. Otherwise, the calculations could lead to inaccurate results and inappropriate conclusions. First, a study of the validity of the local thermodynamic equilibrium in the calculation of the average ionization, mean radiative properties, and cooling times of argon plasmas in a range of plasma conditions of interest in laboratory astrophysics experiments on radiative shocks is performed in this work. In the second part, we have made an analysis of the influence of the atomic kinetic model used to calculate plasma microscopic properties of experiments carried out on MAGPIE on radiative bow shocks propagating in argon. The models considered were developed assuming both local and nonlocal thermodynamic equilibrium and, for the latter situation, we have considered in the kinetic model different effects such as external radiation field and plasma mixture. The microscopic properties studied were the average ionization, the charge state distributions, the monochromatic opacities and emissivities, the Planck mean opacity, and the radiative power loss. The microscopic study was made as a post-process of a radiative-hydrodynamic simulation of the experiment. We have also performed a theoretical analysis of the influence of these atomic kinetic models in the criteria for the onset of thermal instabilities due to radiative cooling in those experiments in which small structures were experimentally observed in the bow shock that could be due to this kind of instability.

  17. Influence of atomic kinetics in the simulation of plasma microscopic properties and thermal instabilities for radiative bow shock experiments

    NASA Astrophysics Data System (ADS)

    Espinosa, G.; Rodríguez, R.; Gil, J. M.; Suzuki-Vidal, F.; Lebedev, S. V.; Ciardi, A.; Rubiano, J. G.; Martel, P.

    2017-03-01

    Numerical simulations of laboratory astrophysics experiments on plasma flows require plasma microscopic properties that are obtained by means of an atomic kinetic model. This fact implies a careful choice of the most suitable model for the experiment under analysis. Otherwise, the calculations could lead to inaccurate results and inappropriate conclusions. First, a study of the validity of the local thermodynamic equilibrium in the calculation of the average ionization, mean radiative properties, and cooling times of argon plasmas in a range of plasma conditions of interest in laboratory astrophysics experiments on radiative shocks is performed in this work. In the second part, we have made an analysis of the influence of the atomic kinetic model used to calculate plasma microscopic properties of experiments carried out on MAGPIE on radiative bow shocks propagating in argon. The models considered were developed assuming both local and nonlocal thermodynamic equilibrium and, for the latter situation, we have considered in the kinetic model different effects such as external radiation field and plasma mixture. The microscopic properties studied were the average ionization, the charge state distributions, the monochromatic opacities and emissivities, the Planck mean opacity, and the radiative power loss. The microscopic study was made as a post-process of a radiative-hydrodynamic simulation of the experiment. We have also performed a theoretical analysis of the influence of these atomic kinetic models in the criteria for the onset of thermal instabilities due to radiative cooling in those experiments in which small structures were experimentally observed in the bow shock that could be due to this kind of instability.

  18. Large-Scale Experiments in Microbially Induced Calcite Precipitation (MICP): Reactive Transport Model Development and Prediction

    NASA Astrophysics Data System (ADS)

    Nassar, Mohamed K.; Gurung, Deviyani; Bastani, Mehrdad; Ginn, Timothy R.; Shafei, Babak; Gomez, Michael G.; Graddy, Charles M. R.; Nelson, Doug C.; DeJong, Jason T.

    2018-01-01

    Design of in situ microbially induced calcite precipitation (MICP) strategies relies on a predictive capability. To date much of the mathematical modeling of MICP has focused on small-scale experiments and/or one-dimensional flow in porous media, and successful parameterizations of models in these settings may not pertain to larger scales or to nonuniform, transient flows. Our objective in this article is to report on modeling to test our ability to predict behavior of MICP under controlled conditions in a meter-scale tank experiment with transient nonuniform transport in a natural soil, using independently determined parameters. Flow in the tank was controlled by three wells, via a complex cycle of injection/withdrawals followed by no-flow intervals. Different injection solution recipes were used in sequence for transport characterization, biostimulation, cementation, and groundwater rinse phases of the 17-day experiment. Reaction kinetics were calibrated using separate column experiments designed with a similar sequence of phases. This allowed for a parsimonious modeling approach with zero fitting parameters for the tank experiment. These experiments and data were simulated using PHT3D, involving transient nonuniform flow, alternating low and high Damköhler reactive transport, and combined equilibrium and kinetically controlled biogeochemical reactions. The assumption that microbes mediating the reaction were exclusively sessile, and with constant activity, in conjunction with the foregoing treatment of the reaction network, provided for efficient and accurate modeling of the entire process leading to nonuniform calcite precipitation. This analysis suggests that under the biostimulation conditions applied here the assumption of a steady state sessile biocatalyst suffices to describe the microbially mediated calcite precipitation.
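    The "low and high Damköhler" regimes contrasted above come down to the ratio of reaction rate to transport rate. A toy sketch of first-order substrate consumption over one residence time (forward Euler; the rate constant and residence time are invented, and this is not the PHT3D reaction network):

```python
import math

def substrate_decay(c0, k, t_res, n_steps=1000):
    """Forward-Euler integration of first-order substrate consumption
    dc/dt = -k*c over one advective residence time t_res."""
    dt = t_res / n_steps
    c = c0
    for _ in range(n_steps):
        c += dt * (-k * c)
    return c

k, t_res = 0.5, 4.0          # 1/h and h, illustrative values only
damkohler = k * t_res        # Da >> 1: reaction outpaces transport
c_end = substrate_decay(1.0, k, t_res)
print(damkohler, c_end)      # c_end close to exp(-2), roughly 0.135
```

    A high Damköhler number here means the injected substrate is consumed near the well, which is one way nonuniform precipitation patterns arise.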

  19. Binocular combination of luminance profiles

    PubMed Central

    Ding, Jian; Levi, Dennis M.

    2017-01-01

    We develop and test a new two-dimensional model for binocular combination of the two eyes' luminance profiles. For first-order stimuli, the model assumes that one eye's luminance profile first goes through a luminance compressor, receives gain-control and gain-enhancement from the other eye, and is then linearly combined with the other eye's output profile. For second-order stimuli, rectification is added in the signal path of the model before the binocular combination site. Both the total contrast and luminance energies, weighted sums over both the space and spatial-frequency domains, were used in the interocular gain-control, while only the total contrast energy was used in the interocular gain-enhancement. To challenge the model, we performed a binocular brightness matching experiment over a large range of background and target luminances. The target stimulus was a dichoptic disc with a sharp edge that had an increment or decrement in luminance relative to its background. The disc's interocular luminance ratio varied from trial to trial. To refine the model we tested three luminance compressors, five nested binocular combination models (including the Ding–Sperling and the DSKL models), and examined the presence or absence of total luminance energy in the model. We found that (1) installing a luminance compressor, either a logarithmic luminance function or luminance gain-control, (2) including both contrast and luminance energies, and (3) adding interocular gain-enhancement (the DSKL model) to a combined model significantly improved its performance. The combined model provides a systematic account of binocular luminance summation over a large range of luminance input levels. It gives a unified explanation of Fechner's paradox observed on a dark background, and a winner-take-all phenomenon observed on a light background.
To further test the model, we conducted two additional experiments: luminance summation of discs with asymmetric contour information (Experiment 2), similar to Levelt (1965) and binocular combination of second-order contrast-modulated gratings (Experiment 3). We used the model obtained in Experiment 1 to predict the results of Experiments 2 and 3 and the results of our previous studies. Model simulations further refined the contrast space weight and contrast sensitivity functions that are installed in the model, and provide a reasonable account for rebalancing of imbalanced binocular vision by reducing the mean luminance in the dominant eye. PMID:29098293
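    The gain-control idea behind the Ding–Sperling family of models can be caricatured as an energy-weighted sum. The sketch below is a deliberate simplification, not the published DSKL equations; the "energy" inputs are arbitrary scalars standing in for the contrast-energy terms of the full model:

```python
def binocular_combine(lum_left, lum_right, energy_left, energy_right):
    """Energy-weighted binocular combination (simplified sketch).

    Each eye's luminance contributes in proportion to its contrast
    energy: with equal energies this reduces to plain averaging, and
    when one eye carries all the energy it becomes winner-take-all.
    """
    total = energy_left + energy_right
    if total == 0:
        return 0.5 * (lum_left + lum_right)
    w_left = energy_left / total
    return w_left * lum_left + (1 - w_left) * lum_right

print(binocular_combine(100, 60, 1.0, 1.0))  # 80.0 (equal weighting)
print(binocular_combine(100, 60, 1.0, 0.0))  # 100.0 (winner-take-all)
```

    Even this caricature shows why the high-luminance regime can look winner-take-all while balanced inputs average, the two behaviors the full model unifies.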

  20. Disconfirming User Expectations of the Online Service Experience: Inferred versus Direct Disconfirmation Modeling.

    ERIC Educational Resources Information Center

    O'Neill, Martin; Palmer, Adrian; Wright, Christine

    2003-01-01

    Disconfirmation models of online service measurement seek to define service quality as the difference between user expectations of the service to be received and perceptions of the service actually received. Two such models, inferred and direct disconfirmation, for measuring quality of the online experience are compared (WebQUAL, SERVQUAL). Findings…

  1. Investigating the Use of Vicarious and Mastery Experiences in Influencing Early Childhood Education Majors' Self-Efficacy Beliefs

    ERIC Educational Resources Information Center

    Bautista, Nazan Uludag

    2011-01-01

    This study investigated the effectiveness of an Early Childhood Education science methods course that focused exclusively on providing various mastery (i.e., enactive, cognitive content, and cognitive pedagogical) and vicarious experiences (i.e., cognitive self-modeling, symbolic modeling, and simulated modeling) in increasing preservice…

  2. An Experimental Test of the Contingency Model of Leadership Effectiveness.

    ERIC Educational Resources Information Center

    Chemers, Martin M.; Skrzypek, George J.

    The present experiment provided a test of Fiedler's (1967) Contingency Model of Leadership Effectiveness, i.e., the relationship of leader style to group effectiveness is mediated by situational demands. Thirty-two 4-man task groups composed of military academy cadets were run in the experiment. In accordance with the Contingency Model, leaders…

  3. Exploring a Comprehensive Model for Early Childhood Vocabulary Instruction: A Design Experiment

    ERIC Educational Resources Information Center

    Wang, X. Christine; Christ, Tanya; Chiu, Ming Ming

    2014-01-01

    Addressing a critical need for effective vocabulary practices in early childhood classrooms, we conducted a design experiment to achieve three goals: (1) developing a comprehensive model for early childhood vocabulary instruction, (2) examining the effectiveness of this model, and (3) discerning the contextual conditions that hinder or facilitate…

  4. An Evaluation of Psychophysical Models of Auditory Change Perception

    ERIC Educational Resources Information Center

    Micheyl, Christophe; Kaernbach, Christian; Demany, Laurent

    2008-01-01

    In many psychophysical experiments, the participant's task is to detect small changes along a given stimulus dimension or to identify the direction (e.g., upward vs. downward) of such changes. The results of these experiments are traditionally analyzed with a constant-variance Gaussian (CVG) model or a high-threshold (HT) model. Here, the authors…

  5. Job Performance as Multivariate Dynamic Criteria: Experience Sampling and Multiway Component Analysis.

    PubMed

    Spain, Seth M; Miner, Andrew G; Kroonenberg, Pieter M; Drasgow, Fritz

    2010-08-06

    Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of analyzing momentary work behavior using experience sampling methods. The article also examines a previously unused set of methods for analyzing data produced by experience sampling. These methods are known collectively as multiway component analysis. Two archetypal techniques of multiway component analysis, parallel factor analysis (PARAFAC) and the Tucker3 model, are used to analyze data from Miner, Glomb, and Hulin's (2010) experience sampling study of work behavior. The efficacy of these techniques for analyzing experience sampling data is discussed, as are the substantive multiway component models obtained.

  6. MODELING AND ANALYSIS OF FISSION PRODUCT TRANSPORT IN THE AGR-3/4 EXPERIMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humrickhouse, Paul W.; Collin, Blaise P.; Hawkes, Grant L.

    In this work we describe the ongoing modeling and analysis efforts in support of the AGR-3/4 experiment. AGR-3/4 is intended to provide data to assess fission product retention and transport (e.g., diffusion coefficients) in fuel matrix and graphite materials. We describe a set of pre-test predictions that incorporate the results of detailed thermal and fission product release models into a coupled 1D radial diffusion model of the experiment, using diffusion coefficients reported in the literature for Ag, Cs, and Sr. We make some comparisons of the predicted Cs profiles to preliminary measured data for Cs and find these to be reasonable, in most cases within an order of magnitude. Our ultimate objective is to refine the diffusion coefficients using AGR-3/4 data, so we identify an analytical method for doing so and demonstrate its efficacy via a series of numerical experiments using the model predictions. Finally, we discuss development of a post-irradiation examination plan informed by the modeling effort and simulate some of the heating tests that are tentatively planned.
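    A 1D radial diffusion model of the kind described above can be sketched with an explicit finite-difference update of ∂c/∂t = (D/r) ∂/∂r (r ∂c/∂r). The geometry, diffusion coefficient, and boundary treatment below are illustrative stand-ins, not the AGR-3/4 values or the actual analysis code:

```python
def diffuse_radial(c, r_inner, dr, dt, D, steps):
    """Explicit finite-difference stepping of cylindrical radial
    diffusion. c[i] is the concentration at radius r_inner + i*dr.
    The copy-neighbour closure at the ends crudely approximates
    zero-flux boundaries."""
    n = len(c)
    r = [r_inner + i * dr for i in range(n)]
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            rp, rm = r[i] + dr / 2, r[i] - dr / 2
            new[i] = c[i] + dt * D / (r[i] * dr * dr) * (
                rp * (c[i + 1] - c[i]) - rm * (c[i] - c[i - 1])
            )
        new[0], new[-1] = new[1], new[-2]
        c = new
    return c

# A concentration spike relaxes toward a flat radial profile.
c = [0.0] * 20
c[10] = 1.0
c = diffuse_radial(c, r_inner=1.0, dr=0.05, dt=1e-4, D=1.0, steps=2000)
```

    The stability number D*dt/dr² = 0.04 keeps the explicit scheme well inside its limit of 0.5; fitting D against measured profiles is the refinement step the abstract describes.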

  7. Integrating conceptual knowledge within and across representational modalities.

    PubMed

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-02-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1-4 is consistent with a deep integration hierarchy. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. A coastal three-dimensional water quality model of nitrogen in Jiaozhou Bay linking field experiments with modelling.

    PubMed

    Lu, Dongliang; Li, Keqiang; Liang, Shengkang; Lin, Guohong; Wang, Xiulin

    2017-01-15

    With anthropogenic pressures, the structure and quantity of nitrogen nutrients in the coastal ocean have changed, dramatically influencing water quality. Water quality modeling can contribute to the necessary scientific grounding of coastal management. In this paper, some of the dynamic functions and parameters of nitrogen were calibrated based on coastal field experiments covering the dynamic nitrogen processes in Jiaozhou Bay (JZB), including phytoplankton growth, respiration, and mortality; particulate nitrogen degradation; and dissolved organic nitrogen remineralization. The results of the field experiments and box model simulations showed good agreement (RSD=20%±2% and SI=0.77±0.04). A three-dimensional water quality model of nitrogen (3DWQMN) in JZB was improved and the dynamic parameters were updated according to the field experiments. The 3DWQMN was validated against observed data from 2012 to 2013, with good agreement (RSD=27±4%, SI=0.68±0.06, and K=0.48±0.04), which testifies to the model's credibility. Copyright © 2016 Elsevier Ltd. All rights reserved.
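    The RSD statistic quoted above is, under one common reading (an assumption here; the paper may define it differently), the root-mean-square model-observation deviation expressed relative to the mean observation:

```python
import math

def rsd_percent(model, obs):
    """Relative standard deviation between paired model values and
    observations: RMS residual divided by the mean observation.
    NOTE: definitions of RSD vary; this is one plausible reading,
    not necessarily the one used in the cited study."""
    if len(model) != len(obs) or not obs:
        raise ValueError("need equal-length, non-empty sequences")
    rms = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))
    return 100.0 * rms / (sum(obs) / len(obs))

# Invented nitrogen concentrations (mg/L), purely for illustration.
print(round(rsd_percent([9.0, 11.0, 10.0], [10.0, 10.0, 10.0]), 1))  # 8.2
```

    Lower values mean tighter model-data agreement, which is how the 20% and 27% figures above should be read.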

  9. ISMIP6 - initMIP: Greenland ice sheet model initialisation experiments

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Payne, Tony; Larour, Eric; Abe Ouchi, Ayako; Gregory, Jonathan; Lipscomb, William; Seroussi, Helene; Shepherd, Andrew; Edwards, Tamsin

    2016-04-01

    Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. This intercomparison exercise (initMIP) aims at comparing, evaluating and improving the initialisation techniques used in the ice sheet modelling community and at estimating the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experiments are conceived for the large-scale Greenland ice sheet and are designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The latter experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss first results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  10. Hollow-Fiber Cartridges: Model Systems for Virus Removal from Blood

    NASA Astrophysics Data System (ADS)

    Jacobitz, Frank; Menon, Jeevan

    2005-11-01

    Aethlon Medical is developing a hollow-fiber hemodialysis device designed to remove viruses and toxins from blood. Possible target viruses include HIV and pox-viruses. The filter could reduce virus and viral toxin concentrations in the patient's blood, delaying illness so that the patient's immune system can fight off the virus. In order to optimize the design of such a filter, the fluid mechanics of the device is both modeled analytically and investigated experimentally. The flow configuration of the proposed device is that of Starling flow. Polysulfone hollow-fiber dialysis cartridges were used. The cartridges are charged with water as a model fluid for blood, and fluorescent latex beads are used in the experiments as a model for viruses. In the experiments, properties of the flow through the cartridge are determined through pressure and volume flow rate measurements of water. The removal of latex beads, which are captured in the porous walls of the fibers, was measured spectrophotometrically. Coefficients derived from these experiments are used in the analytical model of the flow, and removal predictions from the model are compared to those obtained from the experiments.

  11. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while remaining flexible, accurate, and computationally feasible. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
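    The essence of a statistical surrogate, before any Bayesian spline machinery, is a cheap approximation trained on a handful of expensive runs. A deliberately minimal piecewise-linear stand-in (the "expensive simulator" below is a fake placeholder function, and this is not the adaptive-spline method of the abstract):

```python
def build_surrogate(xs, ys):
    """Return a piecewise-linear surrogate of (xs, ys) training
    samples. xs must be strictly increasing; queries outside the
    training range are clamped to the nearest endpoint."""
    def surrogate(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return (1 - t) * ys[i] + t * ys[i + 1]
    return surrogate

def expensive_simulator(x):
    # Stand-in for a dispersion-code run that would take hours.
    return x * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]          # five "expensive" runs
ys = [expensive_simulator(x) for x in xs]
cheap = build_surrogate(xs, ys)
print(cheap(1.0), cheap(0.25))  # exact at a training point, interpolated between
```

    Once such a cheap stand-in reproduces the parent model well, thousands of sensitivity or calibration queries cost essentially nothing; the spline methods of the abstract are a far more flexible version of this idea.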

  12. Design of model experiments for melt flow and solidification in a square container under time-dependent magnetic fields

    NASA Astrophysics Data System (ADS)

    Meier, D.; Lukin, G.; Thieme, N.; Bönisch, P.; Dadzis, K.; Büttner, L.; Pätzold, O.; Czarske, J.; Stelter, M.

    2017-03-01

    This paper describes novel equipment for model experiments designed for detailed studies on electromagnetically driven flows as well as solidification and melting processes with low-melting metals in a square-based container. Such model experiments are relevant for a validation of numerical flow simulation, in particular in the field of directional solidification of multi-crystalline photovoltaic silicon ingots. The equipment includes two square-shaped electromagnetic coils and a melt container with a base of 220×220 mm² and thermostat-controlled heat exchangers at top and bottom. A system for dual-plane, spatially and time-resolved flow measurements as well as for in-situ tracking of the solid-liquid interface is developed on the basis of ultrasound Doppler velocimetry. The parameters of the model experiment are chosen to meet the scaling laws for a transfer of experimental results to real silicon growth processes. The eutectic GaInSn alloy and elemental gallium, with melting points of 10.5 °C and 29.8 °C, respectively, are used as model substances. Results of experiments for testing the equipment are presented and discussed.

  13. Beyond standard model searches in the MiniBooNE experiment

    DOE PAGES

    Katori, Teppei; Conrad, Janet M.

    2014-08-05

    The MiniBooNE experiment has contributed substantially to beyond standard model searches in the neutrino sector. The experiment was originally designed to test the Δm² ~ 1 eV² region of the sterile neutrino hypothesis by observing νe (ν̄e) charged current quasielastic signals from a νμ (ν̄μ) beam. MiniBooNE observed excesses of νe and ν̄e candidate events in neutrino and antineutrino mode, respectively. To date, these excesses have not been explained within the neutrino standard model (νSM), the standard model extended for three massive neutrinos. Confirmation is required by future experiments such as MicroBooNE. MiniBooNE also provided an opportunity for precision studies of Lorentz violation. The results set strict limits for the first time on several parameters of the standard-model extension, the generic formalism for considering Lorentz violation. Most recently, an extension to MiniBooNE running, with a beam tuned in beam-dump mode, is being performed to search for dark sector particles. This review describes these studies, demonstrating that short baseline neutrino experiments are rich environments in new physics searches.

  14. Simulation of Containment Atmosphere Mixing and Stratification Experiment in the ThAI Facility with a CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babic, Miroslav; Kljenak, Ivo; Mavko, Borut

    2006-07-01

    The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the proposed work was to assess the capabilities of the CFD code to reproduce the atmosphere structure with a three-dimensional model, coupled with condensation models proposed by the authors. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also included. Calculated time-dependent variables together with temperature and volume fraction distributions at the end of different experiment phases are compared to experimental results. (authors)

  15. Cosmic-Ray Background Flux Model based on a Gamma-Ray Large-Area Space Telescope Balloon Flight Engineering Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizuno, T

    2004-09-03

    Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
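    The "analytic functions with parameterized modulation" described above amount, in the simplest caricature, to a power-law spectrum with a geomagnetic cutoff. The spectral index, normalization, and cutoff below are invented for illustration and are not the model's fitted values (real geomagnetic cutoffs are also gradual, not sharp):

```python
def proton_flux(E_GeV, norm=1.0, index=2.7, cutoff_GeV=10.0):
    """Toy primary-proton spectrum: a falling power law above a sharp
    geomagnetic cutoff rigidity, zero below it."""
    if E_GeV <= cutoff_GeV:
        return 0.0
    return norm * E_GeV ** (-index)

print(proton_flux(5.0))                        # 0.0: below the cutoff
print(proton_flux(20.0) > proton_flux(40.0))   # True: falling spectrum
```

    In the full model the normalization and cutoff become functions of solar activity and geomagnetic position, which is exactly what "parameterized modulation" buys.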

  16. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model-Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  17. Towards an Integrated Model of the NIC Layered Implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O S; Callahan, D A; Cerjan, C J

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.

  18. Study of the influence of the parameters of an experiment on the simulation of pole figures of polycrystalline materials using electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.

    2016-05-15

    A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the χ² homogeneity criterion, an estimate of the pole figure maximum and its coordinate, the deviation of the pole figures of the model experiment from the sample in the space of L1 measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination is not stable with respect to the measurement parameters.
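
    The χ² homogeneity comparison mentioned above can be sketched for two binned (discrete) pole figures. This is a minimal illustration, not the authors' actual procedure: the bin layout and counts are invented, and the per-bin form (a-b)²/(a+b) is the standard two-sample statistic for histograms with comparable totals.

```python
# Hypothetical sketch of a chi-squared homogeneity comparison between two
# binned pole figures.  Bin counts below are illustrative assumptions.

def chi2_homogeneity(counts_a, counts_b):
    """Two-sample chi-squared statistic for binned pole-figure intensities.

    Each bin contributes (a - b)^2 / (a + b); bin pairs that are both
    empty are skipped.  Larger values indicate less homogeneous figures.
    """
    if len(counts_a) != len(counts_b):
        raise ValueError("pole figures must share the same binning")
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        if a + b > 0:
            stat += (a - b) ** 2 / (a + b)
    return stat

# Identical figures give a statistic of zero; diverging ones grow.
sample = [12, 30, 45, 30, 12]
model_fig = [10, 28, 50, 29, 11]
print(chi2_homogeneity(sample, sample))  # 0.0
print(round(chi2_homogeneity(sample, model_fig), 3))
```

    In the paper's setting the same statistic would be evaluated on the full two-dimensional pole-figure grid rather than a toy one-dimensional binning.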

  19. Modeling the Nab Experiment Electronics in SPICE

    NASA Astrophysics Data System (ADS)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron decay coefficients a, the electron-neutrino correlation, and b, the Fierz interference term, to precisely test the Standard Model and to probe for Beyond the Standard Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via a time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions for each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solutions, and empirically determined data.
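
    The simulation-versus-analytic comparison described above can be illustrated in miniature. This is not the Nab collaboration's SPICE netlist: it is a hedged Python analog using a single-pole RC low-pass (a building block of shaping amplifiers), with made-up component values, comparing the closed-form step response against a simple numerical integration.

```python
import math

# Illustrative analog of comparing a circuit simulation with its analytic
# solution: step response of a single-pole RC low-pass.  R, C and the
# integration step are assumed values, not parameters from the experiment.

R, C = 1e3, 1e-9           # 1 kOhm, 1 nF  ->  tau = 1 microsecond
tau = R * C
dt = tau / 1000.0          # integration step, small compared with tau

def analytic(t):
    """Closed-form step response of the RC low-pass: 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

def simulate(t_end):
    """Forward-Euler integration of C*dv/dt = (v_in - v)/R with v_in = 1."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (1.0 - v) / tau
        t += dt
    return v

t_end = 3 * tau
print(abs(simulate(t_end) - analytic(t_end)) < 1e-3)  # True: close agreement
```

    A SPICE deck plays the role of `simulate` here; the point is only that each stage with a known transfer function can be checked against its closed form before the full chain is assembled.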

  20. Three-dimensional computer model for the atmospheric general circulation experiment

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.

    1984-01-01

    An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.

  1. Marine Radioactivity Studies in the Suez Canal, Part II: Field Experiments and a Modelling Study of Dispersion

    NASA Astrophysics Data System (ADS)

    Abril, J. M.; Abdel-Aal, M. M.; Al-Gamal, S. A.; Abdel-Hay, F. A.; Zahar, H. M.

    2000-04-01

    In this paper we take advantage of the two field tracing experiments carried out under the IAEA project EGY/07/002 to develop a modelling study on the dispersion of radioactive pollution in the Suez Canal. The experiments were accomplished by using rhodamine B as a tracer, and water samples were measured by luminescence spectrometry. The presence of natural luminescent particles in the canal waters limited the use of some field data. During the experiments, water levels, velocities, wind and other physical parameters were recorded to supply appropriate information for the modelling work. From this data set, the hydrodynamics of the studied area has been reasonably described. We apply 1-D Gaussian and 2-D modelling approaches to predict the position and the spatial shape of the plume. The use of different formulations for the dispersion coefficients is studied. These dispersion coefficients are then applied in a 2-D hydrodynamic and dispersion model for the Bitter Lake to investigate different scenarios of accidental discharges.

  2. The forgotten artist: Why to consider intentions and interaction in a model of aesthetic experience. Comment on "Move me, astonish me... delight my eyes and brain: The Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP) and corresponding affective, evaluative, and neurophysiological correlates" by Matthew Pelowski et al.

    NASA Astrophysics Data System (ADS)

    Brattico, Elvira; Brattico, Pauli; Vuust, Peter

    2017-07-01

    In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to "specific correlates of the brain" (a similar attempt was previously made for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.

  3. Geotechnical centrifuge use at University of Cambridge Geotechnical Centre, August-September 1991

    NASA Astrophysics Data System (ADS)

    Gilbert, Paul A.

    1992-01-01

    A geotechnical centrifuge applies elevated acceleration to small-scale soil models to simulate the body forces and stress levels characteristic of full-size soil structures. Since the constitutive behavior of soil is stress-level dependent, the centrifuge offers considerable advantage in studying soil structures using models. Several experiments were observed and described in relative detail, including experiments in soil dynamics and liquefaction, an experiment investigating leaning towers on soft foundations, and an experiment investigating the migration of hot pollutants through soils.
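
    The stress-matching principle behind centrifuge modelling can be sketched numerically: overburden stress at depth h is σ = ρ·g·h, so a 1/N-scale model spun at N times Earth gravity reproduces full-scale stresses at homologous points. The soil density and depths below are assumed, illustrative values.

```python
# Minimal sketch of centrifuge scaling: sigma = rho * g * h, so a 1/N-scale
# model at N g matches prototype stresses.  Density and geometry are
# illustrative assumptions, not values from the Cambridge experiments.

G = 9.81        # m/s^2, Earth gravity
RHO = 1800.0    # kg/m^3, a typical bulk soil density (assumed)

def vertical_stress(rho, g, depth):
    """Overburden stress (Pa) at a given depth in a uniform soil column."""
    return rho * g * depth

N = 50                                             # model scale factor
prototype = vertical_stress(RHO, G, 10.0)          # 10 m depth at 1 g
model = vertical_stress(RHO, N * G, 10.0 / N)      # 0.2 m depth at 50 g

print(abs(prototype - model) < 1e-6)  # True: stresses match at homologous points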

  4. Experiments And Model Development For The Investigation Of Sooting And Radiation Effects In Microgravity Droplet Combustion

    NASA Technical Reports Server (NTRS)

    Yozgatligil, Ahmet; Choi, Mun Young; Dryer, Frederick L.; Kazakov, Andrei; Dobashi, Ritsu

    2003-01-01

    This study involves flight experiments (for droplets between 1.5 and 5 mm) and supportive ground-based experiments, with concurrent numerical model development and validation. The experiments involve two fuels: n-heptane and ethanol. The diagnostic measurements include light extinction for soot volume fraction, two-wavelength pyrometry and thin-filament pyrometry for temperature, spectral detection for OH chemiluminescence, broadband radiometry for flame emission, and thermophoretic sampling with subsequent transmission electron microscopy for soot aerosol property calculations.

  5. The MODE family of facility class experiments

    NASA Technical Reports Server (NTRS)

    Miller, David W.

    1992-01-01

    The objective of the Middeck 0-gravity Dynamics Experiment (MODE) is to characterize fundamental 0-g slosh behavior and obtain quantitative data on slosh force and spacecraft response for correlation of the analytical model. The topics are presented in viewgraph form and include the following: space results; STA objectives, requirements, and approach; comparison of ground to orbital data for the baseline configuration; conclusions of orbital testing; flight experiment resources; Middeck Active Control Experiment (MACE); MACE 1-G and 0-G models; and future efforts.

  6. Applying modeling Results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies which provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  7. Field experimental data for crop modeling of wheat growth response to nitrogen fertilizer, elevated CO2, water stress, and high temperature

    USDA-ARS?s Scientific Manuscript database

    Field experimental data of five experiments covering a wide range of growing conditions are assembled for wheat growth and cropping systems modeling. The data include (i) an experiment on interactive effects of elevated CO2 by water a...

  8. Prenatal Experiences of Containment in the Light of Bion's Model of Container/Contained

    ERIC Educational Resources Information Center

    Maiello, Suzanne

    2012-01-01

    This paper explores the idea of possible proto-experiences of the prenatal child in the context of Bion's model of container/contained. The physical configuration of the embryo/foetus contained in the maternal uterus represents the starting point for an enquiry into the unborn child's possible experiences of its state of being contained in a…

  9. Online Community-Based Learning as the Practice of Freedom: The Online Capstone Experience at Portland State University

    ERIC Educational Resources Information Center

    Arthur, Deborah Smith; Newton-Calvert, Zapoura

    2015-01-01

    Given the design of Portland State University's (PSU) undergraduate curriculum culminating in a capstone experience, the dramatic growth in online courses and online enrollments required a re-thinking of the capstone model to ensure all students could participate in this effective learning model and have a powerful learning experience. In recent…

  10. Student Attitudes towards and Use of ICT in Course Study, Work and Social Activity: A Technology Acceptance Model Approach

    ERIC Educational Resources Information Center

    Edmunds, Rob; Thorpe, Mary; Conole, Grainne

    2012-01-01

    The increasing use of information and communication technology (ICT) in higher education has been explored largely in relation to student experience of coursework and university life. Students' lives and experience beyond the university have been largely unexplored. Research into student experience of ICT used a validated model--the technology…

  11. An Initial Model for Generative Design Research: Bringing Together Generative Focus Group (GFG) and Experience Reflection Modelling (ERM)

    ERIC Educational Resources Information Center

    Bakirlioglu, Yekta; Ogur, Dilruba; Dogan, Cagla; Turhan, Senem

    2016-01-01

    Understanding people's experiences and the context of use of a product at the earliest stages of the design process has in the last decade become an important aspect of both the design profession and design education. Generative design research helps designers understand user experiences, while also throwing light on their current needs,…

  12. R-X Modeling Figures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goda, Joetta Marie; Miller, Thomas; Grogan, Brandon

    2016-10-26

    This document contains figures that will be included in an ORNL final report that details computational efforts to model an irradiation experiment performed on the Godiva IV critical assembly. This experiment was a collaboration between LANL and ORNL.

  13. Indoor Astronomy: A Model Eclipsing Binary Star System.

    ERIC Educational Resources Information Center

    Bloomer, Raymond H., Jr.

    1979-01-01

    Describes a two-hour physics laboratory experiment modeling the phenomena of eclipsing binary stars developed by the Air Force Academy as part of a week-long laboratory-oriented experience for visiting high school students. (BT)

  14. The relationship between experiences of discrimination and mental health among lesbians and gay men: An examination of internalized homonegativity and rejection sensitivity as potential mechanisms.

    PubMed

    Feinstein, Brian A; Goldfried, Marvin R; Davila, Joanne

    2012-10-01

    The current study used path analysis to examine potential mechanisms through which experiences of discrimination influence depressive and social anxiety symptoms. The sample included 218 lesbians and 249 gay men (total N = 467) who participated in an online survey about minority stress and mental health. The proposed model included 2 potential mediators-internalized homonegativity and rejection sensitivity-as well as a culturally relevant antecedent to experiences of discrimination-childhood gender nonconformity. Results indicated that the data fit the model well, supporting the mediating roles of internalized homonegativity and rejection sensitivity in the associations between experiences of discrimination and symptoms of depression and social anxiety. Results also supported the role of childhood gender nonconformity as an antecedent to experiences of discrimination. Although there were not significant gender differences in the overall model fit, some of the associations within the model were significantly stronger for gay men than lesbians. These findings suggest potential mechanisms through which experiences of discrimination influence well-being among sexual minorities, which has important implications for research and clinical practice with these populations. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
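
    The mediation logic tested above (discrimination → mediator → symptoms) can be sketched with two ordinary least-squares regressions, where the indirect effect is the product of the X→M slope (a) and the M→Y slope controlling for X (b). The data below are synthetic with assumed true slopes; this is a toy illustration of the estimation idea, not the study's analysis, which used full path analysis with two mediators.

```python
import random

# Toy mediation sketch on synthetic data (assumed slopes, not study data):
# X (e.g. discrimination) -> M (e.g. rejection sensitivity) -> Y (symptoms).

random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                 # true a = 0.5
y = [0.4 * mi + 0.1 * xi + random.gauss(0, 1)                   # true b = 0.4
     for xi, mi in zip(x, m)]

def slope(u, v):
    """OLS slope of v on u (variables centered internally)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

a = slope(x, m)
# b: slope of y on m controlling for x, via residualization
# (Frisch-Waugh-Lovell): regress the x-residuals of y on those of m.
rm = [mi - slope(x, m) * xi for xi, mi in zip(x, m)]
ry = [yi - slope(x, y) * xi for xi, yi in zip(x, y)]
b = slope(rm, ry)

print(round(a * b, 2))  # close to the true indirect effect 0.5 * 0.4 = 0.2
```

    Path-analysis software estimates all such paths simultaneously and provides the fit statistics the abstract refers to; the product-of-slopes logic is the same.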

  15. Modeling of N2 and O optical emissions for ionosphere HF powerful heating experiments

    NASA Astrophysics Data System (ADS)

    Sergienko, T.; Gustavsson, B.

    Analyses of experiments on F-region ionosphere modification by powerful HF radio waves show that optical observations are very useful tools for diagnosing the interaction of the probing radio wave with the ionospheric plasma. Hitherto the emissions usually measured in heating experiments have been the 630.0 nm and 557.7 nm lines of atomic oxygen. Other emissions, for instance O 844.8 nm and N2 427.8 nm, have been measured episodically in only a few experiments, although the very rich optical spectrum of molecular nitrogen potentially carries important information about the ionospheric plasma in the heated region. This study addresses the modeling of optical emissions from O and from the N2 triplet states (the first positive, second positive, Vegard-Kaplan, infrared afterglow, and Wu-Benesch band systems) excited under the conditions of an ionosphere heating experiment. The auroral triplet-state population distribution model was modified for ionosphere heating conditions by using the different electron distribution functions suggested by Mishin et al. (2000, 2003) and Gustavsson et al. (2004, 2005). Modeling results are discussed from the point of view of the efficiency of measurements of the N2 emissions in future experiments.

  16. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.

    2018-04-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap but differences arising from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  17. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE PAGES

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; ...

    2018-04-19

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap but differences arising from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  18. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap but differences arising from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  19. Review and assessment of turbulence models for hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roy, Christopher J.; Blottner, Frederick G.

    2006-10-01

    Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. 
A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide-range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.
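
    One concrete quantity the review flags, the near-wall y+ mesh spacing, lends itself to a back-of-envelope estimate of the first wall-normal cell height for a target y+, using a flat-plate skin-friction correlation. The correlation choice (a Schlichting-type local fit) and the freestream conditions below are illustrative assumptions, not values from the reviewed experiments.

```python
import math

# Estimate the wall-normal first-cell height needed to hit a target y+,
# the near-wall spacing whose sensitivity the review says validation
# studies should report.  Correlation and freestream values are assumed.

def first_cell_height(y_plus, rho, u_inf, mu, x):
    """Estimate wall spacing y (m) giving the requested y+ at station x."""
    re_x = rho * u_inf * x / mu
    cf = (2.0 * math.log10(re_x) - 0.65) ** -2.3   # local skin-friction fit
    tau_w = 0.5 * cf * rho * u_inf ** 2            # wall shear stress
    u_tau = math.sqrt(tau_w / rho)                 # friction velocity
    return y_plus * mu / (rho * u_tau)             # invert y+ = rho*u_tau*y/mu

# Example: air-like high-speed freestream, y+ = 1 at x = 1 m (made-up values).
y1 = first_cell_height(1.0, rho=0.5, u_inf=1000.0, mu=1.8e-5, x=1.0)
print(0.0 < y1 < 1e-5)  # True: a micron-scale first cell is required
```

    The micron-scale result illustrates why integrate-to-the-wall turbulence models impose demanding meshes, and why reporting the achieved y+ matters when comparing heat-flux predictions.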

  20. Primary process and peer consultation: an experiential model to work through countertransference.

    PubMed

    Markus, Howard E; Cross, Wendi F; Halewski, Paula G; Quallo, Hope; Smith, Sherrie; Sullivan, Marilyn; Sullivan, Peter; Tantillo, Mary

    2003-01-01

    Various models exist for peer supervision and consultation in group therapy. This article documents the authors' experience using an experiential group consultation of group therapy model that relies on primary process to overcome countertransference dilemmas. A review of group therapy supervision and consultation models is followed by vignettes from the authors' experience. Discussion of the vignettes highlights critical issues in group consultation and expounds upon the strengths and challenges of using an experiential model.

  1. Droplet combustion experiment drop tower tests using models of the space flight apparatus

    NASA Technical Reports Server (NTRS)

    Haggard, J. B.; Brace, M. H.; Kropp, J. L.; Dryer, F. L.

    1989-01-01

    The Droplet Combustion Experiment (DCE) is an experiment that is being developed to ultimately operate in the shuttle environment (middeck or Spacelab). The current experiment implementation is for use in the 2.2 or 5 sec drop towers at NASA Lewis Research Center. Initial results were reported in the 1986 symposium of this meeting. Since then, significant progress was made in drop tower instrumentation. The 2.2 sec drop tower apparatus, a conceptual-level model, was improved to give more reproducible performance as well as to operate over a wider range of test conditions. Some very low velocity deployments of ignited droplets were observed. An engineering model was built at TRW. This model will be used in 5 sec drop tower operation to obtain science data. In addition, it was built using the flight design except for changes to accommodate the drop tower requirements. The mechanical and electrical assemblies have the same level of complexity as they will have in flight. The model was tested for functional operation and then delivered to NASA Lewis. The model was then integrated into the 5 sec drop tower. The model is currently undergoing initial operational tests prior to starting the science tests.

  2. Energy model for rumor propagation on social networks

    NASA Astrophysics Data System (ADS)

    Han, Shuo; Zhuang, Fuzhen; He, Qing; Shi, Zhongzhi; Ao, Xiang

    2014-01-01

    With the development of social networks, the impact of rumor propagation on human lives is more and more significant. Due to the change of propagation mode, traditional rumor propagation models designed for word-of-mouth process may not be suitable for describing the rumor spreading on social networks. To overcome this shortcoming, we carefully analyze the mechanisms of rumor propagation and the topological properties of large-scale social networks, then propose a novel model based on the physical theory. In this model, heat energy calculation formula and Metropolis rule are introduced to formalize this problem and the amount of heat energy is used to measure a rumor’s impact on a network. Finally, we conduct track experiments to show the evolution of rumor propagation, make comparison experiments to contrast the proposed model with the traditional models, and perform simulation experiments to study the dynamics of rumor spreading. The experiments show that (1) the rumor propagation simulated by our model goes through three stages: rapid growth, fluctuant persistence and slow decline; (2) individuals could spread a rumor repeatedly, which leads to the rumor’s resurgence; (3) rumor propagation is greatly influenced by a rumor’s attraction, the initial rumormonger and the sending probability.
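
    The Metropolis-style acceptance step that the model above borrows from statistical physics can be sketched in a few lines. The network (a ring), the unit "heat energy" cost per adoption, and the temperature are all illustrative assumptions, not the authors' formulation.

```python
import math
import random

# Minimal sketch of a Metropolis acceptance rule driving rumor spread:
# a node passes the rumor to a neighbour with probability min(1, exp(-dE/T)).
# Network, energies and temperature are assumed, illustrative choices.

random.seed(1)

def metropolis_accept(delta_e, temperature):
    """Accept a spreading event: always if it lowers energy, otherwise
    with Boltzmann probability exp(-delta_e / temperature)."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

def spread(n_nodes=200, steps=5000, temperature=1.0):
    """Rumor spreading on a ring; each new adoption is assumed to cost
    one unit of 'heat energy' (delta_e = +1)."""
    informed = {0}                                       # initial rumormonger
    for _ in range(steps):
        src = random.choice(tuple(informed))
        dst = (src + random.choice((-1, 1))) % n_nodes   # a ring neighbour
        if dst not in informed and metropolis_accept(1.0, temperature):
            informed.add(dst)
    return len(informed)

reached = spread()
print(0 < reached <= 200)  # True: the rumor reaches part of the network
```

    Raising the temperature raises the acceptance probability and hence the spreading rate, which is one way such a model can reproduce the rapid-growth, persistence, and decline stages observed in the experiments.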

  3. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
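
    The document structure described above can be made concrete with a minimal, hand-rolled SED-ML skeleton built using only the Python standard library. Element and attribute names below follow the Level 1 Version 2 schema (root <sedML>, a model list, a simulation, and a task tying them together) as published; the model source file name is a placeholder, and the spec, not this sketch, is authoritative (a real document also needs data generators and outputs).

```python
import xml.etree.ElementTree as ET

# Hedged sketch of a minimal SED-ML L1V2 skeleton; consult the SED-ML
# specification for the complete, authoritative layout.

NS = "http://sed-ml.org/sed-ml/level1/version2"

root = ET.Element("sedML", xmlns=NS, level="1", version="2")

models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="model1",
              language="urn:sedml:language:sbml",
              source="example.xml")            # placeholder model file

sims = ET.SubElement(root, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="sim1",
              initialTime="0", outputStartTime="0",
              outputEndTime="10", numberOfPoints="100")

tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="task1",
              modelReference="model1", simulationReference="sim1")

doc = ET.tostring(root, encoding="unicode")
print(doc.startswith("<sedML"))  # True: a well-formed XML skeleton
```

    Because SED-ML only references the model file (here via the `source` attribute), the same experiment description can be reused with any XML-based model format such as SBML or CellML, as the abstract notes.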

  5. Risk-Based Fire Safety Experiment Definition for Manned Spacecraft

    NASA Technical Reports Server (NTRS)

    Apostolakis, G. E.; Ho, V. S.; Marcus, E.; Perry, A. T.; Thompson, S. L.

    1989-01-01

    Risk methodology is used to define experiments to be conducted in space which will help to construct and test the models required for accident sequence identification. The development of accident scenarios is based on the realization that whether damage occurs depends on the time competition of two processes: ignition and creation of an adverse environment, and detection and suppression activities. If the fire grows and causes damage faster than it is detected and suppressed, an accident occurs. The proposed integrated experiments will provide information on individual models that apply to each of the above processes, as well as on previously unidentified interactions and processes, if any. Initially, models that are used in terrestrial fire risk assessments are considered. These include heat and smoke release models, detection and suppression models, as well as damage models. In cases where the absence of gravity substantially invalidates a model, alternate models will be developed. Models that depend on buoyancy effects, such as multizone compartment fire models, are included in these cases. The experiments will be performed in a variety of geometries simulating habitable areas, racks, and other spaces. These simulations will necessitate theoretical studies of scaling effects. Sensitivity studies will also be carried out, including the effects of varying oxygen concentrations, pressures, fuel orientation and geometry, and air flow rates. The experimental apparatus described herein includes three major modules: the combustion, fluids, and command and power modules.

  6. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  7. Kinetic modeling of electro-Fenton reaction in aqueous solution.

    PubMed

    Liu, H; Li, X Z; Leng, Y J; Wang, C

    2007-03-01

    To better describe the electro-Fenton (E-Fenton) reaction in aqueous solution, a new kinetic model was established according to the generally accepted mechanism of the E-Fenton reaction. The model gives special consideration to the rates of hydrogen peroxide (H2O2) generation and consumption in the reaction solution. The model also embraces three key operating factors affecting organic degradation in the E-Fenton reaction: current density, dissolved oxygen concentration, and initial ferrous ion concentration. This analytical model was then validated by experiments on phenol degradation in aqueous solution. The experiments demonstrated that H2O2 gradually built up with time and eventually approached its maximum value in the reaction solution. The experiments also showed that phenol was degraded at a slow rate in the early stage of the reaction, a faster rate during the middle stage, and a slow rate again in the final stage. It was confirmed in all experiments that the curves of phenol degradation (concentration vs. time) took an inverted "S" shape. The experimental data were fitted using both the normal first-order model and the new model. The goodness of the fits demonstrated that the new model fit the experimental data appreciably better than the first-order model, indicating that the analytical model better describes the kinetics of the E-Fenton reaction both mathematically and chemically.
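
    The qualitative behavior described above (H2O2 building up to a plateau while phenol decays along an inverted-"S" curve) can be sketched with a toy two-equation kinetic model. This is not the paper's model; the rate constants are illustrative placeholders.

```python
def simulate_e_fenton(k_gen=1.0, k_cons=0.5, k_deg=0.8,
                      c0=1.0, dt=0.01, t_end=20.0):
    """Toy kinetic sketch: H2O2 is generated at a constant rate and
    consumed first-order, so it approaches the plateau k_gen / k_cons;
    phenol is degraded at a rate proportional to both its own
    concentration and the current H2O2 level (forward-Euler integration).
    All rate constants are illustrative, not fitted values."""
    h2o2, phenol = 0.0, c0
    times, h2o2_curve, phenol_curve = [], [], []
    t = 0.0
    while t <= t_end:
        times.append(t)
        h2o2_curve.append(h2o2)
        phenol_curve.append(phenol)
        h2o2 += dt * (k_gen - k_cons * h2o2)       # generation - consumption
        phenol += dt * (-k_deg * h2o2 * phenol)    # degradation by H2O2
        t += dt
    return times, h2o2_curve, phenol_curve
```

    Because H2O2 starts at zero, degradation is slow at first, fastest in the middle once H2O2 has built up, and slow again as phenol depletes, reproducing the slow-fast-slow pattern the abstract reports.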

  8. Livingstone Model-Based Diagnosis of Earth Observing One Infusion Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.

    2004-01-01

    The Earth Observing One satellite, launched in November 2000, is an active earth science observation platform. This paper reports on the progress of an infusion experiment in which the Livingstone 2 Model-Based Diagnostic engine is deployed on Earth Observing One, demonstrating the capability to monitor the nominal operation of the spacecraft under command of an on-board planner, and demonstrating on-board diagnosis of spacecraft failures. Design and development of the experiment, specification and validation of diagnostic scenarios, characterization of performance results, and benefits of the model-based approach are presented.

  9. Experience of Time Passage: Phenomenology, Psychophysics, and Biophysical Modelling

    NASA Astrophysics Data System (ADS)

    Wackermann, Jiří

    2005-10-01

    The experience of time's passing appears, from the 1st-person perspective, to be a primordial subjective experience, seemingly inaccessible to 3rd-person accounts of time perception (psychophysics, cognitive psychology). In our analysis of the 'dual klepsydra' model of reproduction of temporal durations, time passage occurs as a cognitive construct, based upon a more elementary ('proto-cognitive') function of the psychophysical organism. This conclusion contradicts the common concepts of 'subjective' or 'psychological' time as readings of an 'internal clock'. Our study shows how phenomenological, experimental and modelling approaches can be fruitfully combined.

  10. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in the article. It is proposed to divide the model's input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration, based on a designed experiment over the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand. The results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
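
    A designed experiment over the iteratively calibrated parameters can be sketched as a two-level full factorial: every combination of each factor's low and high setting. The factor names and ranges below are hypothetical stand-ins for DEM contact parameters, not values from the article.

```python
import itertools

def two_level_factorial(factors):
    """Generate a two-level full-factorial design: one run for every
    combination of each factor's (low, high) setting."""
    names = list(factors)
    runs = []
    for combo in itertools.product(*(factors[n] for n in names)):
        runs.append(dict(zip(names, combo)))
    return runs

# Hypothetical calibration ranges for three DEM contact parameters.
design = two_level_factorial({
    "friction": (0.2, 0.6),
    "restitution": (0.1, 0.5),
    "rolling_resistance": (0.01, 0.1),
})
```

    With k factors this yields 2**k runs; fitting an approximating function (e.g. a low-order polynomial response surface) to the measured responses over these runs is the usual next step in such calibration workflows.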

  11. Statistical models for causation: what inferential leverage do they provide?

    PubMed

    Freedman, David A

    2006-12-01

    Experiments offer more reliable evidence on causation than observational studies, which is not to gainsay the contribution to knowledge from observation. Experiments should be analyzed as experiments, not as observational studies. A simple comparison of rates might be just the right tool, with little value added by "sophisticated" models. This article discusses current models for causation, as applied to experimental and observational data. The intention-to-treat principle and the effect of treatment on the treated will also be discussed. Flaws in per-protocol and treatment-received estimates will be demonstrated.

  12. Rotating-fluid experiments with an atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Geisler, J. E.; Pitcher, E. J.; Malone, R. C.

    1983-01-01

    In order to determine features of rotating fluid flow that are dependent on the geometry, rotating annulus-type experiments are carried out with a numerical model in spherical coordinates. Rather than constructing and testing a model expressly for this purpose, it is found expedient to modify an existing general circulation model of the atmosphere by removing the model physics and replacing the lower boundary with a uniform surface. A regime diagram derived from these model experiments is presented; its major features are interpreted and contrasted with the major features of rotating annulus regime diagrams. Within the wave regime, a narrow region is found where one or two zonal wave numbers are dominant. The results reveal no upper symmetric regime; wave activity at low rotation rates is thought to be maintained by barotropic rather than baroclinic processes.

  13. Identifiability of sorption parameters in stirred flow-through reactor experiments and their identification with a Bayesian approach.

    PubMed

    Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y

    2016-10-01

    This paper addresses the methodological conditions, particularly experimental design and statistical inference, that ensure the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous-flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as a nonequilibrium parameterization embedding the Kd approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because Kd,1 and k- were diagnosed as collinear. For pairs of CSTR experiments, Bayesian inference allowed selection of the correct models of sorption and error among the sorption alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU MCSim and automated their configuration and post-processing. Experimental designs consisting of varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to the maximum likelihood method, but they avoided convergence problems, the marginal likelihood allowed all models to be compared, and credible intervals directly gave the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.
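
    For intuition, the equilibrium (Kd-only) limit of a CSTR breakthrough curve has a simple closed form; the paper's EK model adds a kinetic sorption compartment that is not reproduced here. All symbols below are generic (flow rate Q, reactor volume V, soil mass m, distribution coefficient Kd), not the paper's fitted values.

```python
import math

def cstr_breakthrough(t, q, v, kd, soil_mass, c_in=1.0):
    """Outlet concentration of a well-mixed flow-through reactor with
    linear, instantaneous (Kd-type) sorption. Sorption retards the
    response by R = 1 + (m/V)*Kd, giving
        C(t) = C_in * (1 - exp(-Q*t / (V*R))).
    Equilibrium limit only; a kinetic (EK) model adds a second compartment."""
    r = 1.0 + (soil_mass / v) * kd
    return c_in * (1.0 - math.exp(-q * t / (v * r)))
```

    The flow rate Q and the retardation factor R enter only through the single time constant V*R/Q, which illustrates why one breakthrough curve cannot separate the sorption parameters, while experiments at different flow rates probe different effective time scales and can.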

  14. Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.

    PubMed

    Chao, Fa-An; Byrd, R Andrew

    2017-04-01

    The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, data analysis for a more complex exchange model generally requires computationally intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases, and, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG relaxation dispersion experiment (e.g. N, C', Cα, Hα). The approach forms a foundation for building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain. Published by Elsevier Inc.
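
    The paper's geometric-approximation method is not reproduced here, but the kind of analytical solution it generalizes beyond can be sketched. The Luz-Meiboom expression below is a standard closed form for 2-site CPMG dispersion in the fast-exchange limit; `r2_0`, `phi_ex`, and `kex` are generic fit parameters, not values from the paper.

```python
import math

def r2eff_fast_exchange(nu_cpmg, r2_0, phi_ex, kex):
    """Luz-Meiboom expression for the effective transverse relaxation
    rate in a CPMG experiment, valid in the fast two-site exchange limit
    (kex >> delta_omega), with phi_ex = p_a * p_b * delta_omega**2:
        R2eff = R2_0 + (phi_ex/kex) * (1 - tanh(x)/x),  x = kex/(4*nu_cpmg)
    Outside this limit no simple closed form exists, which is the regime
    the geometric-approximation strategy targets."""
    x = kex / (4.0 * nu_cpmg)
    return r2_0 + (phi_ex / kex) * (1.0 - math.tanh(x) / x)
```

    The curve decays from a low-frequency plateau R2_0 + phi_ex/kex toward R2_0 as the pulsing rate nu_cpmg increases and refocusing outruns the exchange.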

  15. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling to better understand the effects of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  16. Transactional processes in the development of adult personality disorder symptoms.

    PubMed

    Carlson, Elizabeth A; Ruiz, Sarah K

    2016-08-01

    The development of adult personality disorder symptoms, including transactional processes of relationship representational and behavioral experience from infancy to early adolescence, was examined using longitudinal data from a risk sample (N = 162). Significant preliminary correlations were found between early caregiving experience and adult personality disorder symptoms and between representational and behavioral indices across time and adult symptomatology. Significant correlations were also found among diverse representational assessments (e.g., interview, drawing, and projective narrative) and between concurrent representational and observational measures of relationship functioning. Path models were analyzed to investigate the combined relations of caregiving experience in infancy; relationship representation and experience in early childhood, middle childhood, and early adolescence; and personality disorder symptoms in adulthood. The hypothesized model representing interactive contributions of representational and behavioral experience represented the data significantly better than competing models representing noninteractive contributions. Representational and behavioral indicators mediated the link between early caregiving quality and personality disorder symptoms. The findings extend previous studies of normative development and support an organizational developmental view that early relationship experiences contribute to socioemotional maladaptation as well as adaptation through the progressive transaction of mutually informing expectations and experience.

  17. The photon identification loophole in EPRB experiments: computer models with single-wing selection

    NASA Astrophysics Data System (ADS)

    De Raedt, Hans; Michielsen, Kristel; Hess, Karl

    2017-11-01

    Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other "post-selection" is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell's theorem which states that this is impossible. The failure of Bell's theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.

  18. Narrative Constructions for the Organization of Self Experience: Proof of Concept via Embodied Robotics

    PubMed Central

    Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F.

    2017-01-01

    It has been proposed that starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience—a situation model—and a mechanism that allows information to be extracted from sentences and mapped onto the situation model that has been derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation which allows the system to learn about the intentionality of mental states, and argument substitution which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world-interaction required for representing meaning in a situation model. In this context proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage-based effects: that is, that the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. Both of these are validated in the experimental literature. PMID:28861011

  19. Probing flavor models with 76Ge-based experiments on neutrinoless double-β decay

    NASA Astrophysics Data System (ADS)

    Agostini, Matteo; Merle, Alexander; Zuber, Kai

    2016-04-01

    The physics impact of a staged approach for double-β decay experiments based on 76Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger-scale 76Ge experiment. Intermediate stages of the experiments are conceived to perform quasi-background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to plan the experimental stages sensibly, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly adds value to the experimental efforts, while at the same time making it possible to achieve valuable physics results as early as possible.

  20. Experiments Using a Ground-Based Electrostatic Levitator and Numerical Modeling of Melt Convection for the Iron-Cobalt System in Support of Space Experiments

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; SanSoucie, Michael P.

    2017-08-01

    Materials research is being conducted using an electromagnetic levitator installed in the International Space Station. Various metallic alloys were tested to elucidate unknown links among structures, processes, and properties. To accomplish the mission of these space experiments, several ground-based activities have been carried out. This article presents some of our ground-based supporting experiments and numerical modeling efforts. Mass evaporation of Fe50Co50, one of the flight compositions, was predicted numerically and validated by tests using an electrostatic levitator (ESL). The density of various compositions within the Fe-Co system was measured with the ESL. These results serve as reference data for the space experiments. The convection inside an electromagnetically levitated droplet was also modeled to predict the flow state, shear rate, and convection velocity under various process parameters, which is essential information for designing and analyzing the space experiments of flight compositions influenced by convection.

  1. Convection Effects During Bulk Transparent Alloy Solidification in DECLIC-DSI and Phase-Field Simulations in Diffusive Conditions

    NASA Astrophysics Data System (ADS)

    Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.

    2017-08-01

    To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured in Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreement with the microgravity experiments.

  2. Evaluation of an imputed pitch velocity model of the auditory kappa effect.

    PubMed

    Henry, Molly J; McAuley, J Devin

    2009-04-01

    Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect. (c) 2009 APA, all rights reserved.

  3. Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1991-01-01

    Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can result from the form of the assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again compared with measurements. Initial comparisons are encouraging.

  4. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812

  5. Toward an Understanding of the Cognitive Aspects of Data Fusion

    DTIC Science & Technology

    1998-12-14

    Models Static & Temporal Models Conscious Valuation & Teleological Models Subconscious Valuation Models (Gut Feelings) Judgement than a cause as is...evidence of the pictures of other people, biases them to interpret those wavy lines as a man with glasses. That is, they subconsciously value the...rat, people often experience discomfort (confusion) or sometimes laugh . What they experience “out there” in the world is no longer in congruity with

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Blanco Martin, Laura; Mukhopadhyay, Sumit

    The modeling efforts in support of the field test planning conducted at LBNL leverage recent developments of tools for modeling coupled thermal-hydrological-mechanical-chemical (THMC) processes in salt and their effect on brine migration at high temperatures. This work includes development related to, and implementation of, essential capabilities, as well as testing the model against relevant information and published experimental data related to the fate and transport of water. These modeling capabilities will be suitable for assisting in the design of field experiments, especially those involving multiphase flow processes coupled with mechanical deformations at high temperature. In this report, we first examine previous generic repository modeling results, focusing on the first 20 years, to investigate the expected evolution of the different processes that could be monitored in a full-scale heater experiment. We then present new results from ongoing modeling of the Thermal Simulation for Drift Emplacement (TSDE) experiment, a heater experiment on the in-drift emplacement concept at the Asse Mine, Germany, and provide an update on the ongoing model developments for modeling brine migration. LBNL also supported field test planning activities via contributions to and technical review of framework documents and test plans, as well as participation in workshops associated with field test planning.

  7. The International Heat Stress Genotype Experiment for Modeling Wheat Response to Heat: Field Experiments and AgMIP-Wheat Multi-Model Simulations

    NASA Technical Reports Server (NTRS)

    Martre, Pierre; Reynolds, Matthew P.; Asseng, Senthold; Ewert, Frank; Alderman, Phillip D.; Cammarano, Davide; Maiorano, Andrea; Ruane, Alexander C.; Aggarwal, Pramod K.; Anothai, Jakarat; et al.

    2017-01-01

    The data set contains a portion of the International Heat Stress Genotype Experiment (IHSGE) data used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat crop models and quantify the impact of heat on global wheat yield productivity. It includes two spring wheat cultivars grown during two consecutive winter cropping cycles at hot, irrigated, and low latitude sites in Mexico (Ciudad Obregon and Tlaltizapan), Egypt (Aswan), India (Dharwar), the Sudan (Wad Medani), and Bangladesh (Dinajpur). Experiments in Mexico included normal (November-December) and late (January-March) sowing dates. Data include local daily weather data, soil characteristics and initial soil conditions, crop measurements (anthesis and maturity dates, anthesis and final total above ground biomass, final grain yields and yields components), and cultivar information. Simulations include both daily in-season and end-of-season results from 30 wheat models.

  8. Two ecological models of academic achievement among diverse students with and without disabilities in transition.

    PubMed

    Williams, Terrinieka T; McMahon, Susan D; Keys, Christopher B

    2014-01-01

    School experiences can have positive effects on student academic achievement, yet less is known about intermediary processes that contribute to these positive effects. We examined pathways between school experiences and academic achievement among 117 low-income urban students of color, many with disabilities, who transitioned to other schools following a school closure. Using structural equation modeling, we tested two ecological models that examined the relationships among self-reported school experiences, school support, academic self-efficacy, and school-reported academic achievement. The model in which the relationship between school experiences and academic achievement is mediated by both school support and academic self-efficacy, and that takes previous academic achievement into account, was an excellent fit with the data. The roles of contextual and individual factors as they relate to academic achievement, and the implications of these findings, are discussed.

  9. Spatial covert attention increases contrast sensitivity across the CSF: support for signal enhancement

    NASA Technical Reports Server (NTRS)

    Carrasco, M.; Penpeci-Talgar, C.; Eckstein, M.

    2000-01-01

    This study is the first to report the benefits of spatial covert attention on contrast sensitivity in a wide range of spatial frequencies when a target alone was presented in the absence of a local post-mask. We used a peripheral precue (a small circle indicating the target location) to explore the effects of covert spatial attention on contrast sensitivity as assessed by orientation discrimination (Experiments 1-4), detection (Experiments 2 and 3) and localization (Experiment 3) tasks. In all four experiments the target (a Gabor patch ranging in spatial frequency from 0.5 to 10 cpd) was presented alone in one of eight possible locations equidistant from fixation. Contrast sensitivity was consistently higher for peripherally- than for neutrally-cued trials, even though we eliminated variables (distracters, global masks, local masks, and location uncertainty) that are known to contribute to an external noise reduction explanation of attention. When observers were presented with vertical and horizontal Gabor patches an external noise reduction signal detection model accounted for the cueing benefit in a discrimination task (Experiment 1). However, such a model could not account for this benefit when location uncertainty was reduced, either by: (a) Increasing overall performance level (Experiment 2); (b) increasing stimulus contrast to enable fine discriminations of slightly tilted suprathreshold stimuli (Experiment 3); and (c) presenting a local post-mask (Experiment 4). Given that attentional benefits occurred under conditions that exclude all variables predicted by the external noise reduction model, these results support the signal enhancement model of attention.

  10. The FIFE Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Box, D.; Boyd, J.; Di Benedetto, V.

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites alongmore » with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.« less

  11. Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.

    PubMed

    Rollin, Michael D H; Rollin, Bernard E

    2014-04-01

    Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human.

  12. Effects of Repetition Priming on Recognition Memory: Testing a Perceptual Fluency-Disfluency Model

    ERIC Educational Resources Information Center

    Huber, David E.; Clark, Tedra F.; Curran, Tim; Winkielman, Piotr

    2008-01-01

    Five experiments explored the effects of immediate repetition priming on episodic recognition (the "Jacoby-Whitehouse effect") as measured with forced-choice testing. These experiments confirmed key predictions of a model adapted from D. E. Huber and R. C. O'Reilly's (2003) dynamic neural network of perception. In this model, short prime durations…

  13. Relevance of the hadronic interaction model in the interpretation of multiple muon data as detected with the MACRO experiment

    NASA Astrophysics Data System (ADS)

    Ambrosio, M.; Antolini, R.; Aramo, C.; Auriemma, G.; Baldini, A.; Barbarino, G. C.; Barish, B. C.; Battistoni, G.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bisi, V.; Bloise, C.; Bower, C.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Carboni, M.; Castellano, M.; Cecchini, S.; Cei, F.; Chiarella, V.; Coutu, S.; de Benedictis, L.; de Cataldo, G.; Dekhissi, H.; de Marzo, C.; de Mitri, I.; de Vincenzi, M.; di Credico, A.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Guarnaccia, P.; Gustavino, C.; Habig, A.; Hanson, K.; Hawthorne, A.; Heinz, R.; Iarocci, E.; Katsavounidis, E.; Kearns, E.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D. S.; Lipari, P.; Longley, N. P.; Longo, M. J.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Manzoor, S.; Margiotta Neri, A.; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M. N.; Mazzotta, C.; Michael, D. G.; Mikheyev, S.; Miller, L.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicoló, D.; Nolty, R.; Okada, C.; Orth, C.; Osteria, G.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C. W.; Petrera, S.; Pistilli, P.; Popa, V.; Rainó, A.; Rastelli, A.; Reynoldson, J.; Ronga, F.; Rubizzo, U.; Sanzgiri, A.; Satriano, C.; Satta, L.; Scapparone, E.; Scholberg, K.; Sciubba, A.; Serra-Lugaresi, P.; Severi, M.; Sioli, M.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J. L.; Sulak, L. R.; Surdo, A.; Tarlé, G.; Togo, V.; Walter, C. W.; Webb, R.

    1999-03-01

    With the aim of discussing the effect of the possible sources of systematic uncertainties in simulation models, the analysis of multiple muon events from the MACRO experiment at Gran Sasso is reviewed. In particular, the predictions from different currently available hadronic interaction models are compared.

  14. A Didactic Experiment and Model of a Flat-Plate Solar Collector

    ERIC Educational Resources Information Center

    Gallitto, Aurelio Agliolo; Fiordilino, Emilio

    2011-01-01

    We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…

  15. Modeling Japan-South Seas trade in forest products.

    Treesearch

    J.R. Vincent

    1987-01-01

    The international trade of forest products has generated increasing research interest, yet experience with modeling such trade is limited. Primary issues include the effects of trade barriers and exchange rates on trade patterns and national welfare. This paper attempts to add to experience by modeling hardwood log, lumber, and plywood trade in a region that has been...

  16. Complexity of Choice: Teachers' and Students' Experiences Implementing a Choice-Based Comprehensive School Health Model

    ERIC Educational Resources Information Center

    Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan

    2016-01-01

    Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…

  17. Hidden Dangers of Computer Modelling: Remarks on Sokolik and Smith's Connectionist Learning Model of French Gender.

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    1995-01-01

    Criticizes the computer modelling experiments conducted by Sokolik and Smith (1992), which involved the learning of French gender attribution using connectionist architecture. The article argues that the experiments greatly oversimplified the complexity of gender learning, in that they were designed in such a way that knowledge that must be…

  18. Innovative Field Experiences in Teacher Education: Student-Teachers and Mentors as Partners in Teaching

    ERIC Educational Resources Information Center

    Baeten, Marlies; Simons, Mathea

    2016-01-01

    This study investigates team teaching between student teachers and mentors during student teachers' field experiences. A systematic literature search was conducted, which resulted into a narrative review. Three team teaching models could be distinguished: (1) the co-planning and co-evaluation model, (2) the assistant teaching model, and (3) the…

  19. Early Phases of Business Model Innovation: An Ideation Experience Workshop in the Classroom

    ERIC Educational Resources Information Center

    Hoveskog, M.; Halila, F.; Danilovic, M.

    2015-01-01

    As the mantra "innovate your business model or die" increases in popularity among practitioners and academics, so does the need for novel and feasible business models. In this article, we describe an ideation experience workshop, conducted in an undergraduate business course, in which students, guided by their lecturers and two industry…

  20. Enhancing the Effectiveness of Carbon Dioxide Flooding by Managing Asphaltene Precipitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deo, Milind D.

    2002-02-21

    This project was undertaken to understand fundamental aspects of carbon dioxide (CO2) induced asphaltene precipitation. Oil and asphaltene samples from the Rangely field in Colorado were used for most of the project. The project consisted of pure component and high-pressure, thermodynamic experiments, thermodynamic modeling, kinetic experiments and modeling, targeted corefloods and compositional modeling.

  1. Planetary X ray experiment: Supporting research for outer planets mission: Experiment definition phase

    NASA Technical Reports Server (NTRS)

    Hurley, K.; Anderson, K. A.

    1972-01-01

    Models of Jupiter's magnetosphere were examined to predict the X-ray flux that would be emitted in auroral or radiation zone processes. Various types of X-ray detection were investigated for energy resolution, efficiency, reliability, and background. From the model fluxes it was determined under what models Jovian X-rays could be detected.

  2. Three atmospheric dispersion experiments involving oil fog plumes measured by lidar

    NASA Technical Reports Server (NTRS)

    Eberhard, W. L.; Mcnice, G. T.; Troxel, S. W.

    1986-01-01

    The Wave Propagation Lab. participated with the U.S. Environmental Protection Agency in a series of experiments with the goal of developing and validating dispersion models that perform substantially better that models currently available. The lidar systems deployed and the data processing procedures used in these experiments are briefly described. Highlights are presented of conclusions drawn thus far from the lidar data.

  3. Reasoning, Problem Solving, and Intelligence.

    DTIC Science & Technology

    1980-04-01

    designed to test the validity of their model of response choice in analogical reason- ing. In the first experiment, they set out to demonstrate that...second experiment were somewhat consistent with the prediction. The third experiment used a concept-formation design in which subjects were required to... designed to show interrelationships between various forms of inductive reasoning. Their model fits were highly comparable to those of Rumelhart and

  4. Working memory for braille is shaped by experience.

    PubMed

    Cohen, Henri; Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco

    2011-03-01

    Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch1 and Baddeley2 is presented where the connection strengths of a highly cross-modal network are altered through experience.

  5. Test of the Behavioral Perspective Model in the Context of an E-Mail Marketing Experiment

    ERIC Educational Resources Information Center

    Sigurdsson, Valdimar; Menon, R. G. Vishnu; Sigurdarson, Johannes Pall; Kristjansson, Jon Skafti; Foxall, Gordon R.

    2013-01-01

    An e-mail marketing experiment based on the behavioral perspective model was conducted to investigate consumer choice. Conversion e-mails were sent to two groups from the same marketing database of registered consumers interested in children's books. The experiment was based on A-B-A-C-A and A-C-A-B-A withdrawal designs and consisted of sending B…

  6. DEM Particle Fracture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Boning; Herbold, Eric B.; Homel, Michael A.

    2015-12-01

    An adaptive particle fracture model in poly-ellipsoidal Discrete Element Method is developed. The poly-ellipsoidal particle will break into several sub-poly-ellipsoids by Hoek-Brown fracture criterion based on continuum stress and the maximum tensile stress in contacts. Also Weibull theory is introduced to consider the statistics and size effects on particle strength. Finally, high strain-rate split Hopkinson pressure bar experiment of silica sand is simulated using this newly developed model. Comparisons with experiments show that our particle fracture model can capture the mechanical behavior of this experiment very well, both in stress-strain response and particle size redistribution. The effects of density andmore » packings o the samples are also studied in numerical examples.« less

  7. Regional climate simulations over South America: sensitivity to model physics and to the treatment of lateral boundary conditions using the MM5 model

    NASA Astrophysics Data System (ADS)

    Solman, Silvina A.; Pessacg, Natalia L.

    2012-01-01

    In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated particularly over the LPB region, mainly due to a misrepresentation in the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance compared with those in which no constrains were imposed to the regional circulation within the model domain. Overall, no single experiment was found to perform the best over the entire domain and during the two contrasting months. The experiment that outperforms depends on the area of interest, being the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance compared with any individual experiment.

  8. Clinical learning experiences of nursing students using an innovative clinical partnership model: A non-randomized controlled trial.

    PubMed

    Chan, Aileen W K; Tang, Fiona W K; Choi, Kai Chow; Liu, Ting; Taylor-Piliae, Ruth E

    2018-06-05

    Clinical practicum is a major learning component for pre-registration nursing students. Various clinical practicum models have been used to facilitate students' clinical learning experiences, employing both university-based and hospital-based clinical teachers. Considering the strengths and limitations of these clinical practicum models, along with nursing workforce shortages, we developed and tested an innovative clinical partnership model (CPM) in Hong Kong. To evaluate an innovative CPM among nursing students actual and preferred clinical learning environment, compared with a conventional facilitation model (CFM). A non-randomized controlled trial examining students' clinical experiences, comparing the CPM (supervised by hospital clinical teacher) with the CFM (supervised by university clinical teacher). One university in Hong Kong. Pre-registration nursing students (N = 331), including bachelor of nursing (n = 246 year three-BN) and masters-entry nursing (n = 85 year one-MNSP). Students were assigned to either the CPM (n = 48 BN plus n = 85 MNSP students) or the CFM (n = 198 BN students) for their clinical practice experiences in an acute medical-surgical ward. Clinical teachers supervised between 6 and 8 students at a time, during these clinical practicums (duration = 4-6 weeks). At the end of the clinical practicum, students were invited to complete the Clinical Learning Environment Inventory (CLEI). Analysis of covariance was used to compare groups; adjusted for age, gender and prior work experience. A total of 259 students (mean age = 22 years, 76% female, 81% prior work experience) completed the CLEI (78% response rate). Students had higher scores on preferred versus actual experiences, in all domains of the CLEI. CPM student experiences indicated a higher preferred task orientation (p = 0.004), while CFM student experiences indicated a higher actual (p < 0.001) and preferred individualization (p = 0.005). 
No significant differences were noted in the other domains. The CPM draws on the strengths of existing clinical learning models and provides complementary methods to facilitate clinical learning for pre-registration nursing students. Additional studies examining this CPM with longer duration of clinical practicum are recommended. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Integrated multiscale biomaterials experiment and modelling: a perspective

    PubMed Central

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  10. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    DOE PAGES

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues formore » investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.« less

  11. Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity

    NASA Technical Reports Server (NTRS)

    Kartuzova, O.; Kassemi, M.

    2016-01-01

    A CFD model for simulating the fluid dynamics of the jet induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity1. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed at a range of jet Weber numbers from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from the video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.

  12. Elucidating the role of recovery experiences in the job demands-resources model.

    PubMed

    Moreno-Jiménez, Bernardo; Rodríguez-Muñoz, Alfredo; Sanz-Vergel, Ana Isabel; Garrosa, Eva

    2012-07-01

    Based on the Job Demands-Resources (JD-R) model, the current study examined the moderating role of recovery experiences (i.e., psychological detachment from work, relaxation, mastery experiences, and control over leisure time) on the relationship between one job demand (i.e., role conflict) and work- and health-related outcomes. Results from our sample of 990 employees from Spain showed that psychological detachment from work and relaxation buffered the negative impact of role conflict on some of the proposed outcomes. Contrary to our expectations, we did not find significant results for mastery and control regarding moderating effects. Overall, findings suggest a differential pattern of the recovery experiences in the health impairment process proposed by the JD-R model.

  13. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  14. Amazon rainforest responses to elevated CO2: Deriving model-based hypotheses for the AmazonFACE experiment

    NASA Astrophysics Data System (ADS)

    Rammig, A.; Fleischer, K.; Lapola, D.; Holm, J.; Hoosbeek, M.

    2017-12-01

    Increasing atmospheric CO2 concentration is assumed to have a stimulating effect ("CO2 fertilization effect") on forest growth and resilience. Empirical evidence, however, for the existence and strength of such a tropical CO2 fertilization effect is scarce and thus a major impediment for constraining the uncertainties in Earth System Model projections. The implications of the tropical CO2 effect are far-reaching, as it strongly influences the global carbon and water cycle, and hence future global climate. In the scope of the Amazon Free Air CO2 Enrichment (FACE) experiment, we addressed these uncertainties by assessing the CO2 fertilization effect at ecosystem scale. AmazonFACE is the first FACE experiment in an old-growth, highly diverse tropical rainforest. Here, we present a priori model-based hypotheses for the experiment derived from a set of 12 ecosystem models. Model simulations identified key uncertainties in our understanding of limiting processes and derived model-based hypotheses of expected ecosystem responses to elevated CO2 that can directly be tested during the experiment. Ambient model simulations compared satisfactorily with in-situ measurements of ecosystem carbon fluxes, as well as carbon, nitrogen, and phosphorus stocks. Models consistently predicted an increase in photosynthesis with elevated CO2, which declined over time due to developing limitations. The conversion of enhanced photosynthesis into biomass, and hence ecosystem carbon sequestration, varied strongly among the models due to different assumptions on nutrient limitation. Models with flexible allocation schemes consistently predicted an increased investment in belowground structures to alleviate nutrient limitation, in turn accelerating turnover rates of soil organic matter. The models diverged on the prediction for carbon accumulation after 10 years of elevated CO2, mainly due to contrasting assumptions in their phosphorus cycle representation. 
These differences define the expected response ratio to elevated CO2 at the AmazonFACE site and identify priorities for experimental work and model development.

  15. Laboratory Photoionization Fronts in Nitrogen Gas: A Numerical Feasibility and Parameter Study

    NASA Astrophysics Data System (ADS)

    Gray, William J.; Keiter, P. A.; Lefevre, H.; Patterson, C. R.; Davis, J. S.; van Der Holst, B.; Powell, K. G.; Drake, R. P.

    2018-05-01

    Photoionization fronts play a dominant role in many astrophysical situations but remain difficult to achieve in a laboratory experiment. We present the results from a computational parameter study evaluating the feasibility of the photoionization experiment presented in the design paper by Drake et al. in which a photoionization front is generated in a nitrogen medium. The nitrogen gas density and the Planckian radiation temperature of the X-ray source define each simulation. Simulations modeled experiments in which the X-ray flux is generated by a laser-heated gold foil, suitable for experiments using many kJ of laser energy, and experiments in which the flux is generated by a “z-pinch” device, which implodes a cylindrical shell of conducting wires. The models are run using CRASH, our block-adaptive-mesh code for multimaterial radiation hydrodynamics. The radiative transfer model uses multigroup, flux-limited diffusion with 30 radiation groups. In addition, electron heat conduction is modeled using a single-group, flux-limited diffusion. In the theory, a photoionization front can exist only when the ratios of the electron recombination rate to the photoionization rate and the electron-impact ionization rate to the recombination rate lie in certain ranges. These ratios are computed for several ionization states of nitrogen. Photoionization fronts are found to exist for laser-driven models with moderate nitrogen densities (∼1021 cm‑3) and radiation temperatures above 90 eV. For “z-pinch”-driven models, lower nitrogen densities are preferred (<1021 cm‑3). We conclude that the proposed experiments are likely to generate photoionization fronts.

  16. Predicting mutant selection in competition experiments with ciprofloxacin-exposed Escherichia coli.

    PubMed

    Khan, David D; Lagerbäck, Pernilla; Malmberg, Christer; Kristoffersson, Anders N; Wistrand-Yuen, Erik; Sha, Cao; Cars, Otto; Andersson, Dan I; Hughes, Diarmaid; Nielsen, Elisabet I; Friberg, Lena E

    2018-03-01

    Predicting competition between antibiotic-susceptible wild-type (WT) and less susceptible mutant (MT) bacteria is valuable for understanding how drug concentrations influence the emergence of resistance. Pharmacokinetic/pharmacodynamic (PK/PD) models predicting the rate and extent of takeover of resistant bacteria during different antibiotic pressures can thus be a valuable tool in improving treatment regimens. The aim of this study was to evaluate a previously developed mechanism-based PK/PD model for its ability to predict in vitro mixed-population experiments with competition between Escherichia coli (E. coli) WT and three well-defined E. coli resistant MTs when exposed to ciprofloxacin. Model predictions for each bacterial strain and ciprofloxacin concentration were made for in vitro static and dynamic time-kill experiments measuring CFU (colony forming units)/mL up to 24 h with concentrations close to or below the minimum inhibitory concentration (MIC), as well as for serial passage experiments with concentrations well below the MIC measuring ratios between the two strains with flow cytometry. The model was found to reasonably well predict the initial bacterial growth and killing of most static and dynamic time-kill competition experiments without need for parameter re-estimation. With parameter re-estimation of growth rates, an adequate fit was also obtained for the 6-day serial passage competition experiments. No bacterial interaction in growth was observed. This study demonstrates the predictive capacity of a PK/PD model and further supports the application of PK/PD modelling for prediction of bacterial kill in different settings, including resistance selection. Copyright © 2017 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.

  17. Emulation of the MBM-MEDUSA model: exploring the sea level and the basin-to-shelf transfer influence on the system dynamics

    NASA Astrophysics Data System (ADS)

    Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy

    2013-04-01

    Complex climate models impose a high computational burden, which can be avoided by using emulators. In this work we present several approaches for dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA), which simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed on the MBM-MEDUSA that explore the Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea-level reconstruction. Such experiments are interesting because the BST is an important cause of CO2 variation and the dynamics is potentially nonlinear. The output of interest is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment holds the BST constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and to compare the results obtained using linear and nonlinear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation we compare the results of identification obtained with a simple linear model and with more complex nonlinear models. For the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear. In this case we apply nonlinear models such as NARX, the Hammerstein model, and an ad hoc switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.
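The linear-regression step of such an emulation can be sketched in a few lines: fit a linear map from the forcing (and a lag of it) to the simulator output by least squares, then score the fit as a percentage of output variance reproduced. The "simulator" and forcing below are synthetic stand-ins, not MBM-MEDUSA.

```python
import numpy as np

# Sketch of linear-regression emulation: fit a linear map from an input
# record (a synthetic "sea level" forcing and its lag) to a toy simulator
# output, then score the fit in percent as the abstract does.

rng = np.random.default_rng(0)
t = np.arange(500)
sea_level = np.sin(2 * np.pi * t / 100) + 0.1 * rng.standard_normal(500)

# A toy "simulator": output responds linearly to forcing and its lag.
output = 1.5 * sea_level + 0.8 * np.roll(sea_level, 1)

# Regressor matrix: forcing, lagged forcing, intercept.
X = np.column_stack([sea_level, np.roll(sea_level, 1), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
pred = X @ coef

# Normalised-RMSE fit in percent (100% = perfect emulation).
fit = 100 * (1 - np.linalg.norm(output - pred)
             / np.linalg.norm(output - output.mean()))
print(fit > 99.9)  # the toy system is linear, so the fit is essentially perfect
```

For a genuinely nonlinear response, as in the second experiment, this regression step would be replaced by a nonlinear structure such as a NARX model.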

  18. Recent progress in econophysics: Chaos, leverage, and business cycles as revealed by agent-based modeling and human experiments

    NASA Astrophysics Data System (ADS)

    Xin, Chen; Huang, Ji-Ping

    2017-12-01

    Agent-based modeling and controlled human experiments serve as two fundamental research methods in the field of econophysics. Agent-based modeling has been in development for over 20 years, but how to design virtual agents with high levels of human-like "intelligence" remains a challenge. On the other hand, experimental econophysics is an emerging field; however, there is a lack of experience and paradigms related to the field. Here, we review some of the most recent research results obtained through the use of these two methods concerning financial problems such as chaos, leverage, and business cycles. We also review the principles behind assessments of agents' intelligence levels, and some relevant designs for human experiments. The main theme of this review is to show that by combining theory, agent-based modeling, and controlled human experiments, one can obtain more reliable and credible results thanks to better verification of theory; in this way, a wider range of economic and financial problems and phenomena can be studied.

  19. Contrasting Predictions of the Extended Comparator Hypothesis and Acquisition-Focused Models of Learning Concerning Retrospective Revaluation

    PubMed Central

    McConnell, Bridget L.; Urushihara, Kouji; Miller, Ralph R.

    2009-01-01

    Three conditioned suppression experiments with rats investigated contrasting predictions made by the extended comparator hypothesis and acquisition-focused models of learning, specifically, modified SOP and the revised Rescorla-Wagner model, concerning retrospective revaluation. Two target cues (X and Y) were partially reinforced using a stimulus relative validity design (i.e., AX-Outcome/ BX-No outcome/ CY-Outcome/ DY-No outcome), and subsequently one of the companion cues for each target was extinguished in compound (BC-No outcome). In Experiment 1, which used spaced trials for relative validity training, greater suppression was observed to target cue Y for which the excitatory companion cue had been extinguished relative to target cue X for which the nonexcitatory companion cue had been extinguished. Experiment 2 replicated these results in a sensory preconditioning preparation. Experiment 3 massed the trials during relative validity training, and the opposite pattern of data was observed. The results are consistent with the predictions of the extended comparator hypothesis. Furthermore, this set of experiments is unique in being able to differentiate between these models without invoking higher-order comparator processes. PMID:20141324

  20. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Because their reaction zones are relatively thick, characterization currently requires large-scale testing to obtain steady detonation-wave data for analysis. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models against available large-scale experiments. The experiment is repeatable, shows dependence on reactive properties, and can be performed with little required material.
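The basic data reduction such a position measurement enables can be sketched simply: differentiate the measured front position with respect to time to obtain the instantaneous wave velocity, whose decay along the tube signals a failing detonation. The decelerating-front trace below is synthetic, not experimental data.

```python
import numpy as np

# Sketch: given the wave-front position x(t) from an interferometer trace,
# the local detonation velocity is the numerical derivative dx/dt. A failing
# detonation shows this velocity decaying along the tube.

t = np.linspace(0.0, 50e-6, 501)           # time (s)
D0, tau = 4000.0, 20e-6                    # initial velocity (m/s), decay scale
x = D0 * tau * (1 - np.exp(-t / tau))      # synthetic decelerating front

velocity = np.gradient(x, t)               # finite-difference dx/dt
print(velocity[0] > velocity[-1])          # True: the wave is decelerating
```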

  1. The Place of Identity Dissonance and Emotional Motivations in Bio-Cultural Models of Religious Experience: A Report from the 19th Century.

    PubMed

    Powell, Adam

    2017-01-01

    Durham University's 'Hearing the Voice' project involves a multi-disciplinary exploration of hallucinatory-type phenomena in an attempt to re-evaluate and reframe discussions of these experiences. As part of this project, contemporaneous religious experiences (supernatural voices and visions) in the United States from the first half of the nineteenth century have been analysed, shedding light on the value and applicability of contemporary bio-cultural models of religious experience for such historical cases. In particular, this essay outlines four historical cases, seeking to utilise and to refine four theoretical models, including anthropologist Tanya Luhrmann's 'absorption hypothesis', by returning to something like William James' concern with 'discordant personalities'. Ultimately, the paper argues that emphasis on the role of identity dissonance must not be omitted from the analytical tools applied to these nineteenth-century examples, and perhaps should be retained for any study of religious experience generally.

  2. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these errors are ignored by students, and little thought is given to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of error in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  3. Assessing the empirical validity of the "take-the-best" heuristic as a model of human probabilistic inference.

    PubMed

    Bröder, A

    2000-09-01

    The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodological problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
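The lexicographic rule itself, as described by Gigerenzer and colleagues, is short enough to state in code: inspect cues in order of validity and decide on the first cue that discriminates between the two options, guessing if none does. The cue values and validities below are illustrative, not taken from the paper.

```python
# Sketch of the Take-The-Best (TTB) lexicographic rule for a paired
# comparison: search cues from most to least valid, decide on the first
# discriminating cue, and guess when no cue discriminates.

def take_the_best(cues_a, cues_b, validities):
    """Return 'A', 'B', or 'guess' for a paired-comparison inference."""
    # Search cues in descending order of validity.
    for i in sorted(range(len(validities)), key=lambda i: -validities[i]):
        if cues_a[i] != cues_b[i]:                       # first discriminating cue...
            return 'A' if cues_a[i] > cues_b[i] else 'B'  # ...decides immediately
    return 'guess'                                        # no cue discriminates

# Which city is larger? Cues: has-airport, is-capital, has-university (1/0).
validities = [0.78, 0.91, 0.71]            # is-capital is the most valid cue
print(take_the_best([1, 0, 1], [1, 1, 0], validities))  # → 'B'
```

The rule is noncompensatory: once the most valid discriminating cue is found, no combination of less valid cues can reverse the decision.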

  4. Building adaptive connectionist-based controllers: review of experiments in human-robot interaction, collective robotics, and computational neuroscience

    NASA Astrophysics Data System (ADS)

    Billard, Aude

    2000-10-01

    This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture, which has the general property of learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learns the topography of an environment whose features change frequently; 4) an abstract, computational model of primate ability to learn by imitation; 5) a model for the control of locomotor gaits in a quadruped legged robot.

  5. Inverse estimation of parameters for an estuarine eutrophication model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
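The variational idea behind such an inverse model can be illustrated with a one-parameter toy: minimize a squared-misfit cost between modelled and observed concentrations by gradient descent, using the analytic gradient of the cost. The first-order decay model, "observations", learning rate, and true parameter below are all illustrative; the actual inverse model assimilates eight state variables in a two-dimensional eutrophication model.

```python
import math

# Toy variational parameter estimation: choose a decay rate k so that
# modelled concentrations match observed data by minimising the cost
# J(k) = 0.5 * sum((sim - obs)^2) with its analytic gradient.

def model(k, c0=10.0, times=range(10)):
    return [c0 * math.exp(-k * t) for t in times]

obs = model(0.3)                    # synthetic observations (true k = 0.3)

k = 0.05                            # first-guess parameter
for _ in range(200):
    sim = model(k)
    # dJ/dk: each simulated value c0*exp(-k*t) has derivative -t*sim_t
    grad = sum((s - o) * (-t * s) for t, (s, o) in enumerate(zip(sim, obs)))
    k -= 0.001 * grad               # steepest-descent update
print(round(k, 2))  # converges toward the true decay rate 0.3
```

The cross-parameter effects and multiple scales the abstract mentions are exactly what make the multi-parameter version of this descent slow to converge.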

  6. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy. PMID:17263870

  7. Perturbative tests of theoretical transport models using cold pulse and modulated electron cyclotron heating experiments

    NASA Astrophysics Data System (ADS)

    Kinsey, J. E.; Waltz, R. E.; DeBoo, J. C.

    1999-05-01

    It is difficult to discriminate between various tokamak transport models using standardized statistical measures to assess the goodness of fit with steady-state density and temperature profiles in tokamaks. This motivates consideration of transient transport experiments as a technique for testing the temporal response predicted by models. Results are presented comparing the predictions from the Institute for Fusion Studies—Princeton Plasma Physics Laboratory (IFS/PPPL), gyro-Landau-fluid (GLF23), Multi-mode (MM), Current Diffusive Ballooning Mode (CDBM), and Mixed-shear (MS) transport models against data from ohmic cold pulse and modulated electron cyclotron heating (ECH) experiments. In ohmically heated discharges with rapid edge cooling due to trace impurity injection, it is found that critical gradient models containing a strong temperature ratio (Ti/Te) dependence can exhibit behavior that is qualitatively consistent both spatially and temporally with experimental observation while depending solely on local parameters. On the DIII-D tokamak [J. L. Luxon and L. G. Davis, Fusion Technol. 8, 441 (1985)], off-axis modulated ECH experiments have been conducted in L-mode (low confinement mode) and the perturbed electron and ion temperature response to multiple heat pulses has been measured across the plasma core. Comparing the predicted Fourier phase of the temperature perturbations, it is found that no single model yielded agreement with both electron and ion phases for all cases. In general, it was found that the IFS/PPPL, GLF23, and MS models agreed well with the ion response, but not with the electron response. The CDBM and MM models agreed well with the electron response, but not with the ion response. For both types of transient experiments, temperature coupling between the electron and ion transport is found to be an essential feature needed in the models for reproducing the observed perturbative response.

  8. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-30

    Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy.

  9. Uncovering stability mechanisms in microbial ecosystems - combining microcosm experiments, computational modelling and ecological theory in a multidisciplinary approach

    NASA Astrophysics Data System (ADS)

    Worrich, Anja; König, Sara; Banitz, Thomas; Centler, Florian; Frank, Karin; Kästner, Matthias; Miltner, Anja; Thullner, Martin; Wick, Lukas

    2015-04-01

    Although bacterial degraders in soil are commonly exposed to fluctuating environmental conditions, the functional performance of the biodegradation processes can often be maintained by resistance and resilience mechanisms. However, there is still a gap in the mechanistic understanding of key factors contributing to the stability of such an ecosystem service. Therefore we developed an integrated approach combining microcosm experiments, simulation models and ecological theory to directly make use of the strengths of these disciplines. In a continuous interplay process, data, hypotheses, and central questions are exchanged between disciplines to initiate new experiments and models to ultimately identify buffer mechanisms and factors providing functional stability. We focus on drying and rewetting-cycles in soil ecosystems, which are a major abiotic driver for bacterial activity. Functional recovery of the system was found to depend on different spatial processes in the computational model. In particular, bacterial motility is a prerequisite for biodegradation if either bacteria or substrate are heterogeneously distributed. Hence, laboratory experiments focussing on bacterial dispersal processes were conducted and confirmed this finding also for functional resistance. Obtained results will be incorporated into the model in the next step. Overall, the combination of computational modelling and laboratory experiments identified spatial processes as the main driving force for functional stability in the considered system, and has proved a powerful methodological approach.

  10. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    NASA Astrophysics Data System (ADS)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for nonlinear atmospheric waves, visco-elastic waves, and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probing seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way for planetary missions to perform tomography when surface conditions are challenging, with high pressure and temperature (e.g. Venus), making it impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is treated as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insights into the planet's interior structure.

  11. A Contextual Work-Life Experiences Model to Understand Nurse Commitment and Turnover.

    PubMed

    Aluwihare-Samaranayake, Dilmi; Gellatly, Ian; Cummings, Greta; Ogilvie, Linda

    2018-05-17

    To present a discussion and model of the contextual work-life factors that most strongly influence commitment and turnover intentions among nurses in Sri Lanka. Increasing demand for nurses has made the retention of experienced, qualified nursing staff a priority for health care organizations and highlights the need to capture the contextual work-life experiences that influence nurses' turnover decisions. Discussion paper. This discussion paper and model are based on our experiences and knowledge of Sri Lanka and represent an integration of classic turnover research, commitment theory, and other work published between 1958 and 2017, contextualized to reflect the reality faced by Sri Lankan nurses. The model presents a high-level view of intrinsic, extrinsic, personal and professional antecedents to nurse turnover, which researchers, policy makers, clinicians and educators can use to establish focused, limited-scope models and to examine comprehensive contexts. This model emphasizes the role that work-life experiences play in fortifying (or weakening) nurses' motivation to remain committed to their organization, profession, family, and country. Understanding of contextual work-life influences on nurses' intent to stay should lead to evidence-based strategies that result in a higher number of nurses wanting to remain in the nursing profession and work in the health sector in Sri Lanka. This article is protected by copyright. All rights reserved.

  12. Individual differences in transcranial electrical stimulation current density

    PubMed Central

    Russell, Michael J; Goodman, Theodore; Pierson, Ronald; Shepherd, Shane; Wang, Qiang; Groshong, Bennett; Wiley, David F

    2013-01-01

    Transcranial electrical stimulation (TCES) is effective in treating many conditions, but it has not been possible to accurately forecast current density within the complex anatomy of a given subject's head. We sought to predict and verify TCES current densities and determine the variability of these current distributions in patient-specific models based on magnetic resonance imaging (MRI) data. Two experiments were performed. The first experiment estimated conductivity from MRIs and compared the current density results against actual measurements from the scalp surface of 3 subjects. In the second experiment, virtual electrodes were placed on the scalps of 18 subjects to model simulated current densities with 2 mA of virtually applied stimulation. This procedure was repeated for 4 electrode locations. Current densities were then calculated for 75 brain regions. Comparison of modeled and measured external current in experiment 1 yielded a correlation of r = .93. In experiment 2, modeled individual differences were greatest near the electrodes (ten-fold differences were common), but simulated current was found in all regions of the brain. Sites that were distant from the electrodes (e.g. hypothalamus) typically showed two-fold individual differences. MRI-based modeling can effectively predict current densities in individual brains. Significant variation occurs between subjects with the same applied electrode configuration. Individualized MRI-based modeling should be considered in place of the 10-20 system when accurate TCES is needed. PMID:24285948

  13. A Community Mentoring Model for STEM Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  14. The Carbon-Land Model Intercomparison Project (C-LAMP): A Model-Data Comparison System for Evaluation of Coupled Biosphere-Atmosphere Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M; Randerson, Jim; Thornton, Peter E

    2009-01-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in new efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, now often referred to as Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results, suggesting that a more rigorous set of offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are warranted. The Carbon-Land Model Intercomparison Project (C-LAMP) provides a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). C-LAMP provides feedback to the modeling community regarding model improvements and to the measurement community by suggesting new observational campaigns. C-LAMP Experiment 1 consists of a set of uncoupled simulations of terrestrial carbon models specifically designed to examine the ability of the models to reproduce surface carbon and energy fluxes at multiple sites and to exhibit the influence of climate variability, prescribed atmospheric carbon dioxide (CO2), nitrogen (N) deposition, and land cover change on projections of terrestrial carbon fluxes during the 20th century. Experiment 2 consists of partially coupled simulations of the terrestrial carbon model with an active atmosphere model exchanging energy and moisture fluxes. In all experiments, atmospheric CO2 follows the prescribed historical trajectory from C4MIP. In Experiment 2, the atmosphere model is forced with prescribed sea surface temperatures (SSTs) and corresponding sea ice concentrations from the Hadley Centre; prescribed CO2 is radiatively active; and land, fossil fuel, and ocean CO2 fluxes are advected by the model. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the Community Land Model version 3 (CLM3) in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung et al. and the carbon-nitrogen (CN) model of Thornton. Comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA flask records, TRANSCOM inversions, Free Air CO2 Enrichment (FACE) site measurements, and other datasets have been performed and are described in Randerson et al. (2009). The C-LAMP diagnostics package was used to validate improvements to CASA and CN for use in the next generation model, CLM4. It is hoped that this effort will serve as a prototype for an international carbon-cycle model benchmarking activity for models being used for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report. More information about C-LAMP, the experimental protocol, performance metrics, output standards, and model-data comparisons from the CLM3-CASA and CLM3-CN models is available at http://www.climatemodeling.org/c-lamp.

  15. Validation and Application of Pharmacokinetic Models for Interspecies Extrapolations in Toxicity Risk Assessments of Volatile Organics

    DTIC Science & Technology

    1989-07-21

    formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses...experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons...of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog. The

  16. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occur throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model, based on decision field theory, that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.
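As a rough illustration of the class of model the authors describe, here is a minimal leaky-accumulator sketch in the spirit of decision field theory: a preference state accumulates noisy moment-to-moment valence comparisons, with decay, until it crosses a decision threshold. The single-dimensional state and all parameter values are simplifications for illustration, not the full theory.

```python
import random

# Sketch of a decision-field-theory-style deliberation: leaky accumulation
# of noisy valence samples until a preference threshold is crossed. Both the
# choice and the number of steps (deliberation time) fall out of the model.

def dft_choice(mean_valence, threshold=5.0, decay=0.05, noise=1.0, seed=1):
    """Simulate one deliberation; return (choice, deliberation_steps)."""
    rng = random.Random(seed)
    p, steps = 0.0, 0
    while abs(p) < threshold:
        valence = mean_valence + rng.gauss(0.0, noise)  # momentary comparison
        p = (1.0 - decay) * p + valence                 # leaky accumulation
        steps += 1
    return ('A' if p > 0 else 'B'), steps

choice, steps = dft_choice(mean_valence=0.3)
print(choice, steps)
```

Fitting such a model to game-play data would mean estimating the drift, decay, and threshold parameters from observed choices and response times.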

  17. Cosmic-Ray Background Flux Model Based on a Gamma-Ray Large Area Space Telescope Balloon Flight Engineering

    NASA Technical Reports Server (NTRS)

    2002-01-01

Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV), and the abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺, and gamma). It is expressed in analytic functions in which modulations due to solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysics missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
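As a loose illustration of the kind of analytic, parameterized flux function the record describes (this is a toy sketch, not the GLAST model itself), a primary spectrum is often written as a power law smoothly suppressed below the geomagnetic cutoff; every constant below is an illustrative placeholder:

```python
def primary_proton_flux(E_GeV, cutoff_GeV=4.5, A=1.8, gamma=2.7, k=6.0):
    """Toy differential flux: power law E^-gamma times a smooth
    suppression factor 1/(1 + (E_cut/E)^k) below the geomagnetic cutoff.
    All parameter values are illustrative placeholders, not GLAST fits."""
    suppression = 1.0 / (1.0 + (cutoff_GeV / E_GeV) ** k)
    return A * E_GeV ** (-gamma) * suppression

# Flux is strongly suppressed below the cutoff, power-law falling above it.
for E in (1.0, 4.5, 20.0, 100.0):
    print(E, primary_proton_flux(E))
```

Solar modulation would enter as a further parameter of `A` and `gamma` in such a parameterization.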

  18. Learning general phonological rules from distributional information: a computational model.

    PubMed

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.
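The distributional approach of Peperkamp et al., which this record extends, scores pairs of segments by how different their context distributions are: near-complementary contexts suggest an allophonic or rule-governed alternation. A minimal sketch of that idea (the toy corpus and the symmetrised KL measure here are my own illustrative choices, not the paper's exact criterion):

```python
import math
from collections import Counter

def right_context_dist(corpus, seg):
    """Empirical distribution of the segment immediately following `seg`."""
    counts = Counter(corpus[i + 1] for i in range(len(corpus) - 1) if corpus[i] == seg)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def symmetric_kl(p, q, eps=1e-9):
    """Symmetrised KL divergence; high values mean near-complementary contexts."""
    keys = set(p) | set(q)
    def kl(a, b):
        return sum(a.get(k, eps) * math.log(a.get(k, eps) / b.get(k, eps)) for k in keys)
    return kl(p, q) + kl(q, p)

# Toy corpus: 'm' occurs only before labials, 'n' only before coronals,
# so the pair scores high (a candidate rule-governed alternation).
corpus = list("antaandaampaamba")
d_n = right_context_dist(corpus, "n")
d_m = right_context_dist(corpus, "m")
print(symmetric_kl(d_n, d_m))
```

Generalizing such pairwise detections to classes of segments and shared contexts is, per the abstract, exactly where the extended model goes beyond the original.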

  19. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
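The two best-fitting forms named above have standard parameterizations, sketched below with illustrative parameter values (not the paper's estimates); both show the characteristic overweighting of small probabilities and underweighting of large ones:

```python
import math

def prelec2(p, gamma=0.6, delta=1.0):
    """Prelec-2 weighting: w(p) = exp(-delta * (-ln p)^gamma)."""
    return math.exp(-delta * (-math.log(p)) ** gamma)

def lin_log_odds(p, gamma=0.6, delta=0.8):
    """Linear-in-log-odds: w(p) = delta*p^g / (delta*p^g + (1-p)^g)."""
    num = delta * p ** gamma
    return num / (num + (1 - p) ** gamma)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p={p:<5} Prelec-2={prelec2(p):.3f}  LinLogOdds={lin_log_odds(p):.3f}")
```

The qualitative similarity of the two curves is precisely why adaptive design optimization is needed to find the choice stimuli that discriminate between them.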

  20. Similarity Theory of Withdrawn Water Temperature Experiment

    PubMed Central

    2015-01-01

Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype were suggested, and the conversion of withdrawn water temperature between the model and the prototype was proposed. Meanwhile, the fundamental theory of temperature distribution conversion was first proposed, which can significantly improve experimental efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment on withdrawn water temperature was performed and verified by a numerical method. PMID:26065020
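For context on what such similarity conditions fix: under classical Froude similarity (a generic textbook result, assumed here for illustration; the paper derives modified conditions for stratified flow), scale factors for derived quantities follow from the length ratio alone:

```python
import math

def froude_scale_factor(length_ratio, quantity):
    """Model-to-prototype scale factors under plain Froude similarity,
    given Lr = L_model / L_prototype. A standard hydraulics result,
    not the modified similarity conditions of the cited study."""
    factors = {
        "length": length_ratio,
        "velocity": math.sqrt(length_ratio),   # equal Fr = U / sqrt(g L)
        "time": math.sqrt(length_ratio),
        "discharge": length_ratio ** 2.5,
    }
    return factors[quantity]

# A 1:100 model: velocities scale by 1/10, discharges by 1/100000.
print(froude_scale_factor(0.01, "velocity"), froude_scale_factor(0.01, "discharge"))
```

For stratified withdrawal the relevant dimensionless group is a densimetric Froude number, which is where the Boussinesq approximation enters the paper's derivation.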

  1. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. The two types of experience were not associated with each other, suggesting independent forms of social interaction. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  2. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    PubMed Central

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
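A one-boundary (single-absorbing-barrier) diffusion process can be simulated directly with Euler-Maruyama steps; the parameter values below (drift, boundary, non-decision time) are illustrative placeholders, not fits from the paper:

```python
import random

def simulate_rt(drift, boundary, ter=0.3, dt=0.001, sigma=1.0, max_t=10.0):
    """Walk evidence x upward until it crosses `boundary`; the response time
    is the non-decision time `ter` plus the first-passage time (seconds)."""
    x, t = 0.0, 0.0
    sd = sigma * dt ** 0.5
    while x < boundary and t < max_t:
        x += drift * dt + random.gauss(0.0, sd)
        t += dt
    return ter + t

random.seed(1)
rts = sorted(simulate_rt(drift=1.5, boundary=1.0) for _ in range(500))
print("median RT:", rts[250])
```

Lowering the drift rate or raising the boundary lengthens simulated RTs, which mirrors the paper's account of cell-phone distraction as reduced evidence accumulation and/or increased boundary settings.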

  3. On a basic model of circulatory, fluid, and electrolyte regulation in the human system based upon the model of Guyton

    NASA Technical Reports Server (NTRS)

    White, R. J.

    1973-01-01

A detailed description of Guyton's model and its modifications is provided, along with descriptions of several typical experiments the model can simulate, illustrating its general utility. Also discussed are the problems associated with interfacing the model to other models, such as respiratory and thermal regulation models; this is of prime importance since those stimuli are not present in the current model. A user's guide for the operation of the model on the Xerox Sigma 3 computer is provided and two programs are described. A verification plan and procedure for performing experiments is also presented.

  4. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.

    2006-01-01

The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.

  5. Three-dimensional modeling of flow through fractured tuff at Fran Ridge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Eaton, R.R.; Ho, C.K.; Glass, R.J.

    1996-09-01

Numerical studies have been made of an infiltration experiment at Fran Ridge using the TOUGH2 code to aid in the selection of computational models for performance assessment. The exercise investigates the capabilities of TOUGH2 to model transient flows through highly fractured tuff and provides a possible means of calibration. Two distinctly different conceptual models were used in the TOUGH2 code: the dual permeability model and the equivalent continuum model. The infiltration test modeled involved the infiltration of dyed ponded water for 36 minutes. The 205-gallon infiltration of water observed in the experiment was subsequently modeled using measured Fran Ridge fracture frequencies and a specified fracture aperture of 285 μm. The dual permeability formulation predicted considerable infiltration along the fracture network, which was in agreement with the experimental observations. As expected, minimal fracture penetration of the infiltrating water was calculated using the equivalent continuum model, thus demonstrating that this model is not appropriate for modeling the highly transient experiment. It is therefore recommended that the dual permeability model be given priority when computing high-flux infiltration for use in performance assessment studies.
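For orientation, a single fracture aperture maps to an intrinsic fracture permeability via the standard parallel-plate ("cubic law") relation k = b²/12; this is a generic estimate for a smooth-walled fracture, not a statement of what TOUGH2 computed in the study:

```python
def cubic_law_permeability(aperture_m):
    """Parallel-plate ('cubic law') intrinsic permeability k = b^2 / 12,
    a generic smooth-fracture estimate (units: m^2)."""
    return aperture_m ** 2 / 12.0

b = 285e-6  # the 285-micrometre aperture specified in the study
k = cubic_law_permeability(b)
print(f"fracture permeability ~ {k:.2e} m^2")
```

Permeabilities of this magnitude explain why the fracture network dominated the transient infiltration, and why averaging fractures into an equivalent continuum smeared that response away.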

  6. Three-dimensional modeling of flow through fractured tuff at Fran Ridge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Ho, C.K.; Glass, R.J.

    1996-01-01

Numerical studies have been made of an infiltration experiment at Fran Ridge using the TOUGH2 code to aid in the selection of computational models for performance assessment. The exercise investigates the capabilities of TOUGH2 to model transient flows through highly fractured tuff and provides a possible means of calibration. Two distinctly different conceptual models were used in the TOUGH2 code: the dual permeability model and the equivalent continuum model. The infiltration test modeled involved the infiltration of dyed ponded water for 36 minutes. The 205-gallon infiltration of water observed in the experiment was subsequently modeled using measured Fran Ridge fracture frequencies and a specified fracture aperture of 285 μm. The dual permeability formulation predicted considerable infiltration along the fracture network, which was in agreement with the experimental observations. As expected, minimal fracture penetration of the infiltrating water was calculated using the equivalent continuum model, thus demonstrating that this model is not appropriate for modeling the highly transient experiment. It is therefore recommended that the dual permeability model be given priority when computing high-flux infiltration for use in performance assessment studies.

  7. Déjà vu experiences are rarely associated with pathological dissociation.

    PubMed

Adachi, Naoto; Akanuma, Nozomi; Adachi, Takuya; Takekawa, Yoshikazu; Adachi, Yasushi; Ito, Masumi; Ikeda, Hiroshi

    2008-05-01

We investigated the relation between déjà vu and dissociative experiences in nonclinical subjects. In 227 adult volunteers, déjà vu and dissociative experiences were evaluated by means of the Inventory of Déjà vu Experiences Assessment and the Dissociative Experiences Scale (DES). Déjà vu experiences occurred in 162 (71.4%) individuals. In univariate correlation analysis, the frequency of déjà vu experiences, as well as 5 other Inventory of Déjà vu Experiences Assessment symptoms and age at the time of evaluation, correlated significantly with the DES score. After exclusion of intercorrelative effects using multiple regression analysis, déjà vu experiences did not remain in the model. The DES score was best correlated with a model that included age, jamais vu, depersonalization, and precognitive dreams. Two indices for pathological dissociation (DES-taxon and DES ≥ 30) were not associated with déjà vu experiences. Our findings suggest that déjà vu experiences are unlikely to be core pathological dissociative experiences.

  8. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  9. Rising temperatures reduce global wheat production

    USDA-ARS?s Scientific Manuscript database

    Crop models are essential to assess the threat of climate change for food production but have not been systematically tested against temperature experiments, despite demonstrated uncertainty in temperature response. Herein, we compare 30 different wheat crop models against field experiments in which...

  10. World Ocean Circulation Experiment

    NASA Technical Reports Server (NTRS)

    Clarke, R. Allyn

    1992-01-01

    The oceans are an equal partner with the atmosphere in the global climate system. The World Ocean Circulation Experiment is presently being implemented to improve ocean models that are useful for climate prediction both by encouraging more model development but more importantly by providing quality data sets that can be used to force or to validate such models. WOCE is the first oceanographic experiment that plans to generate and to use multiparameter global ocean data sets. In order for WOCE to succeed, oceanographers must establish and learn to use more effective methods of assembling, quality controlling, manipulating and distributing oceanographic data.

  11. Experiences with a high-blockage model tested in the NASA Ames 12-foot pressure wind tunnel

    NASA Technical Reports Server (NTRS)

    Coder, D. W.

    1984-01-01

    Representation of the flow around full-scale ships was sought in subsonic wind tunnels in order to attain Reynolds numbers as high as possible. As part of the quest to attain the largest possible Reynolds number, large models with high blockage are used, which result in significant wall interference effects. Some experiences with such a high-blockage model tested in the NASA Ames 12-foot pressure wind tunnel are summarized. The main results of the experiment relating to wind tunnel wall interference effects are also presented.

  12. The principle of relativity, superluminality and EPR experiments. "Riserratevi sotto coverta ..."

    NASA Astrophysics Data System (ADS)

    Cocciaro, B.

    2015-07-01

    The principle of relativity claims the invariance of the results of experiments carried out in inertial reference frames if the system under examination is not in interaction with the outside world. This paper analyses a model suggested by J. S. Bell, and later developed by P. H. Eberhard, D. Bohm, and B. Hiley, according to which the EPR correlations would be due to superluminal exchanges between the various parts of the entangled system under examination. In the model, the existence of a privileged reference frame (PF) for the propagation of superluminal signals is hypothesized so that these superluminal signals may not give rise to causal paradoxes. According to this model, in an EPR experiment the entangled system interacts with the outer world, since the result of the experiment depends on an entity (the reference frame PF) that is not prepared by the experimenter. The existence of this privileged reference frame makes the model non-invariant under Lorentz transformations. In this paper, in opposition to what is claimed by the authors mentioned above, the perfect compatibility of the model with the theory of relativity is strongly maintained since, as already said, the principle of relativity does not require that the results of experiments carried out on systems interacting with the outside world be invariant.

  13. Modelling landscape evolution at the flume scale

    NASA Astrophysics Data System (ADS)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

    The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5 % and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h-1 and a standard deviation of 26 % was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.
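Large-scale LEMs of the kind tested here typically advance elevation with a detachment-limited stream-power incision law, E = K·A^m·S^n, driven by the drainage (discharge) network; the coefficients below are illustrative placeholders, not the calibrated values from this study:

```python
def stream_power_erosion(drainage_area_m2, slope, K=1e-5, m=0.5, n=1.0):
    """Detachment-limited incision rate E = K * A^m * S^n.
    K, m, n are illustrative placeholders, not calibrated values."""
    return K * drainage_area_m2 ** m * slope ** n

# Erosion grows with both contributing area (a discharge proxy) and local slope.
print(stream_power_erosion(1.0e4, 0.05))
```

Under non-uniform rainfall, the contributing-area term is replaced by an accumulated discharge that weights each cell by its local rainfall, which is how the model can capture the migration of flow paths toward the high-precipitation area.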

  14. Ultrasound Flow Mapping for the Investigation of Crystal Growth.

    PubMed

    Thieme, Norman; Bonisch, Paul; Meier, Dagmar; Nauber, Richard; Buttner, Lars; Dadzis, Kaspars; Patzold, Olf; Sylla, Lamine; Czarske, Jurgen

    2017-04-01

    High energy-conversion and cost efficiency are key for the transition to renewable energy sources, e.g., solar cells. The efficiency of multicrystalline solar cells can be improved by enhancing the understanding of the crystallization process, especially directional solidification. In this paper, a novel measurement system for the characterization of flow phenomena and solidification processes in low-temperature model experiments on the basis of ultrasound (US) Doppler velocimetry is described. It captures turbulent flow phenomena in two planes with a frame rate of 3.5 Hz and tracks the shape of the solid-liquid interface during multihour experiments. Time-resolved flow mapping is performed using four linear US arrays with a total of 168 transducer elements. Long-duration measurements are enabled through online, field-programmable gate array (FPGA)-based signal processing. Nine single US transducers allow for in situ tracking of the solid-liquid interface. Results of flow and solidification experiments in the model experiment are presented and compared with numerical simulation. The potential of the developed US system for measuring turbulent flows and for tracking the solidification front during a directional crystallization process is demonstrated. The results of the model experiments are in good agreement with numerical calculations and can be used for the validation of numerical models, especially the selection of the turbulence model.
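The velocity estimate underlying US Doppler velocimetry follows the standard Doppler relation v = c·f_d / (2·f_0·cosθ); the sound speed and carrier frequency defaults below are generic assumptions (water-like melt model, 8 MHz transducer), not the described system's hardware values:

```python
import math

def doppler_velocity(f_doppler_hz, f0_hz=8.0e6, c_m_s=1480.0, angle_deg=0.0):
    """Flow speed along the beam from the measured Doppler shift:
    v = c * f_d / (2 * f0 * cos(theta)). Defaults are generic
    assumptions, not values from the described system."""
    return c_m_s * f_doppler_hz / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# A 1 kHz shift at 8 MHz in a water-like fluid is roughly 9 cm/s axial flow.
print(doppler_velocity(1000.0))
```

Estimating f_d per range gate across the 168 array elements is what turns this single-point relation into the two-plane flow maps the record describes.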

  15. SHEEP AS AN EXPERIMENTAL MODEL FOR BIOMATERIAL IMPLANT EVALUATION

    PubMed Central

    SARTORETTO, SUELEN CRISTINA; UZEDA, MARCELO JOSÉ; MIGUEL, FÚLVIO BORGES; NASCIMENTO, JHONATHAN RAPHAELL; ASCOLI, FABIO; CALASANS-MAIA, MÔNICA DIUANA

    2016-01-01

    ABSTRACT Objective: Based on a literature review and on our own experience, this study proposes sheep as an experimental model to evaluate the bioactive capacity of bone substitute biomaterials, dental implant systems and orthopedic devices. The literature review covered relevant databases available on the Internet from 1990 to date, and was supplemented by our own experience. Methods: For their resemblance in size and weight to humans, sheep are quite suitable for use as an experimental model. However, information about their utility as an experimental model is limited. The different stages involving sheep experiments were discussed, including care during breeding and maintenance of the animals and obtaining specimens for laboratory processing, while avoiding unnecessary euthanasia of animals at the end of the study, in accordance with the guidelines of the 3Rs Program. Results: All experiments were completed without any complications regarding the animals and allowed us to evaluate hypotheses and explain their mechanisms. Conclusion: The sheep is an excellent animal model for evaluation of biomaterials for bone regeneration and dental implant osseointegration. From an ethical point of view, one sheep allows for up to 12 implants per animal, permitting them to be kept alive at the end of the experiments. Level of Evidence II, Retrospective Study. PMID:28149193

  16. The boundaries of instance-based learning theory for explaining decisions from experience.

    PubMed

    Gonzalez, Cleotilde

    2013-01-01

    Most demonstrations of how people make decisions in risky situations rely on decisions from description, where outcomes and their probabilities are explicitly stated. But recently, more attention has been given to decisions from experience where people discover these outcomes and probabilities through exploration. More importantly, risky behavior depends on how decisions are made (from description or experience), and although prospect theory explains decisions from description, a comprehensive model of decisions from experience is yet to be found. Instance-based learning theory (IBLT) explains how decisions are made from experience through interactions with dynamic environments (Gonzalez et al., 2003). The theory has shown robust explanations of behavior across multiple tasks and contexts, but it is becoming unclear what the theory is able to explain and what it does not. The goal of this chapter is to start addressing this problem. I will introduce IBLT and a recent cognitive model based on this theory: the IBL model of repeated binary choice; then I will discuss the phenomena that the IBL model explains and those that the model does not. The argument is for the theory's robustness but also for clarity in terms of concrete effects that the theory can or cannot account for. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Tropical forest response to elevated CO2: Model-experiment integration at the AmazonFACE site.

    NASA Astrophysics Data System (ADS)

    Frankenberg, C.; Berry, J. A.; Guanter, L.; Joiner, J.

    2014-12-01

    The terrestrial biosphere's response to current and future elevated atmospheric carbon dioxide (eCO2) is a large source of uncertainty in future projections of the C cycle, climate, and ecosystem functioning. In particular, the sensitivity of tropical rainforest ecosystems to eCO2 is largely unknown, even though the importance of tropical forests for biodiversity, carbon storage, and regional and global climate feedbacks is unambiguously recognized. The AmazonFACE (Free-Air Carbon Enrichment) project will be the first ecosystem-scale eCO2 experiment undertaken in the tropics, as well as the first to be undertaken in a mature forest. AmazonFACE provides the opportunity to integrate ecosystem modeling with experimental observations right from the beginning of the experiment, fostering a two-way exchange: models provide hypotheses to be tested, and observations deliver the crucial data to test and improve ecosystem models. We present a preliminary exploration of observed and expected process responses to eCO2 at the AmazonFACE site from the dynamic global vegetation model LPJ-GUESS, highlighting opportunities and pitfalls for model integration of tropical FACE experiments. The preliminary analysis provides baseline hypotheses, which are to be further developed with a follow-up multiple-model intercomparison. The analysis builds on the recently undertaken FACE-MDS (Model-Data Synthesis) project, which was applied to two temperate FACE experiments and goes beyond the traditional focus on comparing modeled end-target output. The approach has proven successful in identifying well (and less well) represented processes in models, which are separated into six clusters here: (1) Carbon fluxes, (2) Carbon pools, (3) Energy balance, (4) Hydrology, (5) Nutrient cycling, and (6) Population dynamics. Simulation performance under observed conditions at the AmazonFACE site (among others, from the Manaus K34 eddy flux tower) will highlight process-based model deficiencies and aid the separation of uncertainties arising from general ecosystem responses and those related to eCO2.

  18. Tropical forest response to elevated CO2: Model-experiment integration at the AmazonFACE site.

    NASA Astrophysics Data System (ADS)

    Fleischer, K.

    2015-12-01

    The terrestrial biosphere's response to current and future elevated atmospheric carbon dioxide (eCO2) is a large source of uncertainty in future projections of the C cycle, climate, and ecosystem functioning. In particular, the sensitivity of tropical rainforest ecosystems to eCO2 is largely unknown, even though the importance of tropical forests for biodiversity, carbon storage, and regional and global climate feedbacks is unambiguously recognized. The AmazonFACE (Free-Air Carbon Enrichment) project will be the first ecosystem-scale eCO2 experiment undertaken in the tropics, as well as the first to be undertaken in a mature forest. AmazonFACE provides the opportunity to integrate ecosystem modeling with experimental observations right from the beginning of the experiment, fostering a two-way exchange: models provide hypotheses to be tested, and observations deliver the crucial data to test and improve ecosystem models. We present a preliminary exploration of observed and expected process responses to eCO2 at the AmazonFACE site from the dynamic global vegetation model LPJ-GUESS, highlighting opportunities and pitfalls for model integration of tropical FACE experiments. The preliminary analysis provides baseline hypotheses, which are to be further developed with a follow-up multiple-model intercomparison. The analysis builds on the recently undertaken FACE-MDS (Model-Data Synthesis) project, which was applied to two temperate FACE experiments and goes beyond the traditional focus on comparing modeled end-target output. The approach has proven successful in identifying well (and less well) represented processes in models, which are separated into six clusters here: (1) Carbon fluxes, (2) Carbon pools, (3) Energy balance, (4) Hydrology, (5) Nutrient cycling, and (6) Population dynamics. Simulation performance under observed conditions at the AmazonFACE site (among others, from the Manaus K34 eddy flux tower) will highlight process-based model deficiencies and aid the separation of uncertainties arising from general ecosystem responses and those related to eCO2.

  19. Testing Two Path Models to Explore Relationships between Students' Experiences of the Teaching-Learning Environment, Approaches to Learning and Academic Achievement

    ERIC Educational Resources Information Center

    Karagiannopoulou, Evangelia; Milienos, Fotios S.

    2015-01-01

    The study explores the relationships between students' experiences of the teaching-learning environment and their approaches to learning, and the effects of these variables on academic achievement. Two three-stage models were tested with structural equation modelling techniques. The "Approaches and Study Skills Inventory for Students"…

  20. The SIOP Model: Transforming the Experiences of College Professors. Part I. Lesson Planning, Building Background, and Comprehensible Input

    ERIC Educational Resources Information Center

    Salcedo, Diana M.

    2010-01-01

    This article, the first of two, presents the introduction, context, and analysis of professor experiences in an on-going research project for implementing a new educational model in a bilingual teacher's college in Bogotá, Colombia. The model, the sheltered instruction observation protocol (SIOP), promotes eight components for a bilingual education…

  1. Metal powder absorptivity: Modeling and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  2. Metal powder absorptivity: Modeling and experiment

    DOE PAGES

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.; ...

    2016-08-10

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  3. Combining Experiments and Simulation of Gas Absorption for Teaching Mass Transfer Fundamentals: Removing CO2 from Air Using Water and NaOH

    ERIC Educational Resources Information Center

    Clark, William M.; Jackson, Yaminah Z.; Morin, Michael T.; Ferraro, Giacomo P.

    2011-01-01

    Laboratory experiments and computer models for studying the mass transfer process of removing CO2 from air using water or dilute NaOH solution as absorbent are presented. Models tie experiment to theory and give a visual representation of concentration profiles and also illustrate the two-film theory and the relative importance of various…

  4. Working memory for braille is shaped by experience

    PubMed Central

    Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco

    2011-01-01

    Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch [1] and Baddeley [2], is presented where the connection strengths of a highly cross-modal network are altered through experience. PMID:21655448

  5. NBC Hazard Prediction Model Capability Analysis

    DTIC Science & Technology

    1999-09-01

    tactical units surveyed, only the 82nd Airborne Division indicated any real experience with either model. The tactical units surveyed did use some form…Tracer Experiment (1987) and ETEX = European Tracer Experiment (1994). These data sets include Phase I Dugway data, the Prairie Grass data set…

  6. Wave-Sediment Interaction in Muddy Environments: A Field Experiment

    DTIC Science & Technology

    2008-01-01

    project includes a field experiment on the Atchafalaya shelf, Louisiana, in Years 1 and 2 (2007-2008) and a data analysis and modeling effort in Year 3…2008), in collaboration with other researchers funded by the ONR CG program. The pilot experiment has tested the instrumentation and data analysis…1993; Foda et al., 1993). With the exception of liquefaction processes, these models assume a single, well-defined mud phase. However

  7. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272
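
    The frozen-volume-fraction idea behind such constitutive models can be caricatured in a few lines: recovered strain during zero-load heating tracks a fraction phi(T) that falls from 1 to 0 across the glass transition. The sigmoid shape and transition temperatures below are invented placeholders, not the calibrated Chen and Lagoudas model.

```python
import math

def frozen_fraction(T, T_low=290.0, T_high=360.0):
    """Illustrative frozen volume fraction phi(T): 1 below the transition,
    0 above it (parameters are hypothetical, not calibrated to the paper)."""
    if T <= T_low:
        return 1.0
    if T >= T_high:
        return 0.0
    # smooth cosine interpolation between the two plateaus
    x = (T - T_low) / (T_high - T_low)
    return 0.5 * (1.0 + math.cos(math.pi * x))

def free_recovery_strain(T, stored_strain=0.10):
    """Strain remaining during zero-load heating: stored strain scaled by phi(T)."""
    return stored_strain * frozen_fraction(T)
```

    In this picture, a 10% programmed extension is fully retained below the transition and fully recovered above it, mirroring the 100% free recovery reported in the abstract.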

  8. Acoustic propagation in a thermally stratified atmosphere

    NASA Technical Reports Server (NTRS)

    Vanmoorhem, W. K.

    1985-01-01

    This report describes the activities during the fifth six-month period of the investigation of acoustic propagation in the atmosphere with a realistic temperature profile. Progress has been achieved in two major directions: comparisons between the lapse model and experimental data taken by NASA during the second tower experiment, and development of a model of propagation in an inversion. Data from the second tower experiment became available near the end of 1984 and some comparisons have been carried out, but this work is not complete. Problems with the temperature profiler during the experiment produced temperature profiles that are difficult to fit with the assumed variation of temperature with height, but in cases where reasonable fits have been obtained, agreement between the model and the experiments is close. The major weaknesses in the model appear to be the presence of discontinuities in some regions, the low sound levels predicted near the source height, and difficulties with the argument of the Hankel function falling outside the allowable range. Work on the inversion model has progressed slowly; the rays for that case are discussed, along with a simple energy-conservation model of sound-level enhancement in the inversion case.

  9. Results of the Greenland Ice Sheet Model Initialisation Experiments ISMIP6 - initMIP-Greenland

    NASA Astrophysics Data System (ADS)

    Goelzer, H.; Nowicki, S.; Edwards, T.; Beckley, M.; Abe-Ouchi, A.; Aschwanden, A.; Calov, R.; Gagliardini, O.; Gillet-Chaulet, F.; Golledge, N. R.; Gregory, J. M.; Greve, R.; Humbert, A.; Huybrechts, P.; Larour, E. Y.; Lipscomb, W. H.; Le clec'h, S.; Lee, V.; Kennedy, J. H.; Pattyn, F.; Payne, A. J.; Rodehacke, C. B.; Rückamp, M.; Saito, F.; Schlegel, N.; Seroussi, H. L.; Shepherd, A.; Sun, S.; van de Wal, R.; Ziemen, F. A.

    2016-12-01

    Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. The goal of this intercomparison exercise (initMIP-Greenland) is to compare, evaluate and improve the initialisation techniques used in the ice sheet modeling community and to estimate the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss final results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  10. On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang

    2015-02-01

    The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple BTGP and BTMGP in test cases and a multiphase flow computer experiment in a full-scale regenerator of a carbon capture unit. The use of the BTMGP with LMC cross-covariance helped to predict the computer experiments better than existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with LMC cross-covariance function.
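
    The LMC cross-covariance used here expresses the covariance across p outputs as a sum of coregionalization matrices B_q paired with scalar kernels k_q. A minimal numpy sketch of assembling such a covariance follows; the squared-exponential kernels and factor matrices are illustrative assumptions, not the BTMGP implementation.

```python
import numpy as np

def rbf_kernel(X, lengthscale):
    # squared-exponential kernel matrix k(x, x') = exp(-|x - x'|^2 / (2 l^2))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def lmc_covariance(X, A_factors, lengthscales):
    """Linear model of coregionalization covariance for p outputs at n inputs:
    K = sum_q kron(B_q, k_q(X, X)) with B_q = A_q A_q^T positive semidefinite.
    Returns an (n*p) x (n*p) matrix, ordered output-major."""
    n = X.shape[0]
    p = A_factors[0].shape[0]
    K = np.zeros((n * p, n * p))
    for A, ell in zip(A_factors, lengthscales):
        B = A @ A.T                      # coregionalization matrix (p x p)
        K += np.kron(B, rbf_kernel(X, ell))
    return K
```

    Because each B_q is built as A_q A_q^T, the full matrix is symmetric positive semidefinite by construction, which is what makes it usable as a GP covariance.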

  11. Dynamic and impact contact mechanics of geologic materials: Grain-scale experiments and modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, David M.; Hopkins, Mark A.; Ketcham, Stephen A.

    2013-06-18

    High-fidelity treatment of the generation and propagation of seismic waves in naturally occurring granular materials is becoming more practical given recent advancements in our ability to model complex particle shapes and their mechanical interaction. Of particular interest are the grain-scale processes that are activated by impact events and the characteristics of force transmission through grain contacts. To address this issue, we have developed a physics-based approach that involves laboratory experiments to quantify the dynamic contact and impact behavior of granular materials and incorporation of the observed behavior in discrete element models. The dynamic experiments do not involve particle damage, and emphasis is placed on measured values of contact stiffness and frictional loss. The normal stiffness observed in dynamic contact experiments at low frequencies (e.g., 10 Hz) is shown to be in good agreement with quasistatic experiments on quartz sand. The results of impact experiments - which involve moderate to extensive levels of particle damage - are presented for several types of naturally occurring granular materials (several quartz sands, magnesite and calcium carbonate ooids). Implementation of the experimental findings in discrete element models is discussed and the results of impact simulations involving up to 5 × 10⁵ grains are presented.
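
    A common starting point for grain-contact normal stiffness in discrete element models is Hertzian elastic contact. The sketch below uses that textbook relation with placeholder material parameters; the paper's measured stiffnesses, not this formula, drive the models described above.

```python
def hertz_normal_force(delta, R_eff, E_eff):
    """Hertzian elastic contact: F = (4/3) * E_eff * sqrt(R_eff) * delta^(3/2).

    delta : normal overlap (m), force is zero for non-touching grains
    R_eff : effective radius, 1/R_eff = 1/R1 + 1/R2 (m)
    E_eff : effective modulus, 1/E_eff = (1-nu1^2)/E1 + (1-nu2^2)/E2 (Pa)
    """
    if delta <= 0.0:
        return 0.0
    return (4.0 / 3.0) * E_eff * R_eff**0.5 * delta**1.5

def hertz_stiffness(delta, R_eff, E_eff):
    """Tangent normal stiffness dF/d(delta) = 2 * E_eff * sqrt(R_eff * delta);
    note it grows with overlap, i.e. the contact stiffens under load."""
    if delta <= 0.0:
        return 0.0
    return 2.0 * E_eff * (R_eff * delta) ** 0.5
```

    The overlap-dependent stiffness is the reason contact stiffness measured at one load level cannot simply be reused at another.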

  12. A comparison of two Stokes ice sheet models applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Tong; Price, Stephen F.; Ju, Lili

    Here, we present a comparison of the numerics and simulation results for two "full" Stokes ice sheet models, FELIX-S (Leng et al. 2012) and Elmer/Ice. The models are applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d). For the diagnostic experiment (P75D) the two models give similar results (< 2 % difference with respect to along-flow velocities) when using identical geometries and computational meshes, which we interpret as an indication of inherent consistencies and similarities between the two models. For the standard (Stnd), P75S, and P75R prognostic experiments, we find that FELIX-S (Elmer/Ice) grounding lines are relatively more retreated (advanced), results that are consistent with minor differences observed in the diagnostic experiment results and that we show to be due to different choices in the implementation of basal boundary conditions in the two models. While we are not able to argue for the relative favorability of either implementation, we do show that these differences decrease with increasing horizontal (i.e., both along- and across-flow) grid resolution and that grounding-line positions for FELIX-S and Elmer/Ice converge to within the estimated truncation error for Elmer/Ice. Stokes model solutions are often treated as an accuracy metric in model intercomparison experiments, but computational cost may not always allow for the use of model resolution within the regime of asymptotic convergence. In this case, we propose that an alternative estimate for the uncertainty in the grounding-line position is the span of grounding-line positions predicted by multiple Stokes models.

  13. A comparison of two Stokes ice sheet models applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d)

    DOE PAGES

    Zhang, Tong; Price, Stephen F.; Ju, Lili; ...

    2017-01-25

    Here, we present a comparison of the numerics and simulation results for two "full" Stokes ice sheet models, FELIX-S (Leng et al. 2012) and Elmer/Ice. The models are applied to the Marine Ice Sheet Model Intercomparison Project for plan view models (MISMIP3d). For the diagnostic experiment (P75D) the two models give similar results (< 2 % difference with respect to along-flow velocities) when using identical geometries and computational meshes, which we interpret as an indication of inherent consistencies and similarities between the two models. For the standard (Stnd), P75S, and P75R prognostic experiments, we find that FELIX-S (Elmer/Ice) grounding lines are relatively more retreated (advanced), results that are consistent with minor differences observed in the diagnostic experiment results and that we show to be due to different choices in the implementation of basal boundary conditions in the two models. While we are not able to argue for the relative favorability of either implementation, we do show that these differences decrease with increasing horizontal (i.e., both along- and across-flow) grid resolution and that grounding-line positions for FELIX-S and Elmer/Ice converge to within the estimated truncation error for Elmer/Ice. Stokes model solutions are often treated as an accuracy metric in model intercomparison experiments, but computational cost may not always allow for the use of model resolution within the regime of asymptotic convergence. In this case, we propose that an alternative estimate for the uncertainty in the grounding-line position is the span of grounding-line positions predicted by multiple Stokes models.

  14. An effective hierarchical model for the biomolecular covalent bond: an approach integrating artificial chemistry and an actual terrestrial life system.

    PubMed

    Oohashi, Tsutomu; Ueno, Osamu; Maekawa, Tadao; Kawai, Norie; Nishina, Emi; Honda, Manabu

    2009-01-01

    Under the AChem paradigm and the programmed self-decomposition (PSD) model, we propose a hierarchical model for the biomolecular covalent bond (HBCB model). This model assumes that terrestrial organisms arrange their biomolecules in a hierarchical structure according to the energy strength of their covalent bonds. It also assumes that they have evolutionarily selected the PSD mechanism of turning biological polymers (BPs) into biological monomers (BMs) as an efficient biomolecular recycling strategy. We have examined the validity and effectiveness of the HBCB model by coordinating two complementary approaches: biological experiments using extant terrestrial life, and simulation experiments using an AChem system. Biological experiments have shown that terrestrial life possesses a PSD mechanism as an endergonic, genetically regulated process and that hydrolysis, which decomposes a BP into BMs, is one of the main processes of such a mechanism. In simulation experiments, we compared different virtual self-decomposition processes. The virtual species in which the self-decomposition process mainly involved covalent bond cleavage from a BP to BMs showed evolutionary superiority over other species in which the self-decomposition process involved cleavage from BP to classes lower than BM. These converging findings strongly support the existence of PSD and the validity and effectiveness of the HBCB model.

  15. Reputation Effects in Social Networks Do Not Promote Cooperation: An Experimental Test of the Raub & Weesie Model.

    PubMed

    Corten, Rense; Rosenkranz, Stephanie; Buskens, Vincent; Cook, Karen S

    2016-01-01

    Despite the popularity of the notion that social cohesion in the form of dense social networks promotes cooperation in Prisoner's Dilemmas through reputation, very little experimental evidence for this claim exists. We address this issue by testing hypotheses from one of the few rigorous game-theoretic models on this topic, the Raub & Weesie model, in two incentivized lab experiments. In the experiments, 156 subjects played repeated two-person PDs in groups of six. In the "atomized interactions" condition, subjects were only informed about the outcomes of their own interactions, while in the "embedded" condition, subjects were informed about the outcomes of all interactions in their group, allowing for reputation effects. The design of the experiments followed the specification of the RW model as closely as possible. For those aspects of the model that had to be modified to allow practical implementation in an experiment, we present additional analyses that show that these modifications do not affect the predictions. Contrary to expectations, we do not find that cooperation is higher in the embedded condition than in the atomized interactions condition. Instead, our results are consistent with an interpretation of the RW model that includes random noise, or with learning models of cooperation in networks.
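
    The underlying game in both conditions is a repeated two-person Prisoner's Dilemma. As an illustration only (the payoff values and strategies below are hypothetical, not those used in the Raub & Weesie experiments), here is a minimal Python sketch of the repeated-PD bookkeeping:

```python
# Hypothetical payoff table for one PD round, (row, column) payoffs,
# with the usual ordering T > R > P > S (here 5 > 3 > 1 > 0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_repeated_pd(strategy_a, strategy_b, rounds):
    """Play a repeated two-person PD; each strategy sees (own, other) histories."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own, other):
    # cooperate first, then mirror the opponent's previous move
    return 'C' if not other else other[-1]

def always_defect(own, other):
    return 'D'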

  16. The organization of an autonomous learning system

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    The organization of systems that learn from experience is examined, human beings and animals being prime examples of such systems. How is their information processing organized? They build an internal model of the world and base their actions on the model. The model is dynamic and predictive, and it includes the systems' own actions and their effects. In modeling such systems, a large pattern of features represents a moment of the system's experience. Some of the features are provided by the system's senses, some control the system's motors, and the rest have no immediate external significance. A sequence of such patterns then represents the system's experience over time. By storing such sequences appropriately in memory, the system builds a world model based on experience. In addition to the essential function of memory, fundamental roles are played by a sensory system that makes raw information about the world suitable for memory storage and by a motor system that affects the world. The relation of sensory and motor systems to the memory is discussed, together with how favorable actions can be learned and unfavorable actions can be avoided. Results in classical learning theory are explained in terms of the model, more advanced forms of learning are discussed, and the relevance of the model to the frame problem of robotics is examined.

  17. Next Generation Climate Change Experiments Needed to Advance Knowledge and for Assessment of CMIP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzenberger, John; Arnott, James; Wright, Alyson

    2014-10-30

    The Aspen Global Change Institute hosted a technical science workshop entitled, “Next generation climate change experiments needed to advance knowledge and for assessment of CMIP6,” on August 4-9, 2013 in Aspen, CO. Jerry Meehl (NCAR), Richard Moss (PNNL), and Karl Taylor (LLNL) served as co-chairs for the workshop, which included the participation of 32 scientists representing most of the major climate modeling centers, for a total of 160 participant days. In August 2013, AGCI gathered a high-level meeting of representatives from major climate modeling centers around the world to assess achievements and lessons learned from the most recent generation of coordinated modeling experiments known as the Coupled Model Intercomparison Project Phase 5 (CMIP5), as well as to scope out the science questions and coordination structure desired for the next anticipated phase of modeling experiments, called CMIP6. The workshop allowed for reflection on the coordination of the CMIP5 process as well as intercomparison of model results, such as were assessed in the most recent IPCC 5th Assessment Report, Working Group 1. For example, a slide from Masahiro Watanabe examined performance of a range of models capturing the Atlantic Meridional Overturning Circulation (AMOC).

  18. Reputation Effects in Social Networks Do Not Promote Cooperation: An Experimental Test of the Raub & Weesie Model

    PubMed Central

    Corten, Rense; Rosenkranz, Stephanie; Buskens, Vincent; Cook, Karen S.

    2016-01-01

    Despite the popularity of the notion that social cohesion in the form of dense social networks promotes cooperation in Prisoner’s Dilemmas through reputation, very little experimental evidence for this claim exists. We address this issue by testing hypotheses from one of the few rigorous game-theoretic models on this topic, the Raub & Weesie model, in two incentivized lab experiments. In the experiments, 156 subjects played repeated two-person PDs in groups of six. In the “atomized interactions” condition, subjects were only informed about the outcomes of their own interactions, while in the “embedded” condition, subjects were informed about the outcomes of all interactions in their group, allowing for reputation effects. The design of the experiments followed the specification of the RW model as closely as possible. For those aspects of the model that had to be modified to allow practical implementation in an experiment, we present additional analyses that show that these modifications do not affect the predictions. Contrary to expectations, we do not find that cooperation is higher in the embedded condition than in the atomized interactions condition. Instead, our results are consistent with an interpretation of the RW model that includes random noise, or with learning models of cooperation in networks. PMID:27366907

  19. Structural Equation Modeling of Cultural Competence of Nurses Caring for Foreign Patients.

    PubMed

    Ahn, Jung-Won

    2017-03-01

    This study aimed to construct and test a hypothetical model including factors related to the cultural competence of nurses caring for foreign patients. The transcultural nursing immersion experience model and anxiety/uncertainty management theory were used to verify the paths between the variables. The exogenous variables were multicultural experience, ethnocentric attitude, and organizational cultural competence support. The endogenous variables were intercultural anxiety, intercultural uncertainty, coping strategy, and cultural competence. Participants were 275 nurses working in general hospitals in Seoul and Kyung-Gi Do, Korea. Each nurse in this study had experience of caring for over 10 foreign patients. Data were collected using a structured questionnaire and analyzed with SPSS statistical software and the AMOS module. The overall fitness indices of the hypothetical model indicated a good fit. Multicultural experience, ethnocentric attitude, organizational cultural competence support, and intercultural uncertainty were found to have direct and indirect effects on the cultural competence of nurses, while coping strategy only had a direct effect. Intercultural anxiety did not have a significant effect on cultural competence. This model explained 59.1% of the variance in the nurses' cultural competence when caring for foreign patients. Nurses' cultural competence can be developed by offering multicultural nursing education, increasing direct/indirect multicultural experience, and sharing problem-solving experience to promote the coping ability of nurses. Organizational support can be achieved by preparing relevant personnel and resources. Subsequently, the quality of nursing care for foreign patients will ultimately be improved. Copyright © 2017. Published by Elsevier B.V.

  20. Impacts into quartz sand: Crater formation, shock metamorphism, and ejecta distribution in laboratory experiments and numerical models

    NASA Astrophysics Data System (ADS)

    Wünnemann, Kai; Zhu, Meng-Hua; Stöffler, Dieter

    2016-10-01

    We investigated the ejection mechanics using a complementary approach of cratering experiments, including the microscopic analysis of material sampled from these experiments, and 2-D numerical modeling of vertical impacts. The study is based on cratering experiments in quartz sand targets performed at the NASA Ames Vertical Gun Range. In these experiments, the preimpact location in the target and the final position of ejecta were determined by using color-coded sand and a catcher system for the ejecta. The results were compared with numerical simulations of the cratering and ejection process to validate the iSALE shock physics code. In turn the models provide further details on the ejection velocities and angles. We quantify the general assumption that ejecta thickness decreases with distance according to a power law and that the relative proportion of shocked material in the ejecta increases with distance. We distinguish three types of shock metamorphic particles: (1) melt particles, (2) shock lithified aggregates, and (3) shock-comminuted grains. The agreement between experiment and model was excellent, which provides confidence that the models can predict ejection angles, velocities, and the degree of shock loading of material expelled from a crater accurately if impact parameters such as impact velocity, impactor size, and gravity are varied beyond the experimental limitations. This study is relevant for a quantitative assessment of impact gardening on planetary surfaces and the evolution of regolith layers on atmosphereless bodies.
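
    The power-law thinning of ejecta with distance can be quantified by a straight-line fit in log-log space. A small numpy sketch (the data and coefficients below are synthetic, purely for illustration):

```python
import numpy as np

def fit_ejecta_power_law(r, thickness):
    """Fit thickness(r) = a * r**b by least squares in log-log space.
    Returns (a, b); for ballistic ejecta blankets b is typically near -3."""
    b, log_a = np.polyfit(np.log(r), np.log(thickness), 1)
    return float(np.exp(log_a)), float(b)
```

    Fitting in log space turns the power law into a line, so ordinary least squares recovers both the prefactor and the exponent.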

  1. Hohlraum modeling for opacity experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Dodd, E. S.; DeVolder, B. G.; Martin, M. E.; Krasheninnikova, N. S.; Tregillis, I. L.; Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Moore, A. S.; Kline, J. L.; Johns, H. M.; Liedahl, D. A.; Cardenas, T.; Olson, R. E.; Wilde, B. H.; Urbatsch, T. J.

    2018-06-01

    This paper discusses the modeling of experiments that measure iron opacity in local thermodynamic equilibrium (LTE) using laser-driven hohlraums at the National Ignition Facility (NIF). A previous set of experiments fielded at Sandia's Z facility [Bailey et al., Nature 517, 56 (2015)] have shown up to factors of two discrepancies between the theory and experiment, casting doubt on the validity of the opacity models. The purpose of the new experiments is to make corroborating measurements at the same densities and temperatures, with the initial measurements made at a temperature of 160 eV and an electron density of 0.7 × 10²² cm⁻³. The X-ray hot spots of a laser-driven hohlraum are not in LTE, and the iron must be shielded from a direct line-of-sight to obtain the data [Perry et al., Phys. Rev. B 54, 5617 (1996)]. This shielding is provided either by internal structure (e.g., baffles) or by external wall shapes that divide the hohlraum into a laser-heated portion and an LTE portion. In contrast, most inertial confinement fusion hohlraums are simple cylinders lacking complex gold walls, and the design codes are not typically applied to targets like those for the opacity experiments. We will discuss the initial basis for the modeling using LASNEX, and the subsequent modeling of five different hohlraum geometries that have been fielded on the NIF to date. This includes a comparison of calculated and measured radiation temperatures.

  2. Development of probabilistic regional climate scenario in East Asia

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Ishizaki, N. N.

    2015-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in East Asia (CORDEX-EA and Japan), the probability distribution of 2 m air temperature was estimated using a regression model developed for this purpose. The method is easily applicable to other regions and other physical quantities, and also to downscaling to finer scales, depending on the availability of observational datasets. Probabilistic climate information for the present (1969-1998) and future (2069-2098) climate was developed using 21 CMIP3 SRES A1b scenario models and observational data (CRU_TS3.22 and University of Delaware in CORDEX-EA; NIAES AMeDAS mesh data in Japan). The prototype probabilistic information for CORDEX-EA and Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Appropriate combinations of statistical methods and optimization of climate ensemble experiments using multi-General Circulation Model (GCM) and multi-regional climate model (RCM) ensemble downscaling experiments are investigated.
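
    One simple way to turn a multi-model ensemble into probabilistic information, sketched here under the strong and purely illustrative assumption that ensemble spread is Gaussian, is to fit a normal distribution to the ensemble values and read off quantiles. The study above uses a dedicated regression model rather than this shortcut.

```python
from statistics import NormalDist, mean, stdev

def ensemble_quantiles(anomalies, probs=(0.05, 0.5, 0.95)):
    """Fit a normal distribution to ensemble values (e.g. 2 m temperature
    anomalies from 21 models) and return the requested quantiles."""
    dist = NormalDist(mu=mean(anomalies), sigma=stdev(anomalies))
    return [dist.inv_cdf(p) for p in probs]
```

    With symmetric sample values the 5th and 95th percentiles sit symmetrically around the median, as expected for a fitted normal.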

  3. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects, both nominal and off-nominal, such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  4. Shock, release and reshock of PBX 9502: experiments and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Tariq; Gustavsen, Richard; Whitworth, Nicholas; Menikoff, Ralph; Tarver, Craig; Handley, Caroline; Bartram, Brian

    2017-06-01

    We examine shock, release and reshock into the tri-amino-tri-nitro-benzene (TATB) based explosive PBX 9502 (95% TATB, 5% Kel-F 800) from both an experimental and modeling point of view. The experiments are performed on the 2-stage light gas gun at Los Alamos National Laboratory and are composed of a multi-layered impactor impinging on PBX 9502 backed by a polymethylmethacrylate window. The objective is to initially shock the PBX 9502 in the 7 GPa range (too weak to start significant reaction), then allow a rarefaction fan to release the material to a lower pressure/temperature state. Following this release, a strong second shock will recompress the PBX. If the rarefaction fan releases the PBX to a very low pressure, the ensuing second shock can increase the entropy and temperature substantially more than in previous double-shock experiments without an intermediate release. Predictions from a variety of reactive burn models (AWSD, CREST, Ignition and Growth, SURF) demonstrate significantly different behaviors and thus the experiments are an excellent validation test of the models, and may suggest improvements for subsequent modeling efforts.

  5. Modelling the effect of shear strength on isentropic compression experiments

    NASA Astrophysics Data System (ADS)

    Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary

    2017-01-01

    Isentropic compression experiments (ICE) are a way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre thick metal samples are subjected to pressures on the order of 10-10² GPa, while the yield strength of the material can be as low as 10⁻² GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However making this approximation belies the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. Numerical solutions of the governing equations will then be presented for problems relevant to ICEs in order to investigate the effects of shear strength compared with a model based purely on hydrodynamics.

  6. POD experiments using real and simulated time-sharing observations for GEO satellites in C-band transfer ranging system

    NASA Astrophysics Data System (ADS)

    Fen, Cao; XuHai, Yang; ZhiGang, Li; ChuGang, Feng

    2016-08-01

    The normal consecutive observing mode in the Chinese Area Positioning System (CAPS) can supply observations of only one GEO satellite per day from a given station, which cannot satisfy the project requirement of observing many GEO satellites in one day. To obtain observations of several GEO satellites in one day, as with GPS/GLONASS/Galileo/BeiDou, a time-sharing observing mode for GEO satellites in CAPS was investigated. The principle of the time-sharing observing mode is illustrated, followed by Precise Orbit Determination (POD) experiments using simulated time-sharing observations from 2005 and real time-sharing observations from 2015. The simulation experiments show that time-sharing observation of 6 GEO satellites every 2 h achieves nearly the same orbit precision as the consecutive observing mode. In the POD experiments using the real time-sharing observations, the POD precision for ZX12# and Yatai7# is about 3.234 m and 2.570 m, respectively, which indicates that the time-sharing observing mode is appropriate for the C-band transfer ranging (CBTR) system and can realize observation of many GEO satellites in one day.

  7. Vehicular sources in acoustic propagation experiments

    NASA Technical Reports Server (NTRS)

    Prado, Gervasio; Fitzgerald, James; Arruda, Anthony; Parides, George

    1990-01-01

    One of the most important uses of acoustic propagation models lies in the area of detection and tracking of vehicles. Propagation models are used to compute transmission losses in performance prediction models and to analyze the results of past experiments. Vehicles can also provide the means for cost-effective experiments to measure acoustic propagation conditions over significant ranges. In order to properly correlate the information provided by the experimental data and the propagation models, the following issues must be taken into consideration: the phenomenology of the vehicle noise sources must be understood and characterized; the vehicle's location or 'ground truth' must be accurately reproduced and synchronized with the acoustic data; and sufficient meteorological data must be collected to support the requirements of the propagation models. The experimental procedures and instrumentation needed to carry out propagation experiments are discussed. Illustrative results are presented for two cases. First, a helicopter was used to measure propagation losses at a range of 1 to 10 km. Second, a heavy diesel-powered vehicle was used to measure propagation losses in the 300 to 2200 m range.

  8. Vehicular sources in acoustic propagation experiments

    NASA Astrophysics Data System (ADS)

    Prado, Gervasio; Fitzgerald, James; Arruda, Anthony; Parides, George

    1990-12-01

    One of the most important uses of acoustic propagation models lies in the area of detection and tracking of vehicles. Propagation models are used to compute transmission losses in performance prediction models and to analyze the results of past experiments. Vehicles can also provide the means for cost-effective experiments to measure acoustic propagation conditions over significant ranges. In order to properly correlate the information provided by the experimental data and the propagation models, the following issues must be taken into consideration: the phenomenology of the vehicle noise sources must be understood and characterized; the vehicle's location or 'ground truth' must be accurately reproduced and synchronized with the acoustic data; and sufficient meteorological data must be collected to support the requirements of the propagation models. The experimental procedures and instrumentation needed to carry out propagation experiments are discussed. Illustrative results are presented for two cases. First, a helicopter was used to measure propagation losses at a range of 1 to 10 km. Second, a heavy diesel-powered vehicle was used to measure propagation losses in the 300 to 2200 m range.

  9. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example, observation error, model covariances, ensemble size, and perturbation distribution in the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
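
    The OSSE idea described above, in which a known "truth" generates synthetic observations that are then assimilated into an imperfect model, can be illustrated with a scalar ensemble Kalman update. All names and numbers here are hypothetical; BEATBOX's actual machinery (BOXMOX/KPP chemistry, adjoint options) is far richer.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, obs, obs_sd):
    """Scalar ensemble Kalman filter analysis step with perturbed observations."""
    xb = ensemble
    Pb = np.var(xb, ddof=1)                      # background (ensemble) variance
    K = Pb / (Pb + obs_sd**2)                    # Kalman gain
    perturbed = obs + rng.normal(0, obs_sd, xb.size)
    return xb + K * (perturbed - xb)

# OSSE: the "truth" generates a synthetic observation; a biased ensemble is corrected
truth = 40.0                        # e.g. a hypothetical O3 mixing ratio in ppb
obs = truth + rng.normal(0, 2.0)    # observation simulator adds observation error
prior = rng.normal(50.0, 5.0, 40)   # biased background ensemble
analysis = enkf_update(prior, obs, 2.0)
```

    Because the synthetic truth is known, the experimenter can measure exactly how much the assimilation reduced the background bias, which is the point of an OSSE.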

  10. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
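
    The essence of BMC, inferring a posterior over a simulation input from a measured trace, can be sketched with a one-parameter grid posterior. The quadratic "simulator" and all numbers are illustrative stand-ins, not the hydrocode or Ta cold-curve parameters used in the paper.

```python
import numpy as np

def calibrate(theta_grid, simulator, t, v_obs, noise_sd):
    """Grid-based Bayesian calibration of a single model input:
    flat prior, independent Gaussian likelihood on the velocimetry trace."""
    log_post = np.array([
        -0.5 * np.sum((v_obs - simulator(th, t))**2) / noise_sd**2
        for th in theta_grid
    ])
    post = np.exp(log_post - log_post.max())   # stabilize before normalizing
    return post / post.sum()

# Toy "hydrocode": a velocity ramp with one unknown stiffness parameter theta
simulator = lambda theta, t: theta * t**2
t = np.linspace(0.0, 1.0, 50)
truth = 3.0
rng = np.random.default_rng(1)
v_obs = simulator(truth, t) + rng.normal(0.0, 0.05, t.size)
grid = np.linspace(2.0, 4.0, 201)
posterior = calibrate(grid, simulator, t, v_obs, 0.05)
theta_map = grid[np.argmax(posterior)]
```

    The posterior width directly quantifies the input uncertainty; the paper's framework additionally handles functional output, nuisance parameters, and model discrepancy.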

  11. Modeling of Non-Homogeneous Containment Atmosphere in the ThAI Experimental Facility Using a CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babic, Miroslav; Kljenak, Ivo; Mavko, Borut

    2006-07-01

    The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the simulation was to reproduce the non-homogeneous temperature and species concentration distributions in the ThAI experimental facility. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also introduced. The calculated time-dependent variables together with temperature and concentration distributions at the end of experiment phases are compared to experimental results. (authors)

  12. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    NASA Astrophysics Data System (ADS)

    Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.

    2011-09-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  13. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE PAGES

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.; ...

    2017-03-21

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  14. Comparison of retention models for polymers 1. Poly(ethylene glycol)s.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2006-10-27

    The suitability of three different retention models to predict the retention times of poly(ethylene glycol)s (PEGs) in gradient and isocratic chromatography was investigated. The models investigated were the linear (LSSM) and the quadratic (QSSM) solvent strength models. In addition, a model describing the retention behaviour of polymers was extended to account for gradient elution (PM). It was found that all models can properly predict gradient retention volumes provided the extraction of the analyte-specific parameters is performed from gradient experiments as well. The LSSM and QSSM in principle cannot describe retention behaviour under critical or SEC conditions. Since the PM is designed to cover all three modes of polymer chromatography, it is therefore superior to the other models. However, the determination of the analyte-specific parameters, which are needed to calibrate the retention behaviour, strongly depends on a suitable selection of initial experiments. A useful strategy for a purposeful selection of these calibration experiments is proposed.
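
    For reference, the linear solvent strength model (LSSM) named above relates the isocratic retention factor k to the mobile-phase composition phi by log10 k = log10 kw - S * phi. A minimal sketch; the function names and numeric values are illustrative, not fitted PEG parameters:

```python
def lssm_retention_factor(log_kw, S, phi):
    """Linear solvent strength model: log10 k = log10(kw) - S * phi,
    with phi the organic-modifier volume fraction (0..1)."""
    return 10.0 ** (log_kw - S * phi)

def retention_time(k, t0):
    """Isocratic retention time from retention factor k and dead time t0."""
    return t0 * (1.0 + k)

# Illustrative values: k = 1 at phi = 0.3 for log_kw = 3, S = 10
k = lssm_retention_factor(3.0, 10.0, 0.3)
t_r = retention_time(k, t0=1.0)   # 2.0 (in the same time units as t0)
```

    Gradient prediction then integrates this relation along the programmed phi(t), which is where the models in the paper begin to differ.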

  15. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  16. Modeling to predict pilot performance during CDTI-based in-trail following experiments

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.
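
    Of the three spacing criteria, the Constant Time Delay rule is the easiest to sketch: each follower steers toward the position its leader occupied tau seconds earlier. The point-mass dynamics, gains, and numbers below are hypothetical illustrations, not the paper's pilot model:

```python
from collections import deque
import numpy as np

def constant_time_delay_string(v_lead=70.0, tau=60.0, kp=0.02, kd=0.4,
                               n_follow=3, dt=1.0, t_end=1200.0):
    """Point-mass in-trail following under the Constant Time Delay criterion:
    follower i aims at the position aircraft i-1 occupied tau seconds ago.
    Returns final spacing errors (m) relative to the ideal v_lead * tau."""
    steps = int(t_end / dt)
    delay = int(tau / dt)
    # Initial spacing 20% wider than the ideal v_lead * tau
    x = np.array([-i * v_lead * tau * 1.2 for i in range(n_follow + 1)])
    v = np.full(n_follow + 1, v_lead)
    hist = [deque([x[i]] * delay, maxlen=delay) for i in range(n_follow + 1)]
    for _ in range(steps):
        # Read each leader's position from `delay` steps ago, then log current
        targets = [hist[i][0] for i in range(n_follow)]
        for i in range(n_follow + 1):
            hist[i].append(x[i])
        a = np.zeros(n_follow + 1)
        for i in range(1, n_follow + 1):
            err = targets[i - 1] - x[i]
            a[i] = kp * err + kd * (v[i - 1] - v[i])   # simple PD speed command
        v = v + a * dt
        x = x + v * dt
    return x[:-1] - x[1:] - v_lead * tau

spacing_errors = constant_time_delay_string()
```

    With these (overdamped) gains the initial spacing errors decay toward zero, so each pair settles to the ideal v_lead * tau separation; the paper's model adds vertical dynamics and a pilot-response element on top of this kind of loop.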

  17. Empathy and child neglect: a theoretical model.

    PubMed

    De Paul, Joaquín; Guibert, María

    2008-11-01

    To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model, parental behavior aimed at satisfying a child's need is considered a helping behavior and, as a consequence, child neglect is considered a specific type of non-helping behavior. The central hypothesis of the theoretical model presented here suggests that neglectful parents cannot develop the helping response set to care for their children either because observing a child's signal of need does not lead to the experience of emotions that motivate helping, or because the parents experience these emotions but specific cognitions modify the motivation to help. The present theoretical model suggests that different typologies of neglectful parents could be developed based on the different reasons parents might not experience emotions that motivate helping behaviors. The model can be helpful in promoting new empirical studies on the etiology of different groups of neglectful families.

  18. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full-factorial, in-simulator experiment on a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern are the large sensitivity difference for one control loader condition and the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  19. Genetic algorithm learning in a New Keynesian macroeconomic setup.

    PubMed

    Hommes, Cars; Makarewicz, Tomasz; Massaro, Domenico; Smits, Tom

    2017-01-01

    In order to understand heterogeneous behavior amongst agents, empirical data from Learning-to-Forecast (LtF) experiments can be used to construct learning models. This paper follows up on Assenza et al. (2013) by using a Genetic Algorithms (GA) model to replicate the results from their LtF experiment. In this GA model, individuals optimize an adaptive, a trend following and an anchor coefficient in a population of general prediction heuristics. We replicate experimental treatments in a New-Keynesian environment with increasing complexity and use Monte Carlo simulations to investigate how well the model explains the experimental data. We find that the evolutionary learning model is able to replicate the three different types of behavior, i.e. convergence to steady state, stable oscillations and dampened oscillations in the treatments using one GA model. Heterogeneous behavior can thus be explained by an adaptive, anchor and trend extrapolating component and the GA model can be used to explain heterogeneous behavior in LtF experiments with different types of complexity.
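
    The flavor of such a GA, a population of prediction heuristics whose adaptive and trend-extrapolating coefficients evolve by selection and mutation, can be sketched as follows. The heuristic form, gains, and toy price path are illustrative stand-ins for the paper's actual GA model:

```python
import numpy as np

rng = np.random.default_rng(2)

def forecast(coeffs, p_prev, forecast_prev, trend):
    """Prediction heuristic with adaptive (a) and trend (b) coefficients:
    E[p_t] = a * p_{t-1} + (1 - a) * forecast_{t-1} + b * trend."""
    a, b = coeffs
    return a * p_prev + (1 - a) * forecast_prev + b * trend

def fitness(coeffs, prices):
    """Negative cumulative squared one-step forecast error."""
    f = prices[0]
    err = 0.0
    for t in range(1, len(prices)):
        trend = prices[t - 1] - prices[t - 2] if t >= 2 else 0.0
        f = forecast(coeffs, prices[t - 1], f, trend)
        err += (prices[t] - f) ** 2
    return -err

def ga_step(population, prices, mut_sd=0.05):
    """One generation: binary tournament selection plus Gaussian mutation."""
    scores = np.array([fitness(ind, prices) for ind in population])
    new = []
    for _ in range(len(population)):
        i, j = rng.integers(len(population), size=2)
        parent = population[i] if scores[i] > scores[j] else population[j]
        new.append(np.clip(parent + rng.normal(0.0, mut_sd, 2), 0.0, 1.0))
    return new

prices = 10 + np.sin(np.linspace(0, 6, 60))   # toy oscillating price path
pop = [rng.uniform(0, 1, 2) for _ in range(30)]
for _ in range(40):
    pop = ga_step(pop, prices)
best = max(pop, key=lambda c: fitness(c, prices))
```

    The evolved coefficient vector plays the role of an individual subject's learned heuristic; heterogeneity across the population mirrors the heterogeneous behavior observed in the LtF experiments.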

  20. Virtual parameter-estimation experiments in Bioprocess-Engineering education.

    PubMed

    Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes

    2006-05-01

    Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals: ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to re-design their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculties involved in the two courses are convinced that the experiment environment supports essential learning objectives well.

  1. Probing eukaryotic cell mechanics via mesoscopic simulations

    PubMed Central

    Shang, Menglin; Lim, Chwee Teck

    2017-01-01

    Cell mechanics has proven to be important in many biological processes. Although a number of experimental techniques allow us to study the mechanical properties of cells, there is still a lack of understanding of the role each sub-cellular component plays during cell deformation. We present a new mesoscopic particle-based eukaryotic cell model which explicitly describes the cell membrane, nucleus and cytoskeleton. We employ the Dissipative Particle Dynamics (DPD) method, which provides us with a unified framework for modeling a cell and its interactions in the flow. Data from micropipette aspiration experiments were used to define model parameters. The model was validated using data from microfluidic experiments. The validated model was then applied to study the impact of the sub-cellular components on the cell viscoelastic response in micropipette aspiration and microfluidic experiments. PMID:28922399

  2. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops

    NASA Astrophysics Data System (ADS)

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  3. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops.

    PubMed

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  4. Numerical simulation of experiments in the Giant Planet Facility

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.

    1979-01-01

    Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.

  5. A Comparison of Metamodeling Techniques via Numerical Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2016-01-01

    This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
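
    As an example of the first technique compared, a least-squares prediction interval for a straight-line metamodel can be computed in closed form. This sketch uses the normal-theory formula with a z critical value rather than a t quantile, for brevity; the function name and data are illustrative:

```python
import numpy as np

def ols_prediction_interval(x, y, x_new, z=1.96):
    """Approximate 95% prediction interval for a new observation at x_new
    under a straight-line least-squares fit y = b0 + b1 * x."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                 # residual variance
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    # Prediction (not confidence) interval: includes the new-observation noise
    se = np.sqrt(s2 * (1.0 + 1.0 / n + (x_new - xbar) ** 2 / sxx))
    yhat = beta[0] + beta[1] * x_new
    return yhat - z * se, yhat + z * se

# Illustrative data from a noisy linear Data Generating Mechanism
rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)
lo, hi = ols_prediction_interval(x, y, 5.0)
```

    The other three techniques in the comparison (Bayesian credible intervals, Gaussian processes, interval predictor models) replace this closed form with their own machinery, which is what drives the differences in computational cost and reliability.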

  6. Experiments in Sound and Structural Vibrations Using an Air-Analog Model Ducted Propulsion System

    DTIC Science & Technology

    2007-08-01

    Final technical report: Experiments in Sound and Structural Vibrations Using an Air-Analog Model Ducted Propulsion System. Prepared by Scott C. Morris, Assistant Professor, Department of Aerospace and Mechanical Engineering. Grant number N00014-1-0522.

  7. Design of experiments for identification of complex biochemical systems with applications to mitochondrial bioenergetics.

    PubMed

    Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K

    2009-01-01

    Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. Experimental perturbations and variables yielding the largest number of parameters above a 5% sensitivity level are presented and discussed.
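
    The design principle, ranking candidate measurements by normalized local sensitivities, can be sketched with central differences. The toy two-output model below is hypothetical and far simpler than the oxidative phosphorylation model in the paper:

```python
import numpy as np

def normalized_sensitivities(model, params, h=1e-6):
    """Central-difference normalized local sensitivities
    S[i, j] = (p_j / y_i) * dy_i/dp_j (assumes nonzero baseline outputs)."""
    p = np.asarray(params, dtype=float)
    y0 = np.asarray(model(p), dtype=float)
    S = np.zeros((y0.size, p.size))
    for j in range(p.size):
        dp = h * max(abs(p[j]), 1.0)
        up, dn = p.copy(), p.copy()
        up[j] += dp
        dn[j] -= dp
        dy = (np.asarray(model(up)) - np.asarray(model(dn))) / (2.0 * dp)
        S[:, j] = dy * p[j] / y0
    return S

# Toy two-output, two-parameter model standing in for measurable reactants
model = lambda p: [p[0] * 2.0, p[1] * 4.0]
S = normalized_sensitivities(model, [0.5, 3.0])
# Parameters with any sensitivity above a 5% threshold count as identifiable
identifiable = np.any(np.abs(S) > 0.05, axis=0)
```

    Candidate experimental perturbations can then be ranked by how many parameter columns they push above the threshold, which is the selection criterion the abstract describes.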

  8. Aerogel Algorithm for Shrapnel Penetration Experiments

    NASA Astrophysics Data System (ADS)

    Tokheim, R. E.; Erlich, D. C.; Curran, D. R.; Tobin, M.; Eder, D.

    2004-07-01

    To aid in assessing shrapnel produced by laser-irradiated targets, we have performed shrapnel collection "BB gun" experiments in aerogel and have developed a simple analytical model for deceleration of the shrapnel particles in the aerogel. The model is similar in approach to that of Anderson and Ahrens (J. Geophys. Res., 99 El, 2063-2071, Jan. 1994) and accounts for drag, aerogel compaction heating, and the velocity threshold for shrapnel ablation due to conductive heating. Model predictions are correlated with the BB gun results at impact velocities up to a few hundred m/s and with NASA data for impact velocities up to 6 km/s. The model shows promising agreement with the data and will be used to plan and interpret future experiments.
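
    A drag-only version of such a deceleration model (omitting the compaction-heating and ablation-threshold terms the paper includes) reduces to m dv/dx = -0.5 * rho_aerogel * Cd * A * v, which can be integrated directly. The material numbers below are illustrative:

```python
import math

def penetration_depth(v0, diameter, rho_p, rho_aerogel, cd=1.0,
                      v_stop=1.0, dx=1e-4):
    """Euler-integrate the drag-only deceleration law
    m dv/dx = -0.5 * rho_aerogel * cd * A * v
    (from m dv/dt = -0.5 * rho * cd * A * v**2) until the particle
    slows below v_stop. Returns the stopping depth in metres."""
    r = diameter / 2.0
    area = math.pi * r ** 2
    mass = rho_p * (4.0 / 3.0) * math.pi * r ** 3
    k = 0.5 * rho_aerogel * cd * area / mass   # inverse length scale
    v, x = v0, 0.0
    while v > v_stop:
        v += -k * v * dx   # dv = -(k * v) dx
        x += dx
    return x

# Hypothetical steel sphere into low-density aerogel (SI units)
depth = penetration_depth(v0=100.0, diameter=4.5e-3,
                          rho_p=7800.0, rho_aerogel=50.0)
```

    The closed-form solution of this law is x_stop = ln(v0 / v_stop) / k, so the numeric integration mainly serves as scaffolding onto which compaction and ablation terms could be added.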

  9. Flow-induced Flutter of Heart Valves: Experiments with Canonical Models

    NASA Astrophysics Data System (ADS)

    Dou, Zhongwang; Seo, Jung-Hee; Mittal, Rajat

    2017-11-01

    To better understand the hemodynamics associated with valvular function in health and disease, the flow-induced flutter of heart valve leaflets is studied using benchtop experiments with canonical valve models. A simple experimental model with flexible leaflets is constructed, and a pulsatile pump drives the flow through the leaflets. We quantify the leaflet dynamics using digital image analysis and characterize the flow around the leaflets using particle image velocimetry. Experiments are conducted over a wide range of flow and leaflet parameters, and the data are curated for use as a benchmark for validating computational fluid-structure interaction models. This work was supported by NSF Grants IIS-1344772 and CBET-1511200 and NSF XSEDE Grant TG-CTS100002.

  10. Electromagnetic sunscreen model: design of experiments on particle specifications.

    PubMed

    Lécureux, Marie; Deumié, Carole; Enoch, Stefan; Sergent, Michelle

    2015-10-01

    We report a numerical study on sunscreen design and optimization. Thanks to the combined use of electromagnetic modeling and design of experiments, we are able to screen the most relevant parameters of mineral filters and to optimize sunscreens. Several electromagnetic modeling methods are used depending on the type of particles, density of particles, etc. Both the sun protection factor (SPF) and the UVB/UVA ratio are considered. We show that the design of experiments' model should include interactions between materials and other parameters. We conclude that the material of the particles is a key parameter for the SPF and the UVB/UVA ratio. Among the materials considered, none is optimal for both. The SPF is also highly dependent on the size of the particles.
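    A two-level full-factorial sketch with an interaction column, in the spirit of the design-of-experiments screening described above; the factor names, coded responses, and effect sizes are invented for illustration:

    ```python
    import itertools

    # Coded (-1/+1) two-level design over three hypothetical filter factors.
    factors = ["material", "particle_size", "particle_density"]
    design = list(itertools.product([-1, +1], repeat=len(factors)))

    def effect(column, responses):
        """Main or interaction effect: mean response at +1 minus mean at -1."""
        hi = [r for c, r in zip(column, responses) if c == +1]
        lo = [r for c, r in zip(column, responses) if c == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    mat = [row[0] for row in design]
    size = [row[1] for row in design]
    mat_x_size = [a * b for a, b in zip(mat, size)]  # interaction column

    # Synthetic SPF-like responses with a genuine material-by-size interaction:
    responses = [10 + 3*m + 1*s + 2*ms
                 for m, s, ms in zip(mat, size, mat_x_size)]
    ```

    With ±1 coding each effect equals twice the underlying coefficient, and the material-by-size interaction is only recoverable because its column is included in the model, which is the abstract's point about material-parameter interactions.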

  11. Bayesian analysis of non-linear differential equation models with application to a gut microbial ecosystem.

    PubMed

    Lawson, Daniel J; Holtrop, Grietje; Flint, Harry

    2011-07-01

    Process models specified by non-linear dynamic differential equations contain many parameters, which often must be inferred from a limited amount of data. We discuss a hierarchical Bayesian approach that combines data from multiple related experiments in a meaningful way, permitting more powerful inference than treating each experiment as independent. The approach is illustrated with a simulation study and example data from experiments replicating aspects of the human gut microbial ecosystem. A predictive model is obtained that propagates the prediction uncertainty caused by uncertainty in the parameters, and we extend the model to capture situations of interest that cannot easily be studied experimentally.
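    A toy partial-pooling sketch of how a shared prior combines related experiments; this uses a conjugate normal model with known variances rather than the paper's non-linear ODE setting, and all numbers are illustrative:

    ```python
    def pooled_posterior_means(estimates, variances, prior_mean, prior_var):
        """Posterior mean of each experiment's parameter under a shared
        Normal(prior_mean, prior_var) prior: noisy estimates are shrunk
        toward the group mean, more strongly when their variance is large."""
        out = []
        for y, v in zip(estimates, variances):
            w = prior_var / (prior_var + v)  # weight placed on the data
            out.append(w * y + (1.0 - w) * prior_mean)
        return out

    # Three related experiments, the middle one much noisier:
    post = pooled_posterior_means([2.0, 5.0, 2.4], [1.0, 9.0, 1.0],
                                  prior_mean=2.0, prior_var=1.0)
    ```

    The noisy middle experiment is pulled strongly toward the prior mean while the precise ones move little, which is the sense in which hierarchical pooling borrows strength across experiments.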

  12. A predictive model for the tokamak density limit

    DOE PAGES

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; ...

    2016-07-28

    We reproduce the Greenwald density limit in all tokamak experiments using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and explain the density limit. Strong destabilization of the tearing mode, due to an imbalance between local Ohmic heating and radiative cooling in the island, predicts the density limit within a few percent. Furthermore, we find that the density limit is a local edge limit that depends only weakly on impurity density. Our results are robust to substantial variation of the model parameters within the range of experiments.
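    For reference, the empirical Greenwald limit that the model above reproduces has the standard form n_G = I_p / (pi * a^2), with n_G in units of 10^20 m^-3, plasma current I_p in MA, and minor radius a in meters (the example discharge below is illustrative):

    ```python
    import math

    def greenwald_density(ip_ma, minor_radius_m):
        """Greenwald density limit in units of 1e20 m^-3."""
        return ip_ma / (math.pi * minor_radius_m**2)

    # A 3 MA discharge with a 1 m minor radius:
    n_g = greenwald_density(3.0, 1.0)  # ~0.95, i.e. ~0.95e20 m^-3
    ```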

  13. Can training improve the quality of inferences made by raters in competency modeling? A quasi-experiment.

    PubMed

    Lievens, Filip; Sanchez, Juan I

    2007-05-01

    A quasi-experiment was conducted to investigate the effects of frame-of-reference training on the quality of competency modeling ratings made by consultants. Human resources consultants from a large consulting firm were randomly assigned to either a training or a control condition. The discriminant validity, interrater reliability, and accuracy of the competency ratings were significantly higher in the training group than in the control group. Further, the discriminant validity and interrater reliability of competency inferences were highest among an additional group of trained consultants who also had competency modeling experience. Together, these results suggest that procedural interventions such as rater training can significantly enhance the quality of competency modeling.

  14. Non-integer viscoelastic constitutive law to model soft biological tissues to in-vivo indentation.

    PubMed

    Demirci, Nagehan; Tönük, Ergin

    2014-01-01

    During the last decades, derivatives and integrals of non-integer order have become more commonly used to describe the constitutive behavior of various viscoelastic materials, including soft biological tissues. Compared to integer-order constitutive relations, non-integer-order viscoelastic material models of soft biological tissues capture a wider range of the viscoelastic behavior observed in experiments. Although integer-order models may yield comparably accurate results, non-integer-order models require fewer parameters to be identified, and they additionally describe an intermediate material that can be monotonically and continuously adjusted between an ideal elastic solid and an ideal viscous fluid. In this work, starting with some preliminaries on non-integer (fractional) calculus, the "spring-pot" (an intermediate mechanical element between a solid and a fluid) and the non-integer-order three-element (Zener) solid model are introduced, and finally a user-defined large-strain non-integer-order viscoelastic constitutive model is constructed for use in finite element simulations. Using the constitutive equation developed, soft tissue material identification was performed by an inverse finite element method applied to in vivo indentation experiments. The results indicate that material coefficients obtained from relaxation experiments, when optimized with creep experimental data, could simulate relaxation, creep, and cyclic loading and unloading experiments accurately. Non-integer-calculus viscoelastic constitutive models, which have a physical interpretation and model experimental data accurately, are a good alternative to classical phenomenological viscoelastic constitutive equations.
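    A numerical sketch of the spring-pot element described above, sigma(t) = E * tau^alpha * D^alpha eps(t), with the fractional derivative discretized by the Grunwald-Letnikov scheme (the element constants and strain history are illustrative):

    ```python
    def gl_weights(alpha, n):
        """Grunwald-Letnikov weights w_j = (-1)**j * binom(alpha, j),
        computed by the standard recurrence."""
        w = [1.0]
        for j in range(1, n + 1):
            w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
        return w

    def springpot_stress(eps, dt, alpha, modulus=1.0, tau=1.0):
        """Stress history of a spring-pot for a sampled strain history eps."""
        w = gl_weights(alpha, len(eps) - 1)
        scale = modulus * tau**alpha / dt**alpha
        return [scale * sum(w[j] * eps[n - j] for j in range(n + 1))
                for n in range(len(eps))]

    # alpha = 0 recovers an elastic spring, alpha = 1 a viscous dashpot,
    # and intermediate alpha interpolates continuously between the two.
    ramp = [0.0, 0.1, 0.2, 0.3]
    elastic = springpot_stress(ramp, dt=0.1, alpha=0.0)
    viscous = springpot_stress(ramp, dt=0.1, alpha=1.0)
    ```

    The two limiting cases make the "intermediate material" idea concrete: at alpha = 0 the stress tracks the strain itself, while at alpha = 1 it tracks the strain rate.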

  15. Discovering Mendeleev's Model.

    ERIC Educational Resources Information Center

    Sterling, Donna

    1996-01-01

    Presents an activity that introduces the historical developments in science that led to the discovery of the periodic table and lets students experience scientific discovery firsthand. Enables students to learn about patterns among the elements and experience how scientists analyze data to discover patterns and build models. (JRH)

  16. Jack Rabbit Pretest Data For TATB Based IHE Model Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, M M; Strand, O T; Bosson, S T

    The Jack Rabbit Pretest series consisted of 5 focused hydrodynamic experiments: 2021E PT3, PT4, PT5, PT6, and PT7. They were fired in March and April of 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory, Livermore, California. These experiments measured dead-zone formation and impulse gradients created during the detonation of TATB based insensitive high explosive. This document contains reference data tables for all 5 experiments. These data tables include: (1) measured laser velocimetry of the experiment diagnostic plate; (2) diagnostic plate profile contours computed through velocity integration; (3) center-axis pressures computed through velocity differentiation. All times are in microseconds, referenced from detonator circuit current start. All dimensions are in millimeters. Schematic axisymmetric cross sections are shown for each experiment. These schematics detail the materials used and the dimensions of the experiment and its component parts, which should allow anyone to evaluate their TATB based insensitive high explosive detonation model against experiment. These data are particularly relevant for examining reactive-flow detonation model predictions in computational simulations of dead-zone formation and the resulting impulse gradients produced by detonating TATB based explosive.

  17. Assessment of the simulation of Indian Ocean Dipole in the CESM—Impacts of atmospheric physics and model resolution

    NASA Astrophysics Data System (ADS)

    Yao, Zhixiong; Tang, Youmin; Chen, Dake; Zhou, Lei; Li, Xiaojing; Lian, Tao; Ul Islam, Siraj

    2016-12-01

    This study examines the possible impacts of coupling processes on simulations of the Indian Ocean Dipole (IOD). Emphasis is placed on the atmospheric model resolution and physics. Five experiments were conducted for this purpose: one control run of the ocean-only model and four coupled experiments using two different versions of the Community Atmosphere Model (CAM4 and CAM5) at two different resolutions. The results show that the control run could effectively simulate various features of the IOD. The coupled experiments run at the higher resolution yielded a more realistic IOD period and intensity than their counterparts at the lower resolution. The coupled experiments using CAM5 generally showed better simulation skill in the tropical Indian Ocean SST climatology and phase-locking than those using CAM4, but the wind anomalies were stronger and the IOD period was longer in the former experiments than in the latter. In all coupled experiments, the IOD intensity was much stronger than the observed intensity, which is attributable to the wind-thermocline depth feedback and the thermocline depth-subsurface temperature feedback. The CAM5 physics seems beneficial for the simulation of summer rainfall over the eastern equatorial Indian Ocean, and the CAM4 physics tends to produce smaller biases over the western equatorial Indian Ocean, whereas the higher resolution tends to generate unrealistically strong meridional winds. The IOD-ENSO relationship was captured reasonably well in the coupled experiments, with improvements in CAM5 relative to CAM4. However, the teleconnections of the IOD-Indian summer monsoon and ENSO-Indian summer monsoon were not realistically simulated in any of the experiments.

  18. An Evaluation of the FLAG Friction Model frictmultiscale2 using the Experiments of Juanicotena and Szarynski

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zocher, Marvin Anthony; Hammerberg, James Edward

    The experiments of Juanicotena and Szarynski, namely T101, T102, and T105 are modeled for purposes of gaining a better understanding of the FLAG friction model frictmultiscale2. This exercise has been conducted as a first step toward model validation. It is shown that with inclusion of the friction model in the numerical analysis, the results of Juanicotena and Szarynski are predicted reasonably well. Without the friction model, simulation results do not match the experimental data nearly as well. Suggestions for follow-on work are included.

  19. Uranium Hydride Nucleation and Growth Model FY'16 ESC Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Mary Ann; Richards, Andrew Walter; Holby, Edward F.

    2016-12-20

    Uranium hydride corrosion is of great interest to the nuclear industry. Uranium reacts with water and/or hydrogen to form uranium hydride which adversely affects material performance. Hydride nucleation is influenced by thermal history, mechanical defects, oxide thickness, and chemical defects. Information has been gathered from past hydride experiments to formulate a uranium hydride model to be used in a Canned Subassembly (CSA) lifetime prediction model. This multi-scale computer modeling effort started in FY’13, and the fourth generation model is now complete. Additional high-resolution experiments will be run to further test the model.

  20. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy

    2010-01-01

    This slide presentation reviews the development, validation, and application of the model to the Lunar Landing Vehicle. The model, the Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used here to model the cryogenic propellant conditions of the Altair lunar lander. CPPPO was validated via comparison to an existing analytic model (ROCETS), a flight experiment, and ground experiments. The model was used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.
