Sample records for OBE model

  1. Hydrologic data for the Obed River watershed, Tennessee

    USGS Publications Warehouse

    Knight, Rodney R.; Wolfe, William J.; Law, George S.

    2014-01-01

    The Obed River watershed drains a 520-square-mile area of the Cumberland Plateau physiographic region in the Tennessee River basin. The watershed is underlain by conglomerate, sandstone, and shale of Pennsylvanian age, which overlie Mississippian-age limestone. The larger creeks and rivers of the Obed River system have eroded gorges through the conglomerate and sandstone into the deeper shale. The largest gorges are up to 400 feet deep and are protected by the Wild and Scenic Rivers Act as part of the Obed Wild and Scenic River, which is managed by the National Park Service. The growing communities of Crossville and Crab Orchard, Tennessee, are located upstream of the gorge areas of the Obed River watershed. The cities used about 5.8 million gallons of water per day for drinking water in 2010 from Lake Holiday and Stone Lake in the Obed River watershed and Meadow Park Lake in the Caney Fork River watershed. The city of Crossville operates a wastewater treatment plant that releases an annual average of about 2.2 million gallons per day of treated effluent to the Obed River, representing as much as 10 to 40 percent of the monthly average streamflow of the Obed River near Lancing about 35 miles downstream, during summer and fall. During the past 50 years (1960–2010), several dozen tributary impoundments and more than 2,000 small farm ponds have been constructed in the Obed River watershed. Synoptic streamflow measurements indicate a tendency towards dampened high flows and slightly increased low flows as the percentage of basin area controlled by impoundments increases.

  2. Alluvial Bars of the Obed Wild and Scenic River, Tennessee

    USGS Publications Warehouse

    Wolfe, W.J.; Fitch, K.C.; Ladd, D.E.

    2007-01-01

    In 2004, the U.S. Geological Survey (USGS) and the National Park Service (NPS) initiated a reconnaissance study of alluvial bars along the Obed Wild and Scenic River (Obed WSR), in Cumberland and Morgan Counties, Tennessee. The study was partly driven by concern that trapping of sand by upstream impoundments might threaten rare, threatened, or endangered plant habitat by reducing the supply of sediment to the alluvial bars. The objectives of the study were to: (1) develop a preliminary understanding of the distribution, morphology, composition, stability, and vegetation structure of alluvial bars along the Obed WSR, and (2) determine whether evidence of human alteration of sediment dynamics in the Obed WSR warrants further, more detailed examination. This report presents the results of the reconnaissance study of alluvial bars along the Obed River, Clear Creek, and Daddys Creek in the Obed WSR. The report is based on: (1) field-reconnaissance visits by boat to 56 alluvial bars along selected reaches of the Obed River and Clear Creek; (2) analysis of aerial photographs, topographic and geologic maps, and other geographic data to assess the distribution of alluvial bars in the Obed WSR; (3) surveys of topography, surface particle size, vegetation structure, and ground cover on three selected alluvial bars; and (4) analysis of hydrologic records.

  3. Traditionalist Christians and OBE: What's the Problem?

    ERIC Educational Resources Information Center

    Burron, Arnold

    1994-01-01

    Traditionalist Christians are concerned about OBE's affective objectives and believe that schools indoctrinate children with undesirable social, political, and economic values. Environmentalism, globalism, and multiculturalism are supplanting ideas about prudent resource utilization, patriotism, and America the melting pot. Schools should offer…

  4. OBE EAP-EOP Model: A Proposed Instructional Design in English for Specific Purposes

    ERIC Educational Resources Information Center

    Hernandez, Hjalmar Punla

    2016-01-01

    Outcome-Based Education (OBE) demands innovative Instructional Designs (ID) in the 21st century. As a descriptive-qualitative study, this paper aimed to (1) identify the ID used in the English language curricula of a private tertiary-level institution in Southern Luzon, Philippines, and (2) determine the elements that the ID of the English…

  5. Pulsed-field ionization zero electron kinetic energy spectrum of the ground electronic state of BeOBe+.

    PubMed

    Antonov, Ivan O; Barker, Beau J; Heaven, Michael C

    2011-01-28

    The ground electronic state of BeOBe(+) was probed using the pulsed-field ionization zero electron kinetic energy photoelectron technique. Spectra were rotationally resolved and transitions to the zero-point level, the symmetric stretch fundamental and first two bending vibrational levels were observed. The rotational state symmetry selection rules confirm that the ground electronic state of the cation is (2)Σ(g)(+). Detachment of an electron from the HOMO of neutral BeOBe results in little change in the vibrational or rotational constants, indicating that this orbital is nonbonding in nature. The ionization energy of BeOBe [65480(4) cm(-1)] was refined over previous measurements. Results from recent theoretical calculations for BeOBe(+) (multireference configuration interaction) were found to be in good agreement with the experimental data.

  6. Why Deming and OBE Don't Mix.

    ERIC Educational Resources Information Center

    Holt, Maurice

    1995-01-01

    The central idea in W. Edwards Deming's approach to quality management is the need to improve process. Outcome-based education's central defect is its failure to address process. Deming would reject OBE along with management-by-objectives. Education is not a product defined by specific output measures, but a process to develop the mind. (MLH)

  7. Exploring in teaching mode of Optical Fiber Sensing Technology outcomes-based education (OBE)

    NASA Astrophysics Data System (ADS)

    Fu, Guangwei; Fu, Xinghu; Zhang, Baojun; Bi, Weihong

    2017-08-01

    Combining the characteristics of the discipline with the OBE mode, and addressing the low learning enthusiasm of senior students for major required courses, the course of optical fiber sensing was chosen as a demonstration for teaching-mode reform. Following the principle of "theory as the base, focus on application, highlight practice," we emphasize introducing the latest scientific research achievements and current development trends, highlighting practicability and practicality. Through observational learning and a course project, students carry out innovative project design and implementation related to practical problems in the science and engineering of this course.

  8. An allocation of undiscovered oil and gas resources to Big South Fork National Recreation Area and Obed Wild and Scenic River, Kentucky and Tennessee

    USGS Publications Warehouse

    Schenk, Christopher J.; Klett, Timothy R.; Charpentier, Ronald R.; Cook, Troy A.; Pollastro, Richard M.

    2006-01-01

    The U.S. Geological Survey (USGS) estimated volumes of undiscovered oil and gas resources that may underlie Big South Fork National Recreation Area and Obed Wild and Scenic River in Kentucky and Tennessee. Applying the results of existing assessments of undiscovered resources from three assessment units in the Appalachian Basin Province and three plays in the Cincinnati Arch Province that include these land parcels, the USGS allocated approximately (1) 16 billion cubic feet of gas, 15 thousand barrels of oil, and 232 thousand barrels of natural gas liquids to Big South Fork National Recreation Area; and (2) 0.5 billion cubic feet of gas, 0.6 thousand barrels of oil, and 10 thousand barrels of natural gas liquids to Obed Wild and Scenic River. These estimated volumes of undiscovered resources represent potential volumes in new undiscovered fields, but do not include potential additions to reserves within existing fields.

  9. Relativistic proton-nucleus scattering and one-boson-exchange models

    NASA Technical Reports Server (NTRS)

    Maung, Khin Maung; Gross, Franz; Tjon, J. A.; Townsend, L. W.; Wallace, S. J.

    1993-01-01

    Relativistic p-(Ca-40) elastic scattering observables are calculated using four sets of relativistic NN amplitudes obtained from different one-boson-exchange (OBE) models. The first two sets are based upon a relativistic equation in which one particle is on mass shell and the other two sets are obtained from a quasipotential reduction of the Bethe-Salpeter equation. Results at 200, 300, and 500 MeV are presented for these amplitudes. Differences between the predictions of these models provide a study of the uncertainty in constructing Dirac optical potentials from OBE-based NN amplitudes.

  10. Modeling energy expenditure in children and adolescents using quantile regression

    USDA-ARS?s Scientific Manuscript database

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). The study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...

  11. Initial environmental impacts of the Obed Mountain coal mine process water spill into the Athabasca River (Alberta, Canada).

    PubMed

    Cooke, Colin A; Schwindt, Colin; Davies, Martin; Donahue, William F; Azim, Ekram

    2016-07-01

    On October 31, 2013, a catastrophic release of approximately 670,000 m(3) of coal process water occurred as the result of the failure of the wall of a post-processing settling pond at the Obed Mountain Mine near Hinton, Alberta. A highly turbid plume entered the Athabasca River approximately 20 km from the mine, markedly altering the chemical composition of the Athabasca River as it flowed downstream. The released plume traveled approximately 1,100 km downstream to the Peace-Athabasca Delta in approximately four weeks, and was tracked both visually and using real-time measures of river water turbidity within the Athabasca River. The plume initially contained high concentrations of nutrients (nitrogen and phosphorus), metals, and polycyclic aromatic hydrocarbons (PAHs); some Canadian Council of Ministers of the Environment (CCME) guidelines were exceeded in the initial days after the spill. Subsequent characterization of the source material revealed elevated concentrations of both metals (arsenic, lead, mercury, selenium, and zinc) and PAHs (acenaphthene, fluorene, naphthalene, phenanthrene, and pyrene). While toxicity testing using the released material indicated a relatively low or short-lived acute risk to the aquatic environment, some of the water quality and sediment quality variables are known carcinogens and have the potential to exert negative long-term impacts. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Metabolic Characterization of Adults with Binge Eating in the General Population: The Framingham Heart Study

    PubMed Central

    Abraham, Tobin M.; Massaro, Joseph M.; Hoffmann, D. Udo; Yanovski, Jack A.; Fox, Caroline S.

    2014-01-01

    OBJECTIVE To describe the metabolic profile of individuals with objective binge eating (OBE) and to evaluate whether associations between OBE and metabolic risk factors are mediated by body mass index (BMI). DESIGN AND METHODS Participants from the Framingham Heart Study, Third Generation and Omni 2 cohorts (n = 3551, 53.1% women, mean age 46.4 years) were screened for binge eating. We used multivariable-adjusted regression models to examine the associations of OBE with metabolic risk factors. RESULTS The prevalence of OBE was 4.8% in women and 4.9% in men. Compared to non-binge eating, OBE was associated with higher odds of hypertension (OR 1.85, 95% CI 1.32–2.60), hypertriglyceridemia (OR 1.42, 95% CI 1.01–2.01), low HDL (OR 1.70, 95% CI 1.18–2.44), insulin resistance (OR 3.18, 95% CI 2.25–4.50) and metabolic syndrome (OR 2.75, 95% CI 1.94–3.90). Fasting glucose was 7.2 mg/dl higher in those with OBE (p=0.0001). Individuals with OBE had more visceral, subcutaneous and liver fat. Most of these associations were attenuated with adjustment for BMI, with the exception of fasting glucose. CONCLUSIONS Binge eating is associated with a high burden of metabolic risk factors. Much of the associated risk appears to be mediated by BMI, with the exception of fasting glucose. PMID:25136837

  13. A Narrative Synthesis of Women's Out-of-Body Experiences During Childbirth.

    PubMed

    Bateman, Lynda; Jones, Catriona; Jomeen, Julie

    2017-07-01

    Some women have a dissociated, out-of-body experience (OBE) during childbirth, which may be described as seeing the body from above or floating above the body. This review examines this phenomenon using narratives from women who have experienced intrapartum OBEs. A narrative synthesis of qualitative research was employed to systematically synthesize OBE narratives from existing studies. Strict inclusion and exclusion criteria were applied. The included papers were critiqued by 2 of the authors to determine the appropriateness of the narrative synthesis method, procedural transparency, and soundness of the interpretive approach. Women experiencing OBEs during labor and birth report a disembodied state in the presence of stress or trauma. Three forms of OBEs are described: floating above the scene, remaining close to the scene, or full separation of a body part from the main body. Women had clear recall of OBEs, describing the experience and point of occurrence. Women who reported OBEs had experienced current or previous traumatic childbirth, or trauma in a non-birth situation. OBEs as prosaic experiences were not identified. OBEs are part of the lived experience of some women giving birth. The OBEs in this review were trauma related with some women disclosing previous posttraumatic stress disorder (PTSD). It is not evident whether there is a connection between PTSD and OBEs at present, and OBEs may serve as a potential coping mechanism in the presence of trauma. Clinicians should legitimize women's disclosure of OBEs and explore and ascertain their impact, either as a normal coping mechanism or a precursor to perinatal mental illness. Research into the function of OBEs and any relationship to PTSD may assist in early interventions for childbearing women. © 2017 by the American College of Nurse-Midwives.

  14. Outcome-based approach to medical education towards academic programmes accreditation: A review article.

    PubMed

    Mohieldein, Abdelmarouf H

    2017-03-01

    The rapid change worldwide, as a consequence of advances in science and technology, necessitates the graduation of well-qualified graduates who have the appropriate knowledge and skills to fulfill specific work requirements. Hence, redesigning academic models by focusing on educational outcomes has become the target and priority for universities around the world. In this systematic review we collected and retrieved literature using a selection of electronic databases. The objectives of this report are to: (1) provide an overview of the evolution of outcome-based education (OBE), (2) illustrate the philosophy and principles of OBE, (3) list the advantages and benefits of OBE, (4) describe the assessment strategies used in OBE, and (5) discuss the role of teachers and students as key elements. In conclusion, there is growing interest by the Saudi government in providing student-centered education in its institutes of higher education, to graduate students with the necessary knowledge and skills. Moreover, OBE is considered a holistic approach that offers a powerful and appealing way of reforming and managing medical education for mastery in learning and for meeting the prerequisites for local and international accreditation.

  15. Baryon-Baryon Interactions ---Nijmegen Extended-Soft-Core Models---

    NASA Astrophysics Data System (ADS)

    Rijken, T. A.; Nagels, M. M.; Yamamoto, Y.

    We review the Nijmegen extended-soft-core (ESC) models for the baryon-baryon (BB) interactions of the SU(3) flavor-octet of baryons (N, Lambda, Sigma, and Xi). The interactions are studied from the meson-exchange point of view, in the spirit of the Yukawa approach to the nuclear force problem [H. Yukawa, ``On the Interaction of Elementary Particles I'', Proceedings of the Physico-Mathematical Society of Japan 17 (1935), 48], using generalized soft-core Yukawa functions. These interactions are supplemented with (i) multiple-gluon exchange, and (ii) structural effects due to the quark core of the baryons. We present in some detail the most recent extended-soft-core model, henceforth referred to as ESC08, which is the most complete, sophisticated, and successful interaction model. Furthermore, we discuss briefly its predecessor, the ESC04 model [Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044007; Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044008; Th. A. Rijken and Y. Yamamoto, nucl-th/0608074]. For the soft-core one-boson-exchange (OBE) models we refer to the literature [Th. A. Rijken, in Proceedings of the International Conference on Few-Body Problems in Nuclear and Particle Physics, Quebec, 1974, ed. R. J. Slobodrian, B. Cuec and R. Ramavataram (Presses Université Laval, Quebec, 1975), p. 136; Th. A. Rijken, Ph.D. thesis, University of Nijmegen, 1975; M. M. Nagels, Th. A. Rijken and J. J. de Swart, Phys. Rev. D 17 (1978), 768; P. M. M. Maessen, Th. A. Rijken and J. J. de Swart, Phys. Rev. C 40 (1989), 2226; Th. A. Rijken, V. G. J. Stoks and Y. Yamamoto, Phys. Rev. C 59 (1999), 21; V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C 59 (1999), 3009]. All ingredients of these latter models are also part of ESC08, so a description of ESC08 in principle comprises all models so far.
The extended-soft-core (ESC) interactions consist of local- and non-local-potentials due to (i) one-boson-exchanges (OBE), which are the members of nonets of
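    The Yukawa functions named above have at their core the textbook one-boson-exchange form; schematically, for exchange of a single scalar meson of mass m with coupling g (this is the generic form only, not the Nijmegen models' full soft-core expression, which dresses it with Gaussian form factors and spin-isospin structure):

```latex
% Schematic OBE (Yukawa) potential: scalar meson of mass m, coupling g,
% baryon-baryon separation r (natural units, \hbar = c = 1).
V_{\mathrm{OBE}}(r) = -\frac{g^{2}}{4\pi}\,\frac{e^{-mr}}{r}
```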

  16. From Special Education to an Inclusive, Outcomes-Based System.

    ERIC Educational Resources Information Center

    Naicker, Sigamoney

    2001-01-01

    This article discusses shifting from special education to inclusive, outcomes-based education (OBE) in South Africa. It examines why there is a shift toward OBE, different educational paradigms, and shifting from fundamental pedagogy to OBE. Necessary changes are highlighted, and include a shift from classification to using OBE for progression and…

  17. Outcomes Based Education Re-Examined: From Structural Functionalism to Poststructuralism.

    ERIC Educational Resources Information Center

    Capper, Colleen A.; Jamison, Michael T.

    Outcomes Based Education (OBE) is viewed as a drastic break from current educational practices and a means of providing educational success for all students. OBE has also been criticized as a practice that leads to educational inequity. This paper reexamines OBE from a multiparadigm perspective of organizations and educational administration. OBE is based…

  18. Short communication: Promotion of glucagon-like peptide-2 secretion in dairy calves with a bioactive extract from Olea europaea.

    PubMed

    Morrison, S Y; Pastor, J J; Quintela, J C; Holst, J J; Hartmann, B; Drackley, J K; Ipharraguerre, I R

    2017-03-01

    Diarrhea episodes in dairy calves involve profound alterations in the mechanism controlling gut barrier function that ultimately compromise intestinal permeability to macromolecules, including pathogenic bacteria. Intestinal dysfunction models suggest that a key element of intestinal adaptation during the neonatal phase is the nutrient-induced secretion of glucagon-like peptide (GLP)-2 and associated effects on mucosal cell proliferation, barrier function, and inflammatory response. Bioactive molecules found in Olea europaea have been shown to induce the release of regulatory peptides from model enteroendocrine cells. The ability to enhance GLP-2 secretion via the feeding of putative GLP-2 secretagogues is untested in newborn calves. The objectives of this study were to determine whether feeding a bioactive extract from Olea europaea (OBE) mixed in the milk replacer (1) can stimulate GLP-2 secretion beyond the response elicited by enteral nutrients and, thereby, (2) improve intestinal permeability and animal growth as well as (3) reduce the incidence of diarrhea in preweaning dairy calves. Holstein heifer calves (n = 60) were purchased, transported to the research facility, and blocked by body weight and total serum protein and assigned to 1 of 3 treatments. Treatments were control (CON), standard milk replacer (MR) and ad libitum starter; CON plus OBE added into MR at 30 mg/kg of body weight (OBE30); and CON plus OBE added into MR at 60 mg/kg of body weight (OBE60). The concentration of GLP-2 was measured at the end of wk 2. Intestinal permeability was measured at the onset of the study and the end of wk 2 and 6, with lactulose and d-mannitol as markers. Treatments did not affect calf growth and starter intake. Compared with CON, administration of OBE60 increased the nutrient-induced response in GLP-2 by about 1 fold and reduced MR intake during the second week of study. 
Throughout the study, however, all calves had compromised intestinal permeability and a high

  19. Effect of weak combined static and extremely low-frequency alternating magnetic fields on spatial memory and brain amyloid-β in two animal models of Alzheimer's disease.

    PubMed

    Bobkova, Natalia V; Novikov, Vadim V; Medvinskaya, Natalia I; Aleksandrova, Irina Y; Nesterova, Inna V; Fesenko, Eugenii E

    2018-05-17

    The subchronic effect of a weak combined magnetic field (MF), produced by superimposing a constant component of 42 µT and an alternating MF of 0.08 µT that was the sum of two frequencies, 4.38 and 4.88 Hz, was studied in olfactory-bulbectomized (OBE) and transgenic Tg (APPswe, PSEN1) mice, used as animal models of sporadic and heritable Alzheimer's disease (AD), respectively. Spatial memory was tested in a Morris water maze on the day after completion of training trials, with the hidden platform removed. The amyloid-β (Aβ) level was determined in extracts of the cortex and hippocampus of mice using a specific DOT analysis, while the number and dimensions of amyloid plaques were detected after staining with thioflavin S in transgenic animals. Exposure to the MFs (4 h/day for 10 days) decreased the Aβ level in the brains of OBE mice and reduced the number of Aβ plaques in the cortex and hippocampus of Tg animals. However, memory improvement was revealed in Tg mice only, not in the OBE animals. We suggest that, to prevent Aβ accumulation, MFs could be applied at an early stage of neuronal degeneration in AD and in other diseases with amyloid protein deposition in other tissues.

  20. Subjective and objective binge eating in relation to eating disorder symptomatology, depressive symptoms, and self-esteem among treatment-seeking adolescents with bulimia nervosa.

    PubMed

    Fitzsimmons-Craft, Ellen E; Ciao, Anna C; Accurso, Erin C; Pisetsky, Emily M; Peterson, Carol B; Byrne, Catherine E; Le Grange, Daniel

    2014-07-01

    This study investigated the importance of the distinction between objective (OBE) and subjective binge eating (SBE) among 80 treatment-seeking adolescents with bulimia nervosa. We explored relationships among OBEs, SBEs, eating disorder (ED) symptomatology, depression, and self-esteem using two approaches. Group comparisons showed that OBE and SBE groups did not differ on ED symptoms or self-esteem; however, the SBE group had significantly greater depression. Examining continuous variables, OBEs (not SBEs) accounted for significant unique variance in global ED pathology, vomiting, and self-esteem. SBEs (not OBEs) accounted for significant unique variance in restraint and depression. Both OBEs and SBEs accounted for significant unique variance in eating concern; neither accounted for unique variance in weight/shape concern, laxative use, diuretic use, or driven exercise. Loss of control, rather than amount of food, may be most important in defining binge eating. Additionally, OBEs may indicate broader ED pathology, while SBEs may indicate restrictive/depressive symptomatology. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.

  1. Subjective and Objective Binge Eating in Relation to Eating Disorder Symptomatology, Depressive Symptoms, and Self-Esteem Among Treatment-Seeking Adolescents with Bulimia Nervosa

    PubMed Central

    Fitzsimmons-Craft, Ellen E.; Ciao, Anna C.; Accurso, Erin C.; Pisetsky, Emily M.; Peterson, Carol B.; Byrne, Catherine E.; Le Grange, Daniel

    2014-01-01

    This study investigated the importance of the distinction between objective (OBE) and subjective binge eating (SBE) among 80 treatment-seeking adolescents with bulimia nervosa (BN). We explored relationships among OBEs, SBEs, eating disorder (ED) symptomatology, depression, and self-esteem using two approaches. Group comparisons showed that OBE and SBE groups did not differ on ED symptoms or self-esteem; however, the SBE group had significantly greater depression. Examining continuous variables, OBEs (not SBEs) accounted for significant unique variance in global ED pathology, vomiting, and self-esteem. SBEs (not OBEs) accounted for significant unique variance in restraint and depression. Both OBEs and SBEs accounted for significant unique variance in eating concern; neither accounted for unique variance in weight/shape concern, laxative use, diuretic use, or driven exercise. Loss of control, rather than amount of food, may be most important in defining binge eating. Additionally, OBEs may indicate broader ED pathology while SBEs may indicate restrictive/depressive symptomatology. PMID:24852114

  2. Beyond Traditional Outcome-Based Education.

    ERIC Educational Resources Information Center

    Spady, William G.; Marshall, Kit J.

    1991-01-01

    Transitional outcome-based education lies in the twilight zone between traditional subject matter curriculum structures and planning processes and the future-role priorities inherent in transformational OBE. Districts go through incorporation, integration, and redefinition stages in implementing transitional OBE. Transformational OBE's guiding…

  3. Understanding Outcome-Based Education Changes in Teacher Education: Evaluation of a New Instrument with Preliminary Findings

    ERIC Educational Resources Information Center

    Deneen, Christopher; Brown, Gavin T. L.; Bond, Trevor G.; Shroff, Ronnie

    2013-01-01

    Outcome-based education (OBE) is a current initiative in Hong Kong universities, with widespread backing by governments and standards bodies. However, study of students' perceptions of OBE and validation of understanding these perceptions are lacking. This paper reports on the validation of an OBE-specific instrument and resulting preliminary…

  4. News: Event: UK to host Science on Stage; Travel: Gaining a more global perspective on physics; Event: LIYSF asks students to 'cross scientific boundaries'; Competition: Young Physicists' tournament is international affair; Conference: Learning in a changing world of new technologies; Event: Nordic physical societies meet in Lund; Conference: Tenth ESERA conference to publish ebook; Meeting: Rugby meeting brings teachers together; Note: Remembering John L Lewis OBE

    NASA Astrophysics Data System (ADS)

    2013-03-01


  5. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence.

    PubMed

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-10-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education.

  6. Higher order thinking skills competencies required by outcomes-based education from learners.

    PubMed

    Chabeli, M M

    2006-08-01

    Outcomes-Based Education (OBE) brought about a significant paradigm shift in the education and training of learners in South Africa. OBE requires a shift from focusing on teacher input (instructional offerings or syllabuses expressed in terms of content) to focusing on learner outcomes. OBE is moving away from 'transmission' models to constructivist, learner-centered models that treat learning as an active process (Nieburh, 1996:30). Teachers act as facilitators and mediators of learning (Norms and Standards, Government Gazette vol 415, no 20844 of 2000). Facilitators are responsible for creating an environment conducive to learners constructing their own knowledge, skills, and values through interaction (Peters, 2000). The first critical cross-field outcome accepted by the South African Qualifications Authority (SAQA) is that learners should be able to identify and solve problems by using critical and creative thinking skills. This paper explores, from a theoretical perspective, some higher-order thinking skills that OBE requires of learners, such as critical thinking, reflective thinking, creative thinking, dialogic/dialectic thinking, decision making, problem solving, and emotional intelligence, and their implications for facilitating teaching and learning. The philosophical underpinning of these higher-order thinking skills is described to give direction to the study. It is recommended that a study focusing on the assessment of these intellectual skills be undertaken; it may be qualitative, quantitative, or mixed-methods in nature (Creswell, 2005).

  7. Learning outcomes as a tool to assess progression.

    PubMed

    Harden, Ronald M

    2007-09-01

    In the move to outcome-based education (OBE), much of the attention has focussed on the exit learning outcomes: the outcomes expected of a student at the end of a course of studies. It is important also to plan for and monitor students' progression to the exit outcomes. A model is described for considering this progression through the phases of undergraduate education. Four dimensions are included: increasing breadth, increasing depth, increasing utility and increasing proficiency. The model can also be used to develop a blueprint for a more seamless link between undergraduate education, postgraduate training and continuing professional development. The progression model recognises the complexities of medical practice and medical education. It supports the move to student-centred and adaptive approaches to learning in an OBE environment.

  8. Fractionating the unitary notion of dissociation: disembodied but not embodied dissociative experiences are associated with exocentric perspective-taking

    PubMed Central

    Braithwaite, Jason J.; James, Kelly; Dewe, Hayley; Medford, Nick; Takahashi, Chie; Kessler, Klaus

    2013-01-01

    It has been argued that hallucinations which appear to involve shifts in egocentric perspective (e.g., the out-of-body experience, OBE) reflect specific biases in exocentric perspective-taking processes. Via a newly devised perspective-taking task, we examined whether such biases in perspective-taking were present in relation to specific dissociative anomalous body experiences (ABE) – namely the OBE. Participants also completed the Cambridge Depersonalization Scale (CDS; Sierra and Berrios, 2000) which provided measures of additional embodied ABE (unreality of self) and measures of derealization (unreality of surroundings). There were no reliable differences in the level of ABE, emotional numbing, and anomalies in sensory recall reported between the OBE and control group as measured by the corresponding CDS subscales. In contrast, the OBE group did provide significantly elevated measures of derealization (“alienation from surroundings” CDS subscale) relative to the control group. At the same time we also found that the OBE group was significantly more efficient at completing all aspects of the perspective-taking task relative to controls. Collectively, the current findings support fractionating the typically unitary notion of dissociation by proposing a distinction between embodied dissociative experiences and disembodied dissociative experiences – with only the latter being associated with exocentric perspective-taking mechanisms. Our findings – obtained with an ecologically valid task and a homogeneous OBE group – also call for a re-evaluation of the relationship between OBEs and perspective-taking in terms of facilitated disembodied experiences. PMID:24198776

  9. The body unbound: vestibular-motor hallucinations and out-of-body experiences.

    PubMed

    Cheyne, J Allan; Girard, Todd A

    2009-02-01

    Among the varied hallucinations associated with sleep paralysis (SP), out-of-body experiences (OBEs) and vestibular-motor (V-M) sensations represent a distinct factor. Recent studies of direct stimulation of vestibular cortex report a virtually identical set of bodily-self hallucinations. Both programs of research agree on numerous details of OBEs and V-M experiences and suggest similar hypotheses concerning their association. In the present study, self-report data from two on-line surveys of SP-related experiences were employed to assess hypotheses concerning the causal structure of relations among V-M experiences and OBEs during SP episodes. The results complement neurophysiological evidence and are consistent with the hypothesis that OBEs represent a breakdown in the normal binding of bodily-self sensations and suggest that out-of-body feelings (OBFs) are consequences of anomalous V-M experiences and precursors to a particular form of autoscopic experience, out-of-body autoscopy (OBA). An additional finding was that vestibular and motor experiences make relatively independent contributions to OBE variance. Although OBEs are superficially consistent with universal dualistic and supernatural intuitions about the nature of the soul and its relation to the body, recent research increasingly offers plausible alternative naturalistic explanations of the relevant phenomenology.

  10. Randomized Controlled Trial Comparing Smartphone Assisted Versus Traditional Guided Self-Help for Adults with Binge Eating

    PubMed Central

    Hildebrandt, Tom; Michaelides, Andreas; Mackinnon, Dianna; Greif, Rebecca; DeBar, Lynn; Sysko, Robyn

    2017-01-01

    Objective: Guided self-help treatments based on cognitive-behavior therapy (CBT-GSH) are efficacious for binge eating. With limited availability of CBT-GSH in the community, mobile technology offers a means to increase use of these interventions. The purpose of this study was to test the initial efficacy of Noom Monitor, a smartphone application designed to facilitate CBT-GSH (CBT-GSH + Noom), on study retention, adherence, and eating disorder symptoms compared to traditional CBT-GSH. Method: Sixty-six men and women with DSM-5 binge eating disorder (BED) or bulimia nervosa (BN) were randomized to receive 8 sessions of CBT-GSH + Noom (n = 33) or CBT-GSH (n = 33) over 12 weeks. Primary symptom outcomes were Eating Disorder Examination objective bulimic episodes (OBEs), subjective bulimic episodes (SBEs), and compensatory behaviors. Assessments were collected at 0, 4, 8, 12, 24, and 36 weeks. Behavioral outcomes were modeled using zero-inflated negative-binomial latent growth curve models with intent-to-treat. Results: There was a significant effect of treatment on change in OBEs (β = −0.84, 95% CI = −1.49, −0.19) favoring CBT-GSH + Noom. Remission rates were not statistically different between treatments for OBEs (β logit = −0.73, 95% CI = −1.86, 3.27; CBT-GSH + Noom = 17/27, 63.0% vs. CBT-GSH = 11/27, 40.7%, NNT = 4.5), but CBT-GSH + Noom participants reported greater meal and snack adherence, and regular meal adherence mediated treatment effects on OBEs. The treatments did not differ at the 6-month follow-up. Discussion: Smartphone applications for the treatment of binge eating appear to have advantages for adherence, a critical component of treatment dissemination. PMID:28960384
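    The number needed to treat quoted in this record follows directly from the two remission proportions (NNT is the reciprocal of the absolute difference in remission rates). As a quick arithmetic check, using only the figures stated in the abstract:

```python
# Arithmetic check of the remission rates and NNT reported above.
# NNT is the reciprocal of the absolute difference in remission rates.
noom_remission = 17 / 27   # CBT-GSH + Noom arm
gsh_remission = 11 / 27    # CBT-GSH arm

nnt = 1 / (noom_remission - gsh_remission)
print(f"{noom_remission:.1%} vs {gsh_remission:.1%} -> NNT = {nnt:.1f}")
# prints: 63.0% vs 40.7% -> NNT = 4.5
```

    The computed values reproduce the 63.0%, 40.7%, and NNT = 4.5 reported in the abstract.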

  11. Flight test maneuvers for closed loop lateral-directional modeling of the F-18 High Alpha Research Vehicle (HARV) using forebody strakes

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) mode and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.

  12. Out-of-Body Experience During Awake Craniotomy.

    PubMed

    Bos, Eelke M; Spoor, Jochem K H; Smits, Marion; Schouten, Joost W; Vincent, Arnaud J P E

    2016-08-01

    The out-of-body experience (OBE), during which a person feels as if he or she is spatially removed from the physical body, is a mystical phenomenon because of its association with near-death experiences. Literature implicates the cortex at the temporoparietal junction (TPJ) as the possible anatomic substrate for OBE. We present a patient who had an out-of-body experience during an awake craniotomy for resection of low-grade glioma. During surgery, stimulation of subcortical white matter in the left TPJ repetitively induced OBEs, in which the patient felt as if she was floating above the operating table looking down on herself. We repetitively induced OBE by subcortical stimulation near the left TPJ during awake craniotomy. Diffusion tensor imaging tractography implicated the posterior thalamic radiation as a possible substrate for autoscopic phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Randomized controlled trial comparing smartphone assisted versus traditional guided self-help for adults with binge eating.

    PubMed

    Hildebrandt, Tom; Michaelides, Andreas; Mackinnon, Dianna; Greif, Rebecca; DeBar, Lynn; Sysko, Robyn

    2017-11-01

    Guided self-help treatments based on cognitive-behavior therapy (CBT-GSH) are efficacious for binge eating. With limited availability of CBT-GSH in the community, mobile technology offers a means to increase use of these interventions. The purpose of this study was to test the initial efficacy of Noom Monitor, a smartphone application designed to facilitate CBT-GSH (CBT-GSH + Noom), on study retention, adherence, and eating disorder symptoms compared to traditional CBT-GSH. Sixty-six men and women with DSM-5 binge-eating disorder (BED) or bulimia nervosa (BN) were randomized to receive eight sessions of CBT-GSH + Noom (n = 33) or CBT-GSH (n = 33) over 12 weeks. Primary symptom outcomes were Eating Disorder Examination objective bulimic episodes (OBEs), subjective bulimic episodes (SBEs), and compensatory behaviors. Assessments were collected at 0, 4, 8, 12, 24, and 36 weeks. Behavioral outcomes were modeled using zero-inflated negative-binomial latent growth curve models with intent-to-treat. There was a significant effect of treatment on change in OBEs (β = -0.84, 95% CI = -1.49, -0.19) favoring CBT-GSH + Noom. Remission rates were not statistically different between treatments for OBEs (β logit = -0.73, 95% CI = -1.86, 3.27; CBT-GSH + Noom = 17/27, 63.0% vs. CBT-GSH = 11/27, 40.7%, NNT = 4.5), but CBT-GSH + Noom participants reported greater meal and snack adherence, and regular meal adherence mediated treatment effects on OBEs. The treatments did not differ at the 6-month follow-up. Smartphone applications for the treatment of binge eating appear to have advantages for adherence, a critical component of treatment dissemination. © 2017 Wiley Periodicals, Inc.

  14. Association between objective and subjective binge eating and psychopathology during a psychological treatment trial for bulimic symptoms.

    PubMed

    Goldschmidt, Andrea B; Accurso, Erin C; Crosby, Ross D; Cao, Li; Ellison, Jo; Smith, Tracey L; Klein, Marjorie H; Mitchell, James E; Crow, Scott J; Wonderlich, Stephen A; Peterson, Carol B

    2016-12-01

    Although loss of control (LOC) while eating is a core construct of bulimia nervosa (BN), questions remain regarding its validity and prognostic significance independent of overeating. We examined trajectories of objective and subjective binge eating (OBE and SBE, respectively; i.e., LOC eating episodes involving an objectively or subjectively large amount of food) among adults participating in psychological treatments for BN-spectrum disorders (n = 80). We also explored whether changes in the frequency of these eating episodes differentially predicted changes in eating-related and general psychopathology and, conversely, whether changes in eating-related and general psychopathology predicted differential changes in the frequency of these eating episodes. Linear mixed models with repeated measures revealed that OBE decreased twice as rapidly as SBE throughout treatment and 4-month follow-up. Generalized linear models revealed that baseline to end-of-treatment reductions in SBE frequency predicted baseline to 4-month follow-up changes in eating-related psychopathology, depression, and anxiety, while changes in OBE frequency were not predictive of psychopathology at 4-month follow-up. Zero-inflation models indicated that baseline to end-of-treatment changes in eating-related psychopathology and depression symptoms predicted baseline to 4-month follow-up changes in OBE frequency, while changes in anxiety and self-esteem did not. Baseline to end-of-treatment changes in eating-related psychopathology, self-esteem, and anxiety predicted baseline to 4-month follow-up changes in SBE frequency, while baseline to end-of-treatment changes in depression did not. Based on these findings, LOC accompanied by objective overeating may reflect distress at having consumed an objectively large amount of food, whereas LOC accompanied by subjective overeating may reflect more generalized distress related to one's eating- and mood-related psychopathology. BN treatments should

  15. Subjective and Objective Binge Eating in Relation to Eating Disorder Symptomatology, Negative Affect, and Personality Dimensions

    PubMed Central

    Brownstone, Lisa M.; Bardone-Cone, Anna M.; Fitzsimmons-Craft, Ellen E.; Printz, Katherine S.; Le Grange, Daniel; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.; Crosby, Ross D.; Klein, Marjorie H.; Wonderlich, Stephen A.; Joiner, Thomas E.

    2013-01-01

    Objective: The current study explored the clinical meaningfulness of distinguishing subjective (SBE) from objective binge eating (OBE) among individuals with threshold/subthreshold bulimia nervosa (BN). We examined relations between OBEs and SBEs and eating disorder symptoms, negative affect, and personality dimensions using both a group comparison and a continuous approach. Method: Participants were 204 adult females meeting criteria for threshold/subthreshold BN who completed questionnaires related to disordered eating, affect, and personality. Results: Group comparisons indicated that SBE and OBE groups did not significantly differ on eating disorder pathology or negative affect, but did differ on two personality dimensions (cognitive distortion and attentional impulsivity). Using the continuous approach, we found that frequencies of SBEs (not OBEs) accounted for unique variance in weight/shape concern, diuretic use frequency, depressive symptoms, anxiety, social avoidance, insecure attachment, and cognitive distortion. Discussion: SBEs in the context of BN may indicate broader areas of psychopathology. PMID:23109272

  16. Transitions and Transformations in Philippine Physics Education Curriculum: A Case Research

    ERIC Educational Resources Information Center

    Morales, Marie Paz E.

    2017-01-01

    Curriculum, curricular transition and reform define transformational outcome-based education (OBE) in the Philippine education system. This study explores how alignment may be done with a special physics education program to suit the OBE curricular agenda for pre-service physics education, known as an outcome-based teacher education curriculum…

  17. Walter Laing Macdonald Perry KT OBE, Baron Perry of Walton, 21 June 1921 - 17 July 2003.

    PubMed

    Kelly, John S; Horlock, John H

    2004-01-01

    of his distinguished careers came with a succession of honours: OBE in 1957, Knight Bachelor in 1974 and Baron in 1979; 10 honorary degrees from UK, North American, College London; the Wellcome Gold Medal in 1993 and the Inaugural Royal Medal of the Royal Society of Edinburgh in 2000. He was Chairman, President or member of numerous commercial, educational, public interest and scientific bodies. Lord Perry's publications included sole or part authorship of approximately 90 books, research papers and abstracts. Shining through Walter Perry's careers are strengths of commitment and sheer hard work, rigorous analysis of scientific, educational and organizational problems, experimentation and pursuit of clear objectives. Against scepticism, elitism and ill-informed criticism he drove through the establishment of the Open University. It is today respected internationally, is by some orders of magnitude our largest university in terms of student enrollment, and is demonstrably a successful outcome from an experiment initiated 40 years ago. It represents a fine monument to Walter Perry.

  18. Developing a Learning Outcome-Based Question Examination Paper Tool for Universiti Putra Malaysia

    ERIC Educational Resources Information Center

    Hassan, Sa'adah; Admodisastro, Novia Indriaty; Kamaruddin, Azrina; Baharom, Salmi; Pa, Noraini Che

    2016-01-01

    Much attention is now given on producing quality graduates. Therefore, outcome-based education (OBE) in teaching and learning is now being implemented in Malaysia at all levels of education especially at higher education institutions. For implementing OBE, the design of curriculum and courses should be based on specified outcomes. Thus, the…

  19. Services for All: Are Outcome-Based Education and Flexible School Structures the Answer?

    ERIC Educational Resources Information Center

    Smith, Sarah J.

    1995-01-01

    This paper discusses the recent controversy over outcome-based education (OBE), arguing that while OBE may be correct in establishing high standards for student learning, its implementation has tended to establish rigid "assembly line" approaches to teaching. A call is made for more flexible and individualized systems that respond to…

  20. Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Batterson, James G. (Technical Monitor); Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
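    The OBES inputs referenced in these maneuver specifications are square waves defined by time/amplitude points. As a minimal sketch of how such an input time history can be represented and sampled (the breakpoint timings and amplitudes below are hypothetical, invented for illustration, not taken from the report):

```python
# Illustrative square-wave input time history of the kind the On-Board
# Excitation System (OBES) applies to a pilot station control. The
# time/amplitude breakpoints below are hypothetical, not from the report.

def square_wave_input(t, breakpoints):
    """Return the commanded amplitude at time t (seconds).

    breakpoints: (start_time, amplitude) pairs sorted by time; the input
    holds each amplitude until the next breakpoint takes effect.
    """
    amplitude = 0.0
    for start, amp in breakpoints:
        if t >= start:
            amplitude = amp
        else:
            break
    return amplitude

# A hypothetical doublet: zero, +1 unit at t=1 s, -1 unit at t=3 s, zero at t=4 s.
doublet = [(0.0, 0.0), (1.0, 1.0), (3.0, -1.0), (4.0, 0.0)]

# Sample the input at 0.5 s intervals over 5 seconds.
history = [(i * 0.5, square_wave_input(i * 0.5, doublet)) for i in range(11)]
```

    An actual OBES maneuver chains several such pulses per control axis; the report's tables of time/amplitude points would replace the hypothetical `doublet` list here.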

  1. Modelling with Integer Variables.

    DTIC Science & Technology

    1984-01-01

    [Retrieved abstract is unrecoverable OCR residue of integer-programming notation. The only recoverable fragment is a cited reference: R. R. Meyer et al., "Computational Comparison of 'Equivalent' Mixed Integer Formulations," Naval Research Logistics Quarterly 28 (1981), pp. 115-131.]

  2. Value and benefits of open-book examinations as assessment for deep learning in a post-graduate animal health course.

    PubMed

    Dale, Vicki H M; Wieland, Barbara; Pirkelbauer, Birgit; Nevel, Amanda

    2009-01-01

    This study provides an overview of the perceptions of alumni in relation to their experience of open-book examinations (OBEs) as post-graduate students. This type of assessment was introduced as a way of allowing these adult learners to demonstrate their conceptual understanding and ability to apply knowledge in practice, which in theory would equip them with the problem-solving skills required for the workplace. This study demonstrates that alumni, shown to be predominantly deep learners, typically regarded OBEs as less stressful than closed-book examinations, and as an effective way to assess the application of knowledge to real-life problems. Additional staff training and student induction, particularly for international students, are suggested as a means of improving the acceptability and effectiveness of OBEs.

  3. Olive oil bioactives protect pigs against experimentally-induced chronic inflammation independently of alterations in gut microbiota

    PubMed Central

    Liehr, Martin; Mereu, Alessandro; Pastor, Jose Javier; Quintela, Jose Carlos; Staats, Stefanie; Rimbach, Gerald; Ipharraguerre, Ignacio Rodolfo

    2017-01-01

    Subclinical chronic inflammation (SCI) is associated with impaired animal growth. Previous work has demonstrated that olive-derived plant bioactives exhibit anti-inflammatory properties that could possibly counteract the growth-depressing effects of SCI. To test this hypothesis and define the underlying mechanism, we conducted a 30-day study in which piglets fed an olive-oil bioactive extract (OBE) and their control counterparts (C+) were injected repeatedly during the last 10 days of the study with increasing doses of Escherichia coli lipopolysaccharides (LPS) to induce SCI. A third group of piglets remained untreated throughout the study and served as a negative control (C-). In C+ pigs, SCI increased the circulating concentration of interleukin 1 beta (p < 0.001) and decreased feed ingestion (p < 0.05) and weight gain (p < 0.05). These responses were not observed in OBE animals. Although intestinal inflammation and colonic microbial ecology was not altered by treatments, OBE enhanced ileal mRNA abundance of tight and adherens junctional proteins (p < 0.05) and plasma recovery of mannitol (p < 0.05) compared with C+ and C-. In line with these findings, OBE improved transepithelial electrical resistance (p < 0.01) in TNF-α-challenged Caco-2/TC-7 cells, and repressed the production of inflammatory cytokines (p < 0.05) in LPS-stimulated macrophages. In summary, this work demonstrates that OBE attenuates the suppressing effect of SCI on animal growth through a mechanism that appears to involve improvements in intestinal integrity unrelated to alterations in gut microbial ecology and function. PMID:28346507

  4. A randomized trial of transdermal and oral estrogen therapy in adolescent girls with hypogonadism.

    PubMed

    Shah, Sejal; Forghani, Nikta; Durham, Eileen; Neely, E Kirk

    2014-01-01

    Adolescent females with ovarian failure require estrogen therapy for induction of puberty and other important physiologic effects. Currently, health care providers have varying practices without evidence-based standards, thus investigating potential differences between oral and transdermal preparations is essential. The purpose of this study was to compare the differential effects of treatment with oral conjugated equine estrogen (OCEE), oral 17β estradiol (OBE), or transdermal 17β estradiol (TBE) on biochemical profiles and feminization in girls with ovarian failure. 20 prepubertal adolescent females with ovarian failure, ages 12-18 years, were randomized to OCEE (n = 8), OBE (n = 7), or TBE (n = 5) for 24 months. Estrogen replacement was initiated at a low dose (0.15 mg OCEE, 0.25 mg OBE, or 0.0125 mg TBE) and doubled every 6 months to a maximum dose of 0.625 mg/d OCEE, 1 mg/d OBE, or 0.05 mg/d TBE. At 18 months, micronized progesterone was added to induce menstrual cycles. Biochemical markers including sex hormones, inflammatory markers, liver enzymes, coagulation factors, and lipids were obtained at baseline and 6 month intervals. Differences in levels of treatment parameters between the groups were evaluated with one-way analysis of variance (ANOVA). The effect of progesterone on biochemical markers was evaluated with the paired t-test. Mean (±SE) estradiol levels at maximum estrogen dose (18 months) were higher in the TBE group (53 ± 19 pg/mL) compared to OCEE (14 ± 5 pg/mL) and OBE (12 ± 5 pg/mL) (p ≤ 0.01). The TBE and OBE groups had more effective feminization (100% Tanner 3 breast stage at 18 months). There were no statistical differences in other biochemical markers between treatment groups at 18 months or after the introduction of progesterone. Treatment with transdermal 17β estradiol resulted in higher estradiol levels and more effective feminization compared to oral conjugated equine estrogen but
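    The dose-escalation scheme described in this trial (start at a low dose and double every 6 months up to a stated maximum) can be written down directly. A minimal sketch using the doses given in the abstract; the schedule helper itself is an illustration, not part of the study protocol:

```python
# Dose-escalation scheme described above: start at a low dose and double
# every 6 months, capping at the stated maximum. Doses (mg/day) are taken
# from the abstract; the schedule helper itself is only an illustration.

def titration_schedule(start_dose, max_dose, months=24, step=6):
    """Dose in effect at the start of each `step`-month interval."""
    schedule, dose = [], start_dose
    for month in range(0, months, step):
        schedule.append((month, round(min(dose, max_dose), 4)))
        dose *= 2
    return schedule

ocee = titration_schedule(0.15, 0.625)    # oral conjugated equine estrogen
obe = titration_schedule(0.25, 1.0)       # oral 17-beta estradiol
tbe = titration_schedule(0.0125, 0.05)    # transdermal 17-beta estradiol
# e.g. ocee == [(0, 0.15), (6, 0.3), (12, 0.6), (18, 0.625)]
```

    Note how each arm reaches its maximum dose by month 18, when micronized progesterone was added.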

  5. A randomized trial of transdermal and oral estrogen therapy in adolescent girls with hypogonadism

    PubMed Central

    2014-01-01

    Background: Adolescent females with ovarian failure require estrogen therapy for induction of puberty and other important physiologic effects. Currently, health care providers have varying practices without evidence-based standards, thus investigating potential differences between oral and transdermal preparations is essential. The purpose of this study was to compare the differential effects of treatment with oral conjugated equine estrogen (OCEE), oral 17β estradiol (OBE), or transdermal 17β estradiol (TBE) on biochemical profiles and feminization in girls with ovarian failure. Study design: 20 prepubertal adolescent females with ovarian failure, ages 12–18 years, were randomized to OCEE (n = 8), OBE (n = 7), or TBE (n = 5) for 24 months. Estrogen replacement was initiated at a low dose (0.15 mg OCEE, 0.25 mg OBE, or 0.0125 mg TBE) and doubled every 6 months to a maximum dose of 0.625 mg/d OCEE, 1 mg/d OBE, or 0.05 mg/d TBE. At 18 months, micronized progesterone was added to induce menstrual cycles. Biochemical markers including sex hormones, inflammatory markers, liver enzymes, coagulation factors, and lipids were obtained at baseline and 6 month intervals. Differences in levels of treatment parameters between the groups were evaluated with one-way analysis of variance (ANOVA). The effect of progesterone on biochemical markers was evaluated with the paired t-test. Results: Mean (±SE) estradiol levels at maximum estrogen dose (18 months) were higher in the TBE group (53 ± 19 pg/mL) compared to OCEE (14 ± 5 pg/mL) and OBE (12 ± 5 pg/mL) (p ≤ 0.01). The TBE and OBE groups had more effective feminization (100% Tanner 3 breast stage at 18 months). There were no statistical differences in other biochemical markers between treatment groups at 18 months or after the introduction of progesterone. Conclusions: Treatment with transdermal 17β estradiol resulted in higher estradiol levels and more effective feminization

  6. Clinical and economic characteristics associated with type 2 diabetes.

    PubMed

    Sicras-Mainar, A; Navarro-Artieda, R; Ibáñez-Nolla, J

    2014-04-01

    Type 2 diabetes mellitus (DM2) is usually accompanied by various comorbidities that can increase the cost of treatment. We are not aware of studies that have determined the costs associated with treating DM2 patients with co-morbidities such as overweight (OW), obesity (OBE) or arterial hypertension (AHT). The aim of the study was to examine the health-related costs and the incidence of cardiovascular disease (CVD) in these patients. Multicenter, observational retrospective design. We included patients 40-99 years of age who requested medical attention in 2010 in Badalona (Barcelona, Spain). There were two study groups: those with DM2 and without DM2 (reference group/control), and six subgroups: DM2-only, DM2-AHT, DM2-OW, DM2-OBE; DM2-AHT-OW and DM2-AHT-OBE. The main outcome measures were: co-morbidity, metabolic syndrome (MS), complications (hypoglycemia, CVD) and costs (health and non-health). Follow-up was carried out for two years. A total of 26,845 patients were recruited. The prevalence of DM2 was 14.0%. Subjects with DM2 were older (67.8 vs. 59.7 years) and more were men (51.3 vs. 43.0%), P<.001. DM2 status was associated primarily with OBE (OR=2.8, CI=2.4-3.1), AHT (OR=2.4, CI=2.2-2.6) and OW (OR=1.9, CI=1.7-2.2). The distribution by subgroups was: 6.7% of patients had only DM2, 26.1% had DM2, AHT and OW, and 34.1% had DM2, AHT, and OBE. Some 75.4% had MS and 37.5% reported an episode of hypoglycemia. The total cost/patient with DM2 was €4,458. By subgroups the costs were as follows: DM2: €3,431; DM2-AHT: €4,075; DM2-OW: €4,057; DM2-OBE: €4,915; DM2-AHT-OW: €4,203 and DM2-AHT-OBE: €5,021, P<.001. The CVD rate among patients with DM2 was 4.7 vs. 1.7% in those without DM2 P<.001. Obesity is a comorbidity associated with DM2 that leads to greater healthcare costs than AHT. The presence of these comorbidities causes increased rates of CVD. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  7. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    PubMed

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  8. The Impact of the 6:3 Polyunsaturated Fatty Acid Ratio on Intermediate Markers of Breast Cancer

    DTIC Science & Technology

    2008-05-01

    [Retrieved abstract is unrecoverable; the fragment consists of duplicated reference entries: Burke L, Hudson A, Styn M, Warziski M, Ulci O, Sereika S. A randomized clinical trial of a standard versus vegetarian diet for weight loss: the impact of treatment preference. Int J Obes. 2008; 32: 166-76; and Burke L, Hudson A, Styn M, Warziski M, Ulci O, Sereika S. Effects of a vegetarian diet and treatment preference on biological and dietary variables in… (truncated).]

  9. The effectiveness of outcome based education on the competencies of nursing students: A systematic review.

    PubMed

    Tan, Katherine; Chong, Mei Chan; Subramaniam, Pathmawathy; Wong, Li Ping

    2018-05-01

    Outcome Based Education (OBE) is a student-centered approach to curriculum design and teaching that emphasizes what learners should know, understand and demonstrate, and how they adapt to life beyond formal education. However, no systematic review has explored the effectiveness of OBE in improving the competencies of nursing students. To appraise and synthesize the best available evidence examining the effectiveness of OBE approaches on the competencies of nursing students, a systematic review of interventional experimental studies was conducted. Eight online databases, namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS, were searched. Relevant studies were identified through electronic database searches without geographical or language filters (limited to articles published from 2006 to 2016), hand-searching journals, and visually scanning references from retrieved studies. Two reviewers independently conducted the quality appraisal of selected studies and data were extracted. Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four were rated as moderate. Studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improve competency in knowledge acquisition, reflected in higher final course grades and cognitive skills; improve clinical skills and nursing core competencies; and yield higher behavioural skills scores while performing clinical skills. Learners' satisfaction was also encouraging, as reported in one of the studies. Only one study reported a negative effect. Although OBE approaches show encouraging effects on the competencies of nursing students, more robust experimental study designs with larger sample sizes, evaluating other outcome measures such as other areas of competency, student satisfaction, and patient outcomes, are needed.

  10. Latent Profile Analysis to Determine the Typology of Disinhibited Eating Behaviors in Children and Adolescents

    PubMed Central

    Vannucci, Anna; Tanofsky-Kraff, Marian; Crosby, Ross D.; Ranzenhofer, Lisa M.; Shomaker, Lauren B.; Field, Sara E.; Mooreville, Mira; Reina, Samantha A.; Kozlosky, Merel; Yanovski, Susan Z.; Yanovski, Jack A.

    2012-01-01

    Objective We used latent profile analysis (LPA) to classify children and adolescents into subtypes based on the overlap of disinhibited eating behaviors—eating in the absence of hunger, emotional eating, and subjective and objective binge eating. Method Participants were 411 youth (8–18y) from the community who reported on their disinhibited eating patterns. A subset (n=223) ate ad libitum from two test meals. Results LPA produced five subtypes that were most prominently distinguished by objective binge eating (OBE; n=53), subjective binge eating (SBE; n=59), emotional eating (EE; n=62), a mix of emotional eating and eating in the absence of hunger (EE-EAH; n=172), and no disinhibited eating (No-DE; n=64). Accounting for age, sex, race, and BMI-z, the four disinhibited eating groups had more problem behaviors than the no disinhibited eating group (p=.001). OBE and SBE subtypes had greater BMI-z, percent fat mass, disordered eating attitudes, and trait anxiety than the EE, EE-EAH, and No-DE subtypes (ps<.01). However, the OBE subtype reported the highest eating concern (p<.001), and the OBE, SBE, and EE subtypes reported higher depressive symptoms than the EE-EAH and No-DE subtypes. Across both test meals, OBE and SBE consumed a lower percent protein and higher percent carbohydrate than the other subtypes (ps<.02), adjusting for age, sex, race, height, lean mass, percent fat mass, and total intake. EE also consumed a greater percent carbohydrate and lower percent fat compared with EE-EAH and No-DE (ps<.03). The SBE subtype consumed the fewest total calories (p=.01). Discussion We conclude that behavioral subtypes of disinhibited eating may be distinguished by psychological characteristics and objective eating behavior. Prospective data are required to determine whether subtypes predict the onset of eating disorders and obesity. PMID:23276121
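
    The latent profile analysis described above is commonly implemented as a finite (Gaussian) mixture model selected by an information criterion. A minimal sketch follows, using simulated standardized scores on four hypothetical disinhibited-eating indicators rather than the study's actual data or software (the study's model specification is not stated here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical standardized scores for 411 youth on four disinhibited-eating
# indicators (EAH, emotional eating, subjective and objective binge eating).
X = rng.normal(size=(411, 4))

# Fit candidate models with 1-6 latent profiles and keep the one with the
# lowest BIC, a common model-selection criterion in LPA.
models = [
    GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(X)
    for k in range(1, 7)
]
bics = [m.bic(X) for m in models]
best = models[int(np.argmin(bics))]

labels = best.predict(X)  # most likely profile membership for each participant
```

    Each participant is then assigned to the profile with the highest posterior probability, mirroring how the five subtypes above would be derived.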

  11. Outcome based education enacted: teachers' tensions in balancing between student learning and bureaucracy.

    PubMed

    Barman, Linda; Silén, Charlotte; Bolander Laksov, Klara

    2014-12-01

    This paper reports on how teachers within health sciences education translate outcome-based education (OBE) into practice when they design courses. The study is an empirical contribution to the debate about outcome- and competency-based approaches in health sciences education. A qualitative method was used to study how teachers from 14 different study programmes designed courses before and after OBE was implemented. Using an interpretative approach, analysis of documents and interviews was carried out. The findings show that teachers enacted OBE either to design for more competency-oriented teaching-learning, or to further detail knowledge and thus move towards reductionism. Teachers mainly understood the outcome-based framework as useful to support students' learning, although the demand for accountability created tension and became a bureaucratic hindrance to design for development of professional competence. The paper shows variations of how teachers enacted the same outcome-based framework for instructional design. These differences can add a richer understanding of how outcome- or competency-based approaches relate to teaching-learning at a course level.

  12. Health-service Use in Women with Binge Eating Disorders

    PubMed Central

    Dickerson, John; DeBar, Lynn; Perrin, Nancy A.; Lynch, Frances; Wilson, G. Terence; Rosselli, Francine; Kraemer, Helena C.; Striegel-Moore, Ruth H.

    2014-01-01

    Objective To compare health-care utilization between participants who met DSM-IV criteria for Binge Eating Disorder (BED) and those engaged in Recurrent Binge Eating (RBE), and to evaluate whether objective binge eating (OBE) days, a key measurement for diagnosing BED, predicted health-care costs. Method We obtained utilization and cost data from electronic medical records to augment patient-reported data for 100 adult female members of a large health maintenance organization (HMO) who were enrolled in a randomized clinical trial to treat binge eating. Results Total costs did not differ between the BED and RBE groups (β=−0.117, z=−0.48, p=0.629), nor did the number of OBE days predict total costs (β=−0.017, z=−1.01, p=0.313). Conclusions Findings suggest that the medical impairment caused by BED, as assessed through health-care costs, may not be greater than that caused by RBE. The current threshold of two OBE days/week as a criterion for BED may need to be reconsidered. PMID:21823138

  13. OD (Organization Development) Interventions that Enhance Equal Opportunity.

    DTIC Science & Technology

    1983-09-01

    Socialization process. Socialization model. Self-esteem. Organization form and structure. Equal Opportunity...itself speaks to the way individuals are socialized into the Navy or a perceived lack of socialization...Today the Equal ... equal opportunity. Analysis of five different dimensions of the socialization process can be thought of as distinct "tactics" which managers (agents

  14. Modeling of beryllium sputtering and re-deposition in fusion reactor plasma facing components

    NASA Astrophysics Data System (ADS)

    Zimin, A. M.; Danelyan, L. S.; Elistratov, N. G.; Gureev, V. M.; Guseva, M. I.; Kolbasov, B. N.; Kulikauskas, V. S.; Stolyarova, V. G.; Vasiliev, N. N.; Zatekin, V. V.

    2004-08-01

    Quantitative characteristics of Be sputtering by hydrogen isotope ions in a magnetron sputtering system, and the microstructure and composition of the sputtered and re-deposited layers, were studied. The energies of the H⁺ and D⁺ ions varied from 200 to 300 eV. The ion flux density was ~3 × 10²¹ m⁻²s⁻¹. The irradiation doses were up to 4 × 10²⁵ m⁻². For modeling of sputtered Be-atom re-deposition at increased deuterium pressures (up to 0.07 torr), a mode of operation with effective return of the atoms to the Be-target surface was implemented. An atomic ratio O/Be ≅ 0.8 was measured in the re-deposited layers. The ratio D/Be decreases from 0.15 at 375 K to 0.05 at 575 K and grows slightly in the presence of carbon and tungsten. The oxygen concentration in the sputtered layers does not exceed 3 at.%. There, the atomic ratio D/Be decreases from 0.07 to 0.03 as the target temperature increases from 350 to 420 K.

  15. A Virtual Out-of-Body Experience Reduces Fear of Death

    PubMed Central

    2017-01-01

    Immersive virtual reality can be used to visually substitute a person's real body with a life-sized virtual body (VB) seen from a first-person perspective. Using real-time motion capture, the VB can be programmed to move synchronously with the real body (visuomotor synchrony), and virtual objects seen to strike the VB can be felt through corresponding vibrotactile stimulation on the actual body (visuotactile synchrony). This setup typically gives rise to a strong perceptual illusion of ownership over the VB. When the viewpoint is lifted up and out of the VB so that the VB is seen below, this may result in an out-of-body experience (OBE). In a two-factor between-groups experiment with 16 female participants per group, we tested how fear of death might be influenced by two different methods for producing an OBE. In an initial embodiment phase, where both groups experienced the same multisensory stimuli, there was a strong feeling of body ownership. Then the viewpoint was lifted up and behind the VB. In the experimental group, once the viewpoint was out of the VB there was no further connection with it (no visuomotor or visuotactile synchrony). In a control condition, although the viewpoint was in the identical place as in the experimental group, visuomotor and visuotactile synchrony continued. While both groups reported high scores on a question about their OBE illusion, the experimental group had a greater feeling of disownership towards the VB below compared with the control group, in line with previous findings. Fear of death in the experimental group was found to be lower than in the control group. This is in line with previous reports that naturally occurring OBEs are often associated with an enhanced belief in life after death. PMID:28068368

  16. Development and Validation of the Eating Loss of Control Scale

    PubMed Central

    Blomquist, Kerstin K.; Roberto, Christina A.; Barnes, Rachel D.; White, Marney A.; Masheb, Robin M.; Grilo, Carlos M.

    2014-01-01

    Recurrent objective bulimic episodes (OBE) are a defining diagnostic characteristic of binge eating disorder (BED) and bulimia nervosa (BN). OBEs are characterized by experiencing loss of control (LOC) while eating an unusually large quantity of food. Despite nosological importance and complex heterogeneity across patients, measurement of LOC has been assessed dichotomously (present/absent). This study describes the development and initial validation of the Eating Loss of Control Scale (ELOCS), a self-report questionnaire that examines the complexity of the LOC construct. Participants were 168 obese treatment-seeking individuals with BED who completed the Eating Disorder Examination interview and self-report measures. Participants rated their LOC-related feelings or behaviors on continuous Likert-type scales and reported the number of LOC episodes in the past 28 days. Principal component analysis identified a single-factor, 18-item scale, which demonstrated good internal reliability (α=0.90). Frequency of LOC episodes was significantly correlated with frequency of OBEs and subjective bulimic episodes. The ELOCS demonstrated good convergent validity and was significantly correlated with greater eating pathology, greater emotion dysregulation, greater depression, and lower self-control, but not with BMI. The findings suggest that the ELOCS is a valid self-report questionnaire that may provide important clinical information regarding experiences of LOC in obese persons with BED. Future research should examine the ELOCS in other eating disorders and non-clinical samples. PMID:24219700
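
    The internal reliability reported above (α=0.90) is Cronbach's alpha, computed from item and total-score variances. A minimal sketch of the standard formula follows, using a hypothetical response matrix rather than the ELOCS data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of individual item variances vs. variance of the total score.
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variance_sum / total_variance)
```

    Perfectly parallel items yield α = 1, and alpha falls as items become less intercorrelated; a scale value such as 0.90 indicates high internal consistency.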

  17. Hα imaging for BeXRBs in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Maravelias, G.; Zezas, A.; Antoniou, V.; Hatzidimitriou, D.; Haberl, F.

    2017-11-01

    The Small Magellanic Cloud (SMC) hosts a large number of high-mass X-ray binaries, and in particular of Be/X-ray binaries (BeXRBs; neutron stars orbiting OBe-type stars), offering a unique laboratory to address the effect of metallicity. One key property of their optical companions is Hα emission, which makes them bright sources when observed through a narrow-band Hα filter. We performed a survey of the SMC Bar and Wing regions using wide-field cameras (WFI@MPG/ESO and MOSAIC@CTIO/Blanco) in order to identify the counterparts of the sources detected in our XMM-Newton survey of the same area. We obtained broad-band R and narrow-band Hα photometry, and identified ~10000 Hα emission sources down to a sensitivity limit of 18.7 mag (equivalent to ~B8-type main-sequence stars). We find the fraction of OBe/OB stars to be 13% down to this limit, and by investigating this fraction as a function of the brightness of the stars we deduce that Hα excess peaks in the O9-B2 spectral range. Using the most up-to-date numbers of SMC BeXRBs, we find their fraction over their parent population to be ~0.002 - 0.025 BeXRBs/OBe, a direct measurement of their formation rate.
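
    Selecting Hα emitters from paired broad-band R and narrow-band Hα photometry amounts to a color cut. The sketch below is illustrative only: the magnitudes are simulated, and the 13% injected emitter fraction, Gaussian error model, and 0.5 mag threshold are assumptions, not the survey's actual criteria:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical catalog: broad-band R and narrow-band Halpha magnitudes.
r_mag = rng.uniform(14.0, 18.7, size=n)
halpha_mag = r_mag - rng.normal(0.0, 0.2, size=n)  # non-emitters: R - Halpha ~ 0

# Inject a hypothetical 13% population of genuine Halpha emitters
# (brighter in the narrow band), mirroring the OBe/OB fraction reported above.
emitters = rng.random(n) < 0.13
halpha_mag = np.where(emitters, halpha_mag - 1.0, halpha_mag)

# Flag emission-line candidates with a simple color cut.
excess = (r_mag - halpha_mag) > 0.5
fraction = excess.mean()
```

    In practice the threshold would be set relative to the photometric errors at each magnitude, so that the recovered fraction can be studied as a function of brightness as described above.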

  18. Herbal remedies and supplements for weight loss

    MedlinePlus

    Weight loss - herbal remedies and supplements; Obesity - herbal remedies; Overweight - herbal remedies ... A, Gutiérrez-Salmeán G. New dietary supplements for obesity: what we currently know. Curr Obes Rep . 2016; ...

  19. Educational strategies for the prevention of diabetes, hypertension, and obesity.

    PubMed

    Machado, Alexandre Paulo; Lima, Bruno Muniz; Laureano, Monique Guilharducci; Silva, Pedro Henrique Bauth; Tardin, Giovanna Pereira; Reis, Paulo Silva; Santos, Joyce Sammara; Jácomo, Domingos; D'Artibale, Eliziana Ferreira

    2016-11-01

    The main goal of this work was to produce a review of educational strategies to prevent diabetes, hypertension, and obesity. The PubMed database was consulted using combined descriptors such as [Prevention], [Educational Activities], [Diabetes], [Hypertension], and [Obesity]. Data from randomized trials published between 2002 and 2014 were included in spreadsheets for analysis in duplicate by the reviewers. A total of 8,908 articles were found, of which 1,539 were selected: diabetes mellitus (DM, n=369), arterial systemic hypertension (ASH, n=200), and obesity (OBES, n=970). The number of free full-text articles available was 1,075 (DM=276, ASH=118, and OBES=681). In most of these studies, demographic characteristics such as gender and age were randomized, and the population was mainly composed of students, ethnic groups, family members, pregnant women, health or education professionals, and patients with chronic diseases (DM, ASH, OBES) or other comorbidities. Group dynamics, physical activity practices, nutritional education, questionnaires, interviews, use of new technologies, personnel training, and workshops were the main intervention strategies used. The most efficient interventions occurred at the community level, whenever the intervention was permanent or maintained for long periods, and relied on the continuous education of community health workers who maintained a constant presence in the covered population. Many studies focused their actions on children and adolescents, especially students, because they were more influenced by educational prevention activities, and the knowledge they acquired would spread more easily to their families and to society.

  20. Management of accidental exposure to HIV: the COREVIH 2011 activity report.

    PubMed

    Rouveix, E; Bouvet, E; Vernat, F; Chansombat, M; Hamet, G; Pellissier, G

    2014-03-01

    Post-exposure prophylaxis (PEP) relies on procedures allowing quick access to treatment in case of accidental exposure to viral risk (AEV). Occupational blood exposure (OBE) affects mainly caregivers; these accidents are monitored and assessed by the inter-regional center for nosocomial infections (C-CLIN), occupational physicians, and infection control units. They are classified apart from sexual exposures, for which there is currently no monitoring. Data were extracted from the COREVIH (steering committee for the prevention of HIV infection) 2011 activity reports (AR), available online. Data collection was performed using a standardized grid. Twenty-four of the 28 AR were available online. A total of 9,920 AEV were reported: 44% OBE and 56% sexual and other exposures. PEP was prescribed in 8% of OBE and in 77% of sexual exposures. The type of PEP was documented in 52% of the cases. Follow-up was poorly documented. AR provide an incomplete and heterogeneous review of exposure management without any standardized data collection. The difficulties encountered in data collection and monitoring are due to differences among care centers (complex patient circuits, multiple actors) and the lack of common dedicated software. Sexual exposures account for 50% of AEV and most are treated, but they are incompletely reported and consequently not analyzed at the regional or national level. A typical AR collection grid is being studied in 2 COREVIH, with the objective of improving collection and obtaining useful national data. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. Complementary And Alternative Medicine In The Military Health System

    DTIC Science & Technology

    2017-01-01

    MTFs reported using diet therapy most often for various types of chronic disease: mainly obesity (80 percent of MTFs), diabetes (77 percent), heart...Strengths and Limitations of Our Study...Training of New CAM Providers

  2. Structure-function analysis of diacylglycerol acyltransferase sequences from 70 organisms

    USDA-ARS?s Scientific Manuscript database

    Diacylglycerol acyltransferases (DGATs) catalyze the final and rate-limiting step of triacylglycerol (TAG) biosynthesis in eukaryotic organisms. Understanding the roles of DGATs will help to create transgenic plants with value-added properties and provide clues for therapeutic intervention for obes...

  3. Dietary DHA reduced downstream endocannabinoid and inflammatory gene expression, epididymal fat mass, and improved aspects of glucose use in muscle in C57BL/6J mice

    USDA-ARS?s Scientific Manuscript database

    Objective: Endocannabinoid system (ECS) overactivation is associated with increased adiposity and likely contributes to type II diabetes risk. Elevated tissue cannabinoid receptor 1 (CB1) and circulating endocannabinoids derived from the n-6 polyunsaturated acid (PUFA) arachidonic acid occur in obes...

  4. 'Have confidence in yourself'.

    PubMed

    2016-06-29

    Former director of nursing at Royal Brompton & Harefield NHS Foundation Trust Caroline Shuldham OBE left the NHS last year to work independently. She celebrates 45 years in nursing this year and is involved in research, teaching, mentoring, inspection and advising on care.

  5. Another Breakthrough, Another Baby Thrown out with the Bathwater

    ERIC Educational Resources Information Center

    Bell, David M.

    2009-01-01

    "Process-oriented pedagogy: facilitation, empowerment, or control?" claims that process-oriented pedagogy (POP) represents the methodological perspective of most practising teachers and that outcomes-based education (OBE) poses a real and present danger to stakeholder autonomy. Whereas POP may characterize methodological practices in the inner…

  6. Adapting an Outcome-Based Education Development Process to Meet Near Real-Time Challenges to Sustainable Agricultural Production

    ERIC Educational Resources Information Center

    Halbleib, Mary L.; Jepson, Paul C.

    2015-01-01

    Purpose: This paper examines the benefits of using an outcome-based education (OBE) method within agricultural extension outreach programmes for professional and farmer audiences. Design/Methodology/Approach: The method is elaborated through two practical examples, which show that focused, short-duration programmes can produce meaningful skill…

  7. Cognitive-behavioral therapy for subthreshold bulimia nervosa: A case series.

    PubMed

    Peterson, C B; Miller, K B; Willer, M G; Ziesmer, J; Durkin, N; Arikian, A; Crow, S J

    2011-09-01

    The extent to which cognitive-behavioral therapy (CBT) is helpful in treating individuals with bulimic symptoms who do not meet full criteria for bulimia nervosa is unclear. The purpose of this investigation was to examine the potential efficacy of CBT for such individuals. Twelve participants with subthreshold bulimia nervosa were treated in a case series with 20 sessions of CBT. Ten of the 12 participants (83.3%) completed treatment. Intent-to-treat abstinence percentages at end of treatment were 75.0% for objectively large episodes of binge eating (OBEs), 33.3% for subjectively large episodes of binge eating (SBEs), and 50.0% for purging. At one-year follow-up, 66.7% were abstinent for OBEs, 41.7% for SBEs, and 50.0% for purging. The majority also reported improvements in associated symptoms. This case series provides support for the use of CBT with individuals with subthreshold bulimia nervosa.

  8. The Evolution of Electronic Pedagogy in an Outcome Based Learning Environment: Learning, Teaching, and the Culture of Technology at California's Newest University--CSU Monterey Bay.

    ERIC Educational Resources Information Center

    Baldwin, George

    California State University Monterey Bay (CSUMB) is the newest university in the CSU system. CSUMB's vision statement distinguishes the institution from others in the system by promoting learning paradigms of Outcome Based Education (OBE) and communication technologies of distributed learning (DL). Faculty are committed to the experimental use of…

  9. Predicting weight status stability and change from fifth grade to eighth grade: the significant role of adolescents' social-emotional well-being.

    PubMed

    Chang, Yiting; Gable, Sara

    2013-04-01

    The primary objective of this study was to predict weight status stability and change across the transition to adolescence using parent reports of child and household routines and teacher and child self-reports of social-emotional development. Data were from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K), a nationally representative sample of children who entered kindergarten during 1998-1999 and were followed through eighth grade. At fifth grade, parents reported on child and household routines, and the study child and his/her primary classroom teacher reported on the child's social-emotional functioning. At fifth and eighth grade, children were directly weighed and measured at school. Nine mutually exclusive weight trajectory groups were created to capture stability or change in weight status from fifth to eighth grade: (1) stable obese (ObeSta); (2) obese to overweight (ObePos1); (3) obese to healthy (ObePos2); (4) stable overweight (OverSta); (5) overweight to healthy (OverPos); (6) overweight to obese (OverNeg); (7) stable healthy (HelSta); (8) healthy to overweight (HelNeg1); and (9) healthy to obese (HelNeg2). Except for breakfast consumption at home, school-provided lunches, and nighttime sleep duration, household and child routines did not predict stability or change in weight status. Instead, weight status trajectory across the transition to adolescence was significantly predicted by measures of social-emotional functioning at fifth grade. Assessing children's social-emotional well-being, in addition to their lifestyle routines, during the transition to adolescence is a noteworthy direction for adolescent obesity prevention and intervention. Copyright © 2013 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
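
    The nine trajectory groups above are simply the cross of fifth-grade and eighth-grade weight status. A sketch of the mapping, taken directly from the abstract (the function name and lowercase status strings are illustrative):

```python
# Keys are (fifth-grade status, eighth-grade status); values are the
# trajectory labels listed in the abstract.
TRAJECTORY = {
    ("obese", "obese"): "ObeSta",
    ("obese", "overweight"): "ObePos1",
    ("obese", "healthy"): "ObePos2",
    ("overweight", "overweight"): "OverSta",
    ("overweight", "healthy"): "OverPos",
    ("overweight", "obese"): "OverNeg",
    ("healthy", "healthy"): "HelSta",
    ("healthy", "overweight"): "HelNeg1",
    ("healthy", "obese"): "HelNeg2",
}

def trajectory_group(fifth_grade: str, eighth_grade: str) -> str:
    """Map a pair of weight statuses to its trajectory label."""
    return TRAJECTORY[(fifth_grade, eighth_grade)]
```

    With three statuses at each time point, the 3 × 3 cross yields exactly the nine mutually exclusive groups used in the analysis.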

  10. Race/Ethnicity, Education, and Treatment Parameters as Moderators and Predictors of Outcome in Binge Eating Disorder

    PubMed Central

    Thompson-Brenner, Heather; Franko, Debra L.; Thompson, Douglas R.; Grilo, Carlos M.; Boisseau, Christina L.; Roehrig, James P.; Richards, Lauren K.; Bryson, Susan W.; Bulik, Cynthia M.; Crow, Scott J.; Devlin, Michael J.; Gorin, Amy A.; Kristeller, Jean L.; Masheb, Robin; Mitchell, James E.; Peterson, Carol B.; Safer, Debra L.; Striegel, Ruth H.; Wilfley, Denise E.; Wilson, G. Terence

    2014-01-01

    Objective Binge eating disorder (BED) is prevalent among individuals from minority racial/ethnic groups and among individuals with lower levels of education, yet the efficacy of psychosocial treatments for these groups has not been examined in adequately powered analyses. This study investigated the relative variance in treatment retention and post-treatment symptom levels accounted for by demographic, clinical, and treatment variables as moderators and predictors of outcome. Method Data were aggregated from eleven randomized, controlled trials of psychosocial treatments for BED conducted at treatment sites across the United States. Participants were N = 1,073 individuals meeting criteria for BED including n = 946 Caucasian, n = 79 African American, and n = 48 Hispanic/Latino participants. Approximately 86% had some higher education; 85% were female. Multi-level regression analyses examined moderators and predictors of treatment retention, Eating Disorder Examination (EDE) global score, frequency of objective bulimic episodes (OBEs), and OBE remission. Results Moderator analyses of race/ethnicity and education were non-significant. Predictor analyses revealed African Americans were more likely to drop out of treatment than Caucasians, and lower level of education predicted greater post-treatment OBEs. African Americans showed a small but significantly greater reduction in EDE global score relative to Caucasians. Self-help treatment administered in a group showed negative outcomes relative to other treatment types, and longer treatment was associated with better outcome. Conclusions Observed lower treatment retention among African Americans and lesser treatment effects for individuals with lower levels of educational attainment are serious issues requiring attention. Reduced benefit was observed for shorter treatment length and self-help administered in groups. PMID:23647283

  11. Outcomes-Based Education Integration in Home Economics Program: An Evaluative Study

    ERIC Educational Resources Information Center

    Limon, Mark Raguindin; Castillo Vallente, John Paul

    2016-01-01

    This study examined the factors that affect the integration of Outcomes-Based Education (OBE) in the Home Economics (HE) education curriculum of the Technology and Livelihood Education (TLE) program of a State University in the northern part of the Philippines. Descriptive survey and qualitative design were deployed to gather, analyze, and…

  12. Skeletal muscle Sirt3 expression and mitochondrial respiration are regulated by a prenatal low protein diet

    USDA-ARS?s Scientific Manuscript database

    Malnutrition during the fetal growth period increases risk for later obesity and type 2 diabetes mellitus (T2DM). We have shown that a prenatal low protein (8% protein; LP) diet followed by postnatal high fat (45% fat; HF) diet results in offspring propensity for adipose tissue catch-up growth, obes...

  13. The Implementation of the New Lower Secondary Science Curriculum in Three Schools in Rwanda

    ERIC Educational Resources Information Center

    Nsengimana, Théophile; Ozawa, Hiroaki; Chikamori, Kensuke

    2014-01-01

    In 2006, Rwanda began implementing an Outcomes Based Education (OBE) lower secondary science curriculum that emphasises a student-centred approach. The new curriculum was designed to transform Rwandan society from an agricultural to a knowledge-based economy, with special attention to science and technology education. Up until this point in time…

  14. Outcome (Competency) Based Education: An Exploration of Its Origins, Theoretical Basis, and Empirical Evidence

    ERIC Educational Resources Information Center

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…

  15. Physical Education, Sport and Recreation: A Triad Pedagogy of Hope

    ERIC Educational Resources Information Center

    van Deventer, K. J.

    2011-01-01

    Bloch (2009, 58), a previous advocate of Outcomes-based Education (OBE), states that "schooling in SA" is a national disaster. Quality holistic education that includes Physical Education (PE) and school sport should be the focal point of progress in developing countries. However, PE is in a political crisis worldwide, and the situation is…

  16. Policy Enacted--Teachers' Approaches to an Outcome-Based Framework for Course Design

    ERIC Educational Resources Information Center

    Barman, Linda; Bolander-Laksov, Klara; Silén, Charlotte

    2014-01-01

    In this paper, we report on how teachers in Higher Education enact policy. Outcome-based education (OBE) serves as an example of a governmental educational policy introduced with the European Bologna reform. With a hermeneutic approach, we have studied how 14 teachers interpreted this policy and re-designed their courses. The findings show…

  17. Shaping the Culture of Schooling: The Rise of Outcome-Based Education. SUNY Series, Education and Culture: Critical Factors in the Formation of Character and Community in American Life.

    ERIC Educational Resources Information Center

    Desmond, Cheryl Taylor

    In Johnson City, New York, the schools have sustained positive, meaningful educational change since 1964. The Johnson City schools have also given birth to the national movement of Outcome-Based Education (OBE). This book provides a cultural history of the relationship between community and school in school reform. The book describes the…

  18. Tobephobia Experienced by Teachers in Secondary Schools: An Exploratory Study Focusing on Curriculum Reform in the Nelson Mandela Metropole

    ERIC Educational Resources Information Center

    Singh, P.

    2011-01-01

    Because of its history from apartheid to democracy, the aspiration to reform schools is a recurrent theme in South African education. Efforts to reform education in schools based on the outcomes-based education (OBE) curriculum approach created major challenges for policy makers in South Africa. The purpose of this exploratory research was…

  19. Outcome-Based Education and Student Learning in Managerial Accounting in Hong Kong

    ERIC Educational Resources Information Center

    Lui, Gladie; Shum, Connie

    2012-01-01

    Although Outcome-based Education has not been successful in public education in several countries, it has been successful in the medical fields in higher education in the U.S. The author implemented OBE in her Managerial Accounting course in H.K. Intended learning outcomes were mapped against Bloom's Cognitive Domain. Teaching and learning…

  20. Student Teachers' Views: What Is an Interesting Life Sciences Curriculum?

    ERIC Educational Resources Information Center

    de Villiers, Rian

    2011-01-01

    In South Africa, the Grade 12 "classes of 2008 and 2009" were the first to write examinations under the revised Life Sciences (Biology) curriculum which focuses on outcomes-based education (OBE). This paper presents an exploration of what students (as learners) considered to be difficult and interesting in Grades 10-12 Life Sciences…

  1. Prevalence of metilentetrahidrofolate reductase C677T polymorphism, consumption of vitamins B6, B9, B12 and determination of lipidic hydroperoxides in obese and normal weight Mexican population.

    PubMed

    Hernández-Guerrero, César; Romo-Palafox, Inés; Díaz-Gutiérrez, Mary Carmen; Iturbe-García, Mariana; Texcahua-Salazar, Alejandra; Pérez-Lizaur, Ana Bertha

    2013-11-01

    Oxidative stress is a key factor in the development of the principal comorbidities of obesity. The methylenetetrahydrofolate reductase enzyme (MTHFR) participates in the metabolism of folate together with vitamins B6 and B12. The MTHFR gene may present a single nucleotide polymorphism (SNP) at position 677 (C677T), which can promote homocysteinemia associated with the production of free radicals. The objectives were to determine the frequency of the SNP C677T of MTHFR, evaluate the consumption of vitamins B6, B9, and B12, and determine the concentration of plasma lipid hydroperoxides (LOOH) in obese and control groups. A total of 128 Mexican mestizo subjects were classified according to their body mass index as normal weight (Nw; n=75) or obese (ObeI-III; n=53). Identification of the SNP C677T of MTHFR was performed by the PCR-RFLP technique. The consumption of vitamins B6, B9, and B12 was assessed by a validated survey. LOOH was determined as an indicator of peripheral oxidative stress. There was no statistical difference in the frequency of the TT homozygous genotype between Nw (0.19) and ObeI-III (0.25). The frequency of the T allele was 0.45 in the Nw group and 0.51 in the ObeI-III group. There were no statistical differences in the consumption of vitamins B6, B9, and B12 between the Nw and ObeI-III groups. LOOH showed a statistical difference (p < 0.05) between the Nw and ObeI-III groups. Oxidative stress is present in all grades of obesity, although there were no differences in vitamin consumption or the SNP C677T between the Nw and ObeI-III groups. Copyright AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
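
    Allele frequencies like those reported above follow from simple gene counting on the genotype counts. A minimal sketch; the example counts are hypothetical, chosen only to be consistent with the reported Nw T-allele frequency (~0.45), not taken from the study:

```python
def allele_frequencies(n_cc: int, n_ct: int, n_tt: int) -> tuple:
    """C and T allele frequencies from C677T genotype counts (gene counting).

    Each subject carries two alleles: homozygotes contribute two copies of
    one allele, heterozygotes one copy of each.
    """
    total_alleles = 2 * (n_cc + n_ct + n_tt)
    t_freq = (2 * n_tt + n_ct) / total_alleles
    return 1.0 - t_freq, t_freq

# Hypothetical counts for 75 normal-weight subjects: 23 CC, 37 CT, 15 TT.
c_freq, t_freq = allele_frequencies(23, 37, 15)
```

    With these hypothetical counts the T-allele frequency is (2·15 + 37)/150 ≈ 0.45, matching the value reported for the Nw group.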

  2. Linkages between Total Quality Management and the Outcomes-Based Approach in an Education Environment

    ERIC Educational Resources Information Center

    de Jager, H. J.; Nieuwenhuis, F. J.

    2005-01-01

    South Africa has embarked on a process of education renewal by adopting outcomes-based education (OBE). This paper focuses on the linkages between total quality management (TQM) and the outcomes-based approach in an education context. Quality assurance in academic programmes in higher education in South Africa is, in some instances, based on the…

  3. Sexuality Education in South Africa: Three Essential Questions

    ERIC Educational Resources Information Center

    Francis, Dennis A.

    2010-01-01

    Sex education is the cornerstone on which most HIV/AIDS prevention programmes rest and since the adoption of Outcomes-Based Education (OBE), has become a compulsory part of the South African school curriculum through the Life Orientation learning area. However, while much focus has been on providing young people with accurate and frank information…

  4. Implications of Outcomes-Based Education for Children with Disabilities. Synthesis Report 6.

    ERIC Educational Resources Information Center

    Thurlow, Martha L.

    This paper examines the concept of "outcomes-based education" (OBE), how it was developed, how it relates to other current reforms that encompass the notion of outcomes, and how it relates to students with disabilities in theory and in practice. Outcomes-based education holds that all children can learn and succeed and that schools are…

  5. Flight Test of the F/A-18 Active Aeroelastic Wing Airplane

    NASA Technical Reports Server (NTRS)

    Voracek, David

    2007-01-01

A viewgraph presentation of flight tests performed on the F/A-18 Active Aeroelastic Wing airplane is shown. The topics include: 1) F/A-18 AAW Airplane; 2) F/A-18 AAW Control Surfaces; 3) Flight Test Background; 4) Roll Control Effectiveness Regions; 5) AAW Design Test Points; 6) AAW Phase I Test Maneuvers; 7) OBES Pitch Doublets; 8) OBES Roll Doublets; 9) AAW Aileron Flexibility; 10) Phase I - Lessons Learned; 11) Control Law Development and Verification & Validation Testing; 12) AAW Phase II RFCS Envelopes; 13) AAW 1-g Phase II Flight Test; 14) Region I - Subsonic 1-g Rolls; 15) Region I - Subsonic 1-g 360 Roll; 16) Region II - Supersonic 1-g Rolls; 17) Region II - Supersonic 1-g 360 Roll; 18) Region III - Subsonic 1-g Rolls; 19) Roll Axis HOS/LOS Comparison Region II - Supersonic (open-loop); 20) Roll Axis HOS/LOS Comparison Region II - Supersonic (closed-loop); 21) AAW Phase II Elevated-g Flight Test; 22) Region I - Subsonic 4-g RPO; and 23) Phase II - Lessons Learned

  6. Altus AFB, Oklahoma Revised Uniform Summary of Surface Weather Observations (RUSSWO). Parts A-F.

    DTIC Science & Technology

    1985-09-01

[Scanned record; only fragments are legible:] Air Weather Service (MAC), Revised Uniform Summary of Surface Weather Observations, Altus AFB, OK ... percentage frequency of occurrence of ceiling versus visibility from hourly observations, USAFETAC, Air Weather Service/MAC, station number 123520.

  7. Implementation of learning outcome attainment measurement system in aviation engineering higher education

    NASA Astrophysics Data System (ADS)

    Salleh, I. Mohd; Mat Rani, M.

    2017-12-01

This paper discusses the effectiveness of the Learning Outcome Attainment Measurement System in assisting Outcome Based Education (OBE) for aviation engineering higher education in Malaysia. Direct assessments are discussed to show the implementation processes that play a key role in a successful outcome measurement system. The case study presented in this paper investigates the implementation of the system in the Aircraft Structure course of the Bachelor in Aircraft Engineering Technology program at UniKL-MIAT. Data were collected for five semesters, from July 2014 until July 2016. The study instruments include the reports generated in the Learning Outcome Attainment Measurement System (LOAMS), which contain course learning outcome (CLO) information: individual and course-average performance reports. Analysis of the LOAMS reports revealed a significant positive correlation between the individual and average performance reports. Analysis of variance further revealed a significant difference in OBE grade scores among the reports. Independent-samples F-test results, on the other hand, indicate that the variances of the two populations are unequal.
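The analyses described (a correlation between individual and course-average reports, and an F ratio comparing sample variances) can be sketched with synthetic scores; the data and variable names below are illustrative stand-ins, not the study's LOAMS records.

```python
import random
from statistics import mean, variance

random.seed(0)
# Synthetic CLO attainment scores standing in for LOAMS reports (illustrative only)
individual = [random.gauss(70, 10) for _ in range(50)]
course_avg = [0.6 * x + random.gauss(28, 4) for x in individual]  # tracks individual scores

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(individual, course_avg)                  # strong positive correlation expected
f_stat = variance(individual) / variance(course_avg)   # F ratio of the two sample variances
```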

  8. Microbial Fuel Cell Transformation of Recalcitrant Organic Compounds in Support of Biosensor Research

    DTIC Science & Technology

    2014-03-27

simulant similar in structure to sarin (Obee and Satyapal, 1998). Literature on the biodegradation of DMMP is limited. In 2005, the DMMP Consortium...undergoes fermentation to acetate and hydrogen. Other substrates, such as sugars, may ferment to ethanol first. Current production occurs from...the ARB utilization of the fermentation product acetate, but electrons are lost in the form of hydrogen to methanogenesis. Therefore, the current

  9. Combined Induction of Rubber-Hand Illusion and Out-of-Body Experiences

    PubMed Central

    Olivé, Isadora; Berthoz, Alain

    2012-01-01

The emergence of self-consciousness depends on several processes: those of body ownership, attributing self-identity to the body, and those of self-location, localizing our sense of self. Studies of phenomena like the rubber-hand illusion (RHi) and out-of-body experience (OBE) investigate these processes, respectively for representations of a body-part and the full-body. It is supposed that RHi targets only processes related to body-part representations, while OBE relates only to full-body representations. The fundamental question of whether the body-part and the full-body illusions relate to each other is nevertheless insufficiently investigated. In search of a link between body-part and full-body illusions in the brain, we developed a behavioral task combining adapted versions of the RHi and OBE. Furthermore, for the investigation of this putative link we investigated the role of sensory and motor cues. We established a spatial dissociation between visual and proprioceptive feedback of a hand perceived through virtual reality in rest or action. Two experimental measures were introduced: one for the body-part illusion, the proprioceptive drift of the perceived localization of the hand, and one for the full-body illusion, the shift in subjective-straight-ahead (SSA). In the rest and action conditions it was observed that the proprioceptive drift of the left hand and the shift in SSA toward the manipulation side are equivalent. The combined effect was dependent on the manipulation of the visual representation of body parts, rejecting any main or even modulatory role for relevant motor programs. Our study demonstrates for the first time that there is a systematic relationship between the body-part illusion and the full-body illusion, as shown by our measures. This suggests a link between the representations in the brain of a body-part and the full-body, and consequently a common mechanism underpinning both forms of ownership and self-location. PMID:22675312

  10. Hugh Alistair Reid OBE MD: investigation and treatment of snake bite.

    PubMed

    Hawgood, B J

    1998-03-01

    Alistair Reid was an outstanding clinician, epidemiologist and scientist. At the Penang General Hospital, Malaya, his careful observation of sea snake poisoning revealed that sea snake venoms were myotoxic in man leading to generalized rhabdomyolysis, and were not neurotoxic as observed in animals. In 1961, Reid founded and became the first Honorary Director of the Penang Institute of Snake and Venom Research. Effective treatment of sea snake poisoning required specific antivenom which was produced at the Commonwealth Serum Laboratories in Melbourne from Enhydrina schistosa venom supplied by the Institute. From the low frequency of envenoming following bites, Reid concluded that snakes on the defensive when biting man seldom injected much venom. He provided clinical guidelines to assess the degree of envenoming, and the correct dose of specific antivenom to be used in the treatment of snake bite in Malaya. Reid demonstrated that the non-clotting blood of patients bitten by the pit viper, Calloselasma rhodostoma [Ancistrodon rhodostoma] was due to venom-induced defibrination. From his clinical experience of these patients, Reid suggested that a defibrinating derivative of C. rhodostoma venom might have a useful role in the treatment of deep vein thrombosis. This led to Arvin (ancrod) being used clinically from 1968. After leaving Malaya in 1964, Alistair Reid joined the staff of the Liverpool School of Tropical Medicine, as Senior Lecturer. Enzyme-linked immunosorbent assay (ELISA) for detecting and quantifying snake venom and venom-antibody was developed at the Liverpool Venom Research Unit: this proved useful in the diagnosis of snake bite, in epidemiological studies of envenoming patterns, and in screening of antivenom potency. In 1977, Dr H. Alistair Reid became Head of the WHO Collaborative Centre for the Control of Antivenoms based at Liverpool.

  11. Integrating geographic information systems and remote sensing with spatial econometric and mixed logit models for environmental valuation

    NASA Astrophysics Data System (ADS)

    Wells, Aaron Raymond

    This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from 57.42 to 25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands. Results of robust
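The first-order spatial autoregressive (SAR) specification mentioned above, y = ρWy + Xβ + ε, can be illustrated by simulating its reduced form y = (I − ρW)⁻¹(Xβ + ε). The weights matrix, parameter values, and covariate interpretation below are illustrative assumptions, not estimates from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5

# Row-standardized spatial weights for 5 parcels on a line (neighbors share a border)
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

rho, beta = 0.4, 2.0        # illustrative spatial-lag and slope parameters
x = rng.normal(size=n)      # a single covariate (e.g., a hypothetical forest-cover share)
eps = rng.normal(scale=0.1, size=n)

# Reduced form: y = (I - rho W)^{-1} (x beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, x * beta + eps)
```

Solving the reduced form rather than iterating makes the spatial feedback explicit: each location's outcome depends on its neighbors' outcomes through ρW.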

  12. An Implantable Neuroprosthetic Device to Normalize Bladder Function after SCI

    DTIC Science & Technology

    2014-12-01

intermittent vagal block using an implantable medical device. Surgery for Obesity and Related Diseases 5, 224-230. 8. Frankenhaeuser B (1960...of vagal blockade to induce weight loss in morbid obesity. Obes Surg 2012;22:1771–1782. 15. Waataja JJ, Tweden KS, Honda CN. Effects of high-frequency...and Rosenblueth 1939; Rosenblueth and Reboul 1939). Recently this nerve block method has been applied to treat obesity (Camilleri et al. 2009; Waataja

  13. Immunochemistry of Rat Lung Tumorigenesis

    DTIC Science & Technology

    1983-01-01

in the autochthonous host (Prehn, 1957; Klein et al., 1960). A large number of chemically induced tumors were shown to be antigenic (Baldwin, 1967... Prehn, 1962). Even tumors induced by physical means such as ultraviolet radiation possess neoantigens, although their antigenicity is weak (Klein et al...Immune System, Raven Press, New York. Basombrio, M.A., and Prehn, R.T. (1972). Cancer Res. 32:2545-2550. Beck, B., and Obe, G. (1975). Humangenetik 29

  14. Effect of Internet-Based Guided Self-help vs Individual Face-to-Face Treatment on Full or Subsyndromal Binge Eating Disorder in Overweight or Obese Patients: The INTERBED Randomized Clinical Trial.

    PubMed

    de Zwaan, Martina; Herpertz, Stephan; Zipfel, Stephan; Svaldi, Jennifer; Friederich, Hans-Christoph; Schmidt, Frauke; Mayr, Andreas; Lam, Tony; Schade-Brittinger, Carmen; Hilbert, Anja

    2017-10-01

    Although cognitive behavioral therapy (CBT) represents the criterion standard for treatment of binge eating disorder (BED), most individuals do not have access to this specialized treatment. To evaluate the efficacy of internet-based guided self-help (GSH-I) compared with traditional, individual face-to-face CBT. The Internet and Binge Eating Disorder (INTERBED) study is a prospective, multicenter, randomized, noninferiority clinical trial (treatment duration, 4 months; follow-ups, 6 months and 1.5 years). A volunteer sample of 178 adult outpatients with full or subsyndromal BED were recruited from 7 university-based outpatient clinics from August 1, 2010, through December 31, 2011; final follow-up assessment was in April 2014. Data analysis was performed from November 30, 2014, to May 27, 2015. Participants received 20 individual face-to-face CBT sessions of 50 minutes each or sequentially completed 11 internet modules and had weekly email contacts. The primary outcome was the difference in the number of days with objective binge eating episodes (OBEs) during the previous 28 days between baseline and end of treatment. Secondary outcomes included OBEs at follow-ups, eating disorder and general psychopathologic findings, body mass index, and quality of life. A total of 586 patients were screened, 178 were randomized, and 169 had at least one postbaseline assessment and constituted the modified intention-to-treat analysis group (mean [SD] age, 43.2 [12.3] years; 148 [87.6%] female); the 1.5-year follow-up was available in 116 patients. The confirmatory analysis using the per-protocol sample (n = 153) failed to show noninferiority of GSH-I (adjusted effect, 1.47; 95% CI, -0.01 to 2.91; P = .05). Using the modified intention-to-treat sample, GSH-I was inferior to CBT in reducing OBE days at the end of treatment (adjusted effect, 1.63; 95% CI, 0.17-3.05; P = .03). Exploratory longitudinal analyses also showed the superiority of CBT over GSH-I by the 6-month
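The noninferiority logic behind the confirmatory analysis reduces to comparing the upper bound of the confidence interval for the adverse-direction difference against a prespecified margin. A minimal sketch of that decision rule; the margin value used in the example is hypothetical, since the abstract does not state the trial's margin.

```python
def noninferiority_verdict(ci_upper: float, margin: float) -> str:
    """A treatment is noninferior only if the entire CI for the adverse-direction
    difference (here, extra OBE days relative to CBT) lies below the margin."""
    return "noninferior" if ci_upper < margin else "not shown noninferior"

# Per-protocol result from the abstract: adjusted effect 1.47, 95% CI -0.01 to 2.91.
# The margin of 2.5 below is a hypothetical illustration, not the trial's actual margin.
print(noninferiority_verdict(ci_upper=2.91, margin=2.5))  # prints "not shown noninferior"
```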

  15. Reflective mirrors: perspective-taking in autoscopic phenomena.

    PubMed

    Brugger, Peter

    2002-08-01

Autoscopic phenomena refer to different illusory reduplications of one's own body and self. This article proposes a phenomenological differentiation of autoscopic reduplication into three distinct classes, i.e., autoscopic hallucinations, heautoscopy, and out-of-body experiences (OBEs). Published cases are analysed with special emphasis on the subject's point of view from which the reduplication is observed. In an autoscopic hallucination the observer's perspective is clearly body-centred, and the visual image of one's own body appears as a mirror reversal. Heautoscopy (i.e., the encounter with an alter ego or doppelgänger), is defined as a reduplication not only of bodily appearance, but also of aspects of one's psychological self. The observer's perspective may alternate between egocentric and "alter-ego-centred". As a consequence of the projection of bodily feelings into the doppelgänger (implying a mental rotation of one's own body along the vertical axis), original and reduplicated bodies are not mirror images of one another. This also holds for OBEs, where one's self is not reduplicated but appears to be completely dissociated from the body and observing it from a location in extracorporeal space. It is argued that perspective-taking in a spatial sense may be meaningfully related to perspective-taking in a psychological sense. The mirror in the autoscopic hallucination is a "cognitively nonreflective mirror" (Jean Cocteau), both spatially and psychologically. The reflective abilities of the heautoscopic mirror are better developed, yet frequent shifts in the observer's spatial perspective render the nature of psychological interactions between self and alter ego highly unpredictable. The doppelgänger may serve a transitivistic (i.e., own suffering is transferred to the alter ego) or aggressive function when this behaviour is directed against a patient. The mirror in an OBE is always reflective: It allows the self to view both space and one

  16. Medical Surveillance Monthly Report

    DTIC Science & Technology

    2016-10-01

women aged 30–70 years suffering from OSA.2 The prevalence of OSA has been rising and is associated with changing obesity prevalence, as obesity is...information on the burden of disease in military subpopulations and the association of obesity with OSA. METHODS: The surveillance period for this...of the Defense Manpower Data Center), marital status, and obesity status. To calculate obese person-time during the surveillance period

  17. Relativistic extended Thomas-Fermi calculations with exchange term contributions

    NASA Astrophysics Data System (ADS)

    Haddad, S.; Weigel, M. K.

    1994-10-01

In this investigation we present self-consistent relativistic extended Thomas-Fermi (ETF) and extended Thomas-Fermi-Fock (ETFF) approaches, derived from the semiclassical treatment of the relativistic nuclear Hartree-Fock problem. The approximations are used to describe the ground-state properties of finite nuclei. The resulting equations are solved numerically for several one-boson-exchange (OBE) Lagrangians. The results are discussed and compared with the outcome of full quantal Hartree and Hartree-Fock calculations, other semiclassical treatments, and experimental data.

  18. Long-term infusions of ghrelin and obestatin in early lactation dairy cows.

    PubMed

    Roche, J R; Sheahan, A J; Chagas, L M; Blache, D; Berry, D P; Kay, J K

    2008-12-01

Ghrelin is an endogenous ligand of the growth hormone secretagogue receptor and a potential orexigenic agent in monogastrics and ruminants. Obestatin has been reported to have the opposite (anorexigenic) effect. Fifty-one multiparous cows were randomly allocated to 1 of 3 groups (n = 17): a control group and 2 groups with cows continuously infused with 0.74 μmol/d of ghrelin (GHR group) or obestatin (OBE group) subcutaneously. Infusions began 21 d in milk, and treatments continued for 8 wk. Generalized linear models were used to determine the treatment effect on average daily and cumulative milk production and composition, and plasma ghrelin, growth hormone, insulin-like growth factor (IGF)-1, leptin, nonesterified fatty acids, and glucose. Mixed models, with cow included as a repeated effect, were used to determine if treatment effects differed by week postcalving for milk production, body weight, and body condition score (BCS; scale 1 to 10). Parity, breed, week of the year at calving, treatment, week postcalving, and the 2-wk preexperimental average of each measure (covariate) were included as fixed effects. Treatment did not affect dry matter intake. Cows infused with GHR lost more BCS (-0.71 units) over the 8-wk study period than the control (-0.23 BCS units) cows, and on average were thinner than cows in either of the other 2 treatments (0.2 BCS units). Consistent with the extra BCS loss in GHR cows, plasma IGF-1, glucose, and leptin concentrations were reduced and plasma nonesterified fatty acid concentrations were greater in GHR cows. Despite a numerical tendency for GHR cows to produce more milk (1,779 kg) than control (1,681 kg) or OBE (1,714 kg) cows during the 8-wk period, milk production differences were not statistically different. However, the timing of the numerical separation of the lactation curves coincided with the significant changes in BCS, IGF-1, and leptin. Results indicate a positive effect of ghrelin infusion on lipolysis. Further

  19. Summaries of FY 1994 geosciences research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-12-01

The Geosciences Research Program is directed by the Department of Energy's (DOE's) Office of Energy Research (OER) through its Office of Basic Energy Sciences (OBES). Activities in the Geosciences Research Program are directed toward the long-term fundamental knowledge of the processes that transport, modify, concentrate, and emplace (1) the energy and mineral resources of the earth and (2) the energy byproducts of man. The Program is divided into five broad categories: Geophysics and earth dynamics; Geochemistry; Energy resource recognition, evaluation, and utilization; Hydrogeology and exogeochemistry; and Solar-terrestrial interactions. The summaries in this document, prepared by the investigators, describe the scope of the individual programs in these main areas and their subdivisions including earth dynamics, properties of earth materials, rock mechanics, underground imaging, rock-fluid interactions, continental scientific drilling, geochemical transport, solar/atmospheric physics, and modeling, with emphasis on the interdisciplinary areas.

  20. A Simple Technique for Jejunojejunal Revision in Laparoscopic Roux-en-Y Gastric Bypass.

    PubMed

    Spivak, Hadar

    2015-12-01

The lengths of the bypassed segments in the initial laparoscopic Roux-en-Y gastric bypass (LRYGB) are usually a matter of the individual surgeon's routine. The literature is inconclusive about the association between Roux limb length and weight loss or malabsorption (Stefanidis et al. Obes Surg. 21(1):119-24, 2011); (Rawlins et al. Surg Obes Relat Dis. 7(1):45-9, 2011). However, jejunojejunal anastomosis (JJ) "redo" and Roux limb length revision could be considered for patients with a very short Roux limb and weight-loss failure, or for a short common channel and malabsorption. Complications of the JJ may also require revision. In over 1000 LRYGBs since 2001, eight patients required JJ revision for failure to lose enough weight (n = 6), malabsorption (n = 1), or stricture (n = 1). Instead of completely taking down the JJ, a simple technique evolved to keep the enteric limb in continuity. In a following step, the biliopancreatic limb is transected from the JJ and reconnected proximally (for malabsorption) or distally (for weight-loss failure). In this video, the laparoscopic technique for JJ revision and relocation of the biliopancreatic limb is presented step by step. The procedure takes 40-60 min to perform using four trocars, and the hospital stay was 1-2 nights. No complications occurred during the procedures or the postoperative period. Laparoscopic revision of the JJ is feasible and safe and should be part of the surgeon's options for the long-term management of patients after LRYGB.

  1. A Mixing Theory for the Interaction between Dissipative Flows and Nearly-Isentropic Streams

    DTIC Science & Technology

    1952-01-15

[Scanned record; only fragments are legible:] Aeronautical Engineering Laboratory, January 15, 1952 ... incident oblique shock; airfoil chord; airfoil thickness at trailing edge; Reynolds number ... the von Karman momentum integral for the dissipative flow region, where, however, this internal flow is treated as quasi- ...

  2. Death and consciousness--an overview of the mental and cognitive experience of death.

    PubMed

    Parnia, Sam

    2014-11-01

    Advances in resuscitation science have indicated that, contrary to perception, death by cardiorespiratory criteria can no longer be considered a specific moment but rather a potentially reversible process that occurs after any severe illness or accident causes the heart, lungs, and brain to stop functioning. The resultant loss of vital signs of life (and life processes) is used to declare a specific time of death by physicians globally. When medical attempts are made to reverse this process, it is commonly referred to as cardiac arrest; however, when these attempts do not succeed or when attempts are not made, it is called death by cardiorespiratory criteria. Thus, biologically speaking, cardiac arrest and death by cardiorespiratory criteria are synonymous. While resuscitation science has provided novel opportunities to reverse death by cardiorespiratory criteria and treat the potentially devastating consequences of the resultant postresuscitation syndrome, it has also inadvertently provided intriguing insights into the likely mental and cognitive experience of death. Recollections reported by millions of people in relation to death, so-called out-of-body experiences (OBEs) or near-death experiences (NDEs), are often-discussed phenomena that are frequently considered hallucinatory or illusory in nature; however, objective studies on these experiences are limited. To date, many consistent themes corresponding to the likely experience of death have emerged, and studies have indicated that the scientifically imprecise terms of NDE and OBE may not be sufficient to describe the actual experience of death. While much remains to be discovered, the recalled experience surrounding death merits a genuine scientific investigation without prejudice. © 2014 New York Academy of Sciences.

  3. Isospin flip as a relativistic effect: NN interactions

    NASA Technical Reports Server (NTRS)

    Buck, W. W.

    1993-01-01

Results are presented of an analytic relativistic calculation of an OBE nucleon-nucleon (NN) interaction employing the Gross equation. The calculation consists of a non-relativistic reduction that keeps the negative energy states. The result is compared to purely non-relativistic OBEP results and the relativistic effects are separated out. One finds that the resulting relativistic effects are expressible as a power series in τ₁·τ₂ that agrees, qualitatively, with NN scattering. Upon G-parity transforming this NN potential, one obtains, qualitatively, a short-range nucleon-antinucleon spectroscopy in which the S-states are the lowest states.

  4. Coordinated Research Program in Pulsed Power Physics.

    DTIC Science & Technology

    1985-12-20

[Scanned report documentation page; only fragments are legible:] Program element 61102F, project 2301, task A7 ... Major B. Smith, 202/767-4908, AFOSR/NP ... fields at localized points in pulsed power systems. In addition, as in previous years, new projects will be added as new ideas are generated. Funds for

  5. The presence of maladaptive eating behaviors after bariatric surgery in a cross sectional study: importance of picking or nibbling on weight regain.

    PubMed

    Conceição, Eva; Mitchell, James E; Vaz, Ana R; Bastos, Ana P; Ramalho, Sofia; Silva, Cátia; Cao, Li; Brandão, Isabel; Machado, Paulo P P

    2014-12-01

Maladaptive eating behaviors after bariatric surgery are thought to compromise weight outcomes, but little is known about their frequency over time. This study investigates the presence of subjective binge eating (SBE), objective binge eating (OBE), and picking and nibbling (P&N) before surgery and at different postoperative time periods, and their association with weight outcomes. This cross-sectional study assessed a group of patients before surgery (n=61) and three post-operative groups: 1) 90 patients (27 with laparoscopic adjustable gastric band (LAGB) and 63 with laparoscopic Roux-en-Y gastric bypass (LRYGB)) assessed during their 6-month follow-up medical appointment; 2) 96 patients (34 LAGB and 62 LRYGB) assessed during their one-year follow-up medical appointment; and 3) 127 patients (62 LAGB and 55 LRYGB) assessed during their second-year follow-up medical appointment. Assessment included the Eating Disorders Examination and a set of self-report measures. In the first ten months after surgery, fewer participants reported maladaptive eating behaviors. No OBEs were reported at 6 months. SBE episodes were present in all groups. P&N was the most frequently reported eating behavior. Eating behavior (P&N) was significantly associated with weight regain, and non-behavioral variables were associated with weight loss. This is a cross-sectional study, which greatly limits the interpretation of outcomes; no causal associations can be made. However, a subgroup of postoperative patients report eating behaviors that are associated with greater weight regain. The early detection of these eating behaviors might be important in the prevention of problematic outcomes after bariatric surgery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Software Engineering Education.

    DTIC Science & Technology

    1987-05-01

[Scanned record; only fragments are legible:] ... An additional refinement of the curriculum content can be ... material taught might also be taught in courses whose ... descriptions of possible courses. Bloom [Bloom56] has defined a taxonomy of educational ... Software System Classes. Several different classes can ...

  7. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.

  8. Deuteron Compton scattering below pion photoproduction threshold

    NASA Astrophysics Data System (ADS)

    Levchuk, M. I.; L'vov, A. I.

    2000-07-01

    Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data.

  9. Society News: Queen honours Fellows; The Society and legacies; Thesis prizes; Lectures on laptops; Stonehenge story

    NASA Astrophysics Data System (ADS)

    2007-08-01

    The Queen's Birthday Honours list announced on 16 June contained some familiar names from astronomy. Prof. Mark Bailey (1) of Armagh Observatory, currently a Vice-President of the RAS, was awarded an MBE and Dr Heather Couper (2), former President of the British Astronomical Association, a CBE. Prof. Nigel Mason (3) of the Open University and inaugural Director of the Milton Keynes Science Festival received an OBE. Prof. Jocelyn Bell-Burnell (4), President of the RAS from 2002-2004, was awarded a DBE - and an Honorary Doctorate from Harvard University. In addition, Prof. Lord Rees (5), Astronomer Royal, president of the Royal Society and President of the RAS from 1992-1994, was appointed to the Order of Merit.

  10. Continental Scientific Drilling Program Data Base

    NASA Astrophysics Data System (ADS)

    Pawloski, Gayle

    The Continental Scientific Drilling Program (CSDP) database at Lawrence Livermore National Laboratory is a central repository cataloguing information from United States drill holes. Most holes have been drilled or proposed by various federal agencies. Some holes have been commercially funded. The database is funded by the Office of Basic Energy Sciences of the Department of Energy (OBES/DOE) to serve the entire scientific community. Through unrestricted use of the database, it is possible to reduce drilling costs and maximize the scientific value of current and planned efforts of federal agencies and industry by offering the opportunity for add-on experiments and supplementing knowledge with additional information from existing drill holes.

  11. Linking Outdoor Recreation and Economic Development: A Feasibility Assessment of the Obed Wild and Scenic River, Tennessee

    Treesearch

    Charles B. Sims; Donald G. Hodges; Del Scruggs

    2004-01-01

    Rural economies in many parts of the United States have undergone significant changes over the past two decades. Faltering economies historically based on traditional economic sectors like agriculture and manufacturing are transitioning to retail and service sectors to support growth. One example of such an industry is resource-based recreation and tourism. Tourists...

  12. Development of DSRC device and communication system performance measures recommendations for DSRC OBE performance and security requirements.

    DOT National Transportation Integrated Search

    2016-05-22

    This report presents recommendations for minimum DSRC device communication performance and security requirements to ensure effective operation of the DSRC system. The team identified recommended DSRC communications requirements aligned to use cases, ...

  13. Geoffrey Layton Slack OBE (Mil), CBE, TD, BDS DDS, FDSRCS, FDS Glas, FFDRCSI, Dip Bact (1912-1991).

    PubMed

    Gelbier, Stanley

    2014-02-01

    It is with some pride that the author worked in Geoffrey Slack's department from 1963 to 1967 and even retained a working relationship with him after that time. Slack was Professor of Dental Surgery (1959-1976) and later Professor of Community Dental Health (1976-1977) at The London Hospital Medical College, within the University of London. The change in titles came about as a result of recognition of his contribution to developments in public health and community dental care and services, for many of which he was directly responsible. He was Dental Dean from 1965 until 1969. Upon retirement in 1977 he became Emeritus Professor. In addition, he was Dean of the Faculty of Dental Surgery at the Royal College of Surgeons of England from 1974 to 1977.

  14. Two-dimensional over-all neutronics analysis of the ITER device

    NASA Astrophysics Data System (ADS)

    Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi

    1993-07-01

    The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). Two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculation code CINAC was employed for calculations of the exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes design specifics such as the pebble bed Li2O/Be layered blanket, the thin double wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, and the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug. Some alternative design options, such as a water-rich shielding blanket instead of the lithium-bearing one, and an additional biological shield plug in the top zone between the poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort was focused on analysis of the results, aimed at obtaining recommendations for improving the ITER CDA design.

  15. Three cases of near death experience: Is it physiology, physics or philosophy?

    PubMed

    Purkayastha, Moushumi; Mukherjee, Kanchan Kumar

    2012-07-01

    Near-death experiences (NDEs) following severe head injury, critical illness, coma, and suicide attempts have been reported. The purpose of this study was to examine why a few patients report an NDE after survival, and whether cultural and socio-demographic factors may play a role. The details of 3 cases of patients who reported an NDE are presented here. Several theories regarding the reasons for the various components of the experiences are discussed, with a brief review of the literature. All three patients reported an out-of-body experience (OBE). All three patients initially remembered the events that took place during this time, but after some time none of the three could recall exactly what had happened. Whether these are only hallucinations or proof of an 'afterlife' will remain debatable until more data are communicated.

  16. Changing times, similar challenges.

    PubMed

    Baillie, Jonathan

    2013-11-01

    With IHEEM celebrating its 70th Anniversary this month, HEJ editor, Jonathan Baillie, recently met the Institute's oldest surviving Past-President, Lawrence Turner OBE, who, having in 1964 established a small engineering business producing some of the NHS's earliest nurse call systems from the basement of his three-storey West Midlands home, has since seen the company, Static Systems Group, grow to become one of the U.K. market-leaders in its field. The Institute's President from 1979-1981, he looked back, during a fascinating two-hour discussion, at his time in the role, talked through some of the key technological and other changes he has seen in the past five decades, reflected on an interesting and varied career, and considered some of the very different current-day challenges that today's IHEEM President, and the Institute as a whole, face.

  17. Determination of stability and control derivatives from the NASA F/A-18 HARV from flight data using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1995-01-01

    This report is a compilation of PID (parameter identification) results for both longitudinal and lateral directional analysis that were completed during Fall 1994. It had earlier been established that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was due to the fact that these maneuvers were designed with the goal of minimizing any lateral directional motion during longitudinal maneuvers and vice versa. This allows for greater simplification in the aerodynamic model as far as coupling between the longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze these data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, such that more accurate estimates of classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot and phidot were used. Enclosed with this document is the full set of longitudinal and lateral directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.
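    The estimation idea behind such coefficient plots can be illustrated in miniature. This is not the pEst output-error method, which iterates on a nonlinear aircraft model; it is a one-parameter linear least-squares sketch with a Cramer-Rao bound, and every signal and value below is invented.

    ```python
    # Hedged sketch: estimate a single stability derivative C from noisy
    # coefficient data y = C * alpha + noise, then compute the 1-sigma
    # Cramer-Rao lower bound from the (scalar) information matrix.
    import random

    random.seed(1)  # deterministic synthetic data

    # Hypothetical regressor (angle of attack, deg) and true derivative.
    alpha = [0.5 * i for i in range(40)]
    C_true = -0.12
    y = [C_true * a + random.gauss(0.0, 0.02) for a in alpha]

    # Closed-form least squares for the one-parameter model y = C * alpha.
    sxx = sum(a * a for a in alpha)
    sxy = sum(a * b for a, b in zip(alpha, y))
    C_hat = sxy / sxx

    # Residual variance and Cramer-Rao bound on the estimate's std dev.
    resid = [b - C_hat * a for a, b in zip(alpha, y)]
    sigma2 = sum(r * r for r in resid) / (len(y) - 1)
    cr_bound = (sigma2 / sxx) ** 0.5
    ```

    In the report, the Cramer-Rao bounds plotted alongside each coefficient play the same role as `cr_bound` here: a lower limit on the scatter of the estimate given the information content of the maneuver.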

  18. [Epilepsy and psychic seizures].

    PubMed

    Fukao, Kenjiro

    2006-01-01

    Various psychic symptoms have been found as ictal manifestations in epileptic patients. They are classified as psychic seizures within simple partial seizures, and subclassified into affective, cognitive, dysmnesic seizures and so on, although the subclassification is not yet satisfactory and almost nothing is known about their relationships with normal brain functions. In this presentation, the speaker selected ictal fear, déjà vu and out-of-body experience (OBE) among them and suggested that studies of these symptoms could contribute uniquely to the progress of cognitive neuroscience, presenting some results from the research and case studies in which he had been engaged. Psychic seizures are prone to be missed or misdiagnosed unless they are treated by psychiatrists with sufficient knowledge of and experience in epilepsy care, because they are subjective symptoms that are diverse and subtle, even though they have some characteristic features as ictal symptoms.

  19. The second me: Seeing the real body during humanoid robot embodiment produces an illusion of bi-location.

    PubMed

    Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane

    2016-11-01

    Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot head movements and walk, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Indirect double photoionization of water

    NASA Astrophysics Data System (ADS)

    Rescigno, T. N.; Sann, H.; Orel, A. E.; Dörner, R.

    2011-05-01

    The vertical double ionization thresholds of small molecules generally lie above the dissociation limits corresponding to formation of two singly charged fragments. This gives the possibility of populating singly charged molecular ions by photoionization in the Franck-Condon region at energies below the lowest dication state, but above the dissociation limit into two singly charged fragment ions. This process can produce a superexcited neutral fragment that autoionizes at large internuclear separation. We study this process in water, where absorption of a photon produces an inner-shell excited state of H2O+ that fragments to H++OH*. The angular distribution of secondary electrons produced by OH* when it autoionizes produces a characteristic asymmetric pattern that reveals the distance, and therefore the time, at which the decay takes place. LBNL, Berkeley, CA, J. W. Goethe Universität, Frankfurt, Germany. Work performed under auspices of US DOE and supported by OBES, Div. of Chemical Sciences.

  1. R and D Evaluation Workshop report, U.S. Department of Energy, Office of Energy Research, September 7--8, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, G.

    1995-10-30

    The objective of the workshop was to promote discussions between experts and research managers on developing approaches for assessing the impact of DOE's basic energy research upon the energy mission, applied research, technology transfer, the economy, and society. The purpose of this impact assessment is to demonstrate results and improve ER research programs in this era when basic research is expected to meet changing national economic and social goals. The questions addressed were: (1) By what criteria and metrics does Energy Research measure performance and evaluate its impact on the DOE mission and society while maintaining an environment that fosters basic research? (2) What combination of evaluation methods best applies to assessing the performance and impact of OBES basic research? The focus will be upon the following methods: case studies, user surveys, citation analysis, the TRACES approach, return on DOE investment (ROI)/econometrics, and expert panels. (3) What combination of methods and specific rules of thumb can be applied to capture impacts along the spectrum from basic research to products and societal impacts?

  2. Our unacknowledged ancestors: dream theorists of antiquity, the middle ages, and the renaissance.

    PubMed

    Rupprecht, C S

    1990-06-01

    Exploring the dream world from a modern, or post-modern, perspective, especially through the lens of contemporary technologies, often leads us as researchers to see ourselves as engaged in a new and revolutionary discourse. In fact, this self-image is a profoundly ahistorical one, because it ignores the contributions of ancient, medieval and Renaissance oneirologists who wrote extensively, albeit in different terms and images, of lucidity, precognition, day residue, wish fulfillment, incubation, problem solving, REM, OBE, and the collective unconscious. There are also analogues in these early accounts to anxiety, recurrent, mirror, telepathic, shared, flying, and death dreams. Dream interpretation through music, analysis of the dream as narrative, and sophisticated theories about memory, language and symbolization are all part of the tradition. Further, early texts pose many issues in sleep and dream research which are not currently being pursued. We dream workers of the late twentieth century should therefore fortify ourselves with knowledge of the oneiric past as one important way to enhance our dream work in the twenty-first century.

  3. Summaries of FY 1996 geosciences research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-12-01

    The Geosciences Research Program is directed by the Department of Energy's (DOE's) Office of Energy Research (OER) through its Office of Basic Energy Sciences (OBES). Activities in the Geosciences Research Program are directed toward building the long-term fundamental knowledge base necessary to provide for energy technologies of the future. Future energy technologies and their individual roles in satisfying the nation's energy needs cannot be easily predicted. It is clear, however, that these future energy technologies will involve consumption of energy and mineral resources and generation of technological wastes. The earth is a source for energy and mineral resources and is also the host for wastes generated by technological enterprise. Viable energy technologies for the future must contribute to a national energy enterprise that is efficient, economical, and environmentally sound. The Geosciences Research Program emphasizes research leading to fundamental knowledge of the processes that transport, modify, concentrate, and emplace (1) the energy and mineral resources of the earth and (2) the energy by-products of man.

  4. A Tale of Two ObesCities: The Role of Municipal Governance in Reducing Childhood Obesity in New York City and London

    PubMed Central

    Libman, Kimberly; O’Keefe, Eileen

    2010-01-01

    As rates of childhood obesity and overweight rise around the world, researchers and policy makers seek new ways to reverse these trends. Given the concentration of the world’s population, income inequalities, unhealthy diets, and patterns of physical activity in cities, urban areas bear a disproportionate burden of obesity. To address these issues, in 2008, researchers from the City University of New York and London Metropolitan University created the Municipal Responses to Childhood Obesity Collaborative. The Collaborative examined three questions: What role has city government played in responding to childhood obesity in each jurisdiction? How have municipal governance structures in each city influenced its capacity to respond effectively? How can policy and programmatic interventions to reduce childhood obesity also reduce the growing socioeconomic and racial/ethnic inequities in its prevalence? Based on a review of existing initiatives in London and New York City, the Collaborative recommended 11 broad strategies by which each city could reduce childhood obesity. These recommendations were selected because they can be enacted at the municipal level; will reduce socioeconomic and racial/ethnic inequalities in obesity; are either well supported by research or are already being implemented in one city, demonstrating their feasibility; build on existing city assets; and are both green and healthy. PMID:20811951

  5. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    […] information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored that represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas, for creating multiple overlaying sections for the comparison of slopes, and for computing distances to the infrastructure or to the next receiving drainage, as well as queries on landslide magnitudes, distribution and clustering, and on potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S., 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
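    One of the query types mentioned above, the distance from a landslide feature to the nearest receiving drainage, reduces to simple geometry. In the PostGIS database this would be a single ST_Distance call; the stdlib-only sketch below, with made-up coordinates, shows the underlying computation.

    ```python
    # Toy sketch: shortest distance from a landslide scarp point to a
    # drainage polyline, the geometry behind an ST_Distance query.
    # All coordinates are invented (notionally metres in a local grid).
    import math

    def point_segment_distance(p, a, b):
        """Shortest distance from point p to segment a-b (2-D tuples)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0.0:
            return math.hypot(px - ax, py - ay)
        # Projection parameter, clamped so the foot stays on the segment.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy)

    def distance_to_polyline(p, vertices):
        """Minimum distance from p to a polyline given as a vertex list."""
        return min(point_segment_distance(p, a, b)
                   for a, b in zip(vertices, vertices[1:]))

    # Hypothetical scarp point and drainage polyline.
    scarp = (120.0, 40.0)
    drainage = [(0.0, 0.0), (100.0, 0.0), (200.0, 50.0)]
    d = distance_to_polyline(scarp, drainage)
    ```

    In practice the spatial index and geometry functions of the database do this over every landslide object at once, which is the point of keeping the inventory in PostGIS rather than in flat files.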

  6. Heritability of the somatotype components in Biscay families.

    PubMed

    Rebato, E; Jelenkovic, A; Salces, I

    2007-01-01

    The anthropometric somatotype is a quantitative description of body shape and composition. Familial studies indicate the existence of a familial resemblance for this phenotype, and they suggest a substantial action of genetic factors on this aggregation. The aim of this study is to examine the degree of familial resemblance of the somatotype components and of a factor of shape in a sample of Biscay nuclear families (Basque Country, Spain). One thousand three hundred and thirty nuclear families were analysed. The anthropometric somatotype components [Carter, J.E.L., Heath, B.H., 1990. Somatotyping: Development and Applications. Cambridge University Press, Cambridge, p. 503] were computed. Each component was adjusted for the other two through a stepwise multiple regression, and also fitted through the LMS method [Cole, T., 1988. Fitting smoothed centile curves to reference data. J. Roy. Stat. Soc. 151, 385-418] in order to eliminate age, sex and generation effects. The three raw components were introduced into a PCA, from which a shape factor (PC1) was extracted for each generation. The correlation analysis was performed with the SEGPATH package [Province, M.A., Rao, D.C., 1995. General purpose model and computer program for combined segregation and path analysis (SEGPATH): automatically creating computer programs from symbolic language model specifications. Genet. Epidemiol. 12, 203-219]. A general model of transmission and nine reduced models were tested. Maximal heritability was estimated with the formula of [Rice, T., Warwick, D.E., Gagnon, J., Bouchard, C., Leon, A.S., Skinner, J.S., Wilmore, J.H., Rao, D.C., 1997. Familial resemblance for body composition measures: the HERITAGE Family Study. Obes. Res. 5, 557-562]. The correlations were higher between offspring than between parents and offspring, and a significant resemblance between mating partners existed. Maximum heritabilities were 55%, 52% and 46% for endomorphy, mesomorphy and ectomorphy, respectively, and 52% for PC1.

  7. Low temperature specific heat of frustrated antiferromagnet HoInCu4

    NASA Astrophysics Data System (ADS)

    Weickert, Franziska; Fritsch, Veronika; Bambaugh, Ryan; Sarrao, John; Thompson, Joe D.; Movshovich, Roman

    2014-03-01

    We present low temperature specific heat measurements of single crystal HoInCu4, down to 35 mK and in magnetic fields up to 12 Tesla. Ho atoms are arranged in an FCC lattice of edge-sharing tetrahedra, and undergo antiferromagnetic ordering at TN = 0.76 K, with a frustration parameter f = -ΘCW/TN of 14.3. Magnetic AF order is suppressed in a field H0 ~ 4 T. The low temperature Schottky anomaly due to Ho evolves smoothly as a function of field through H0 and TN. The peak value of the anomaly remains roughly constant from 0 T to 12 T, and the temperature of the anomaly's peak remains constant at TSch ~ 170 mK. Work supported by OBES, MSE division.
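    The fixed peak temperature quoted above can be illustrated with the textbook two-level Schottky specific heat, C(T) = R (D/T)^2 e^(D/T) / (1 + e^(D/T))^2, whose maximum sits at T close to D/2.4. The splitting D below is back-solved from the 170 mK peak purely for illustration; the real Ho level scheme has more than two levels.

    ```python
    # Hedged illustration: two-level Schottky specific heat with a
    # made-up level splitting chosen so the peak falls at 170 mK.
    import math

    R = 8.314  # gas constant, J/(mol K)

    def schottky_c(T, delta):
        """Two-level Schottky specific heat, J/(mol K); delta, T in kelvin."""
        x = delta / T
        ex = math.exp(x)
        return R * x * x * ex / (1.0 + ex) ** 2

    # Peak occurs near T = delta / 2.4, so a 170 mK peak implies an
    # effective splitting of roughly 0.41 K (illustrative only).
    delta = 2.4 * 0.170
    temps = [0.001 * t for t in range(50, 2000)]  # 50 mK to 2 K grid
    peak = max(schottky_c(T, delta) for T in temps)
    ```

    The peak height of this two-level form is about 0.44 R regardless of the splitting, which is why a field-independent peak value is the signature of a Schottky anomaly rather than of critical behavior.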

  8. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented, from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  10. Development of technical skills in Electrical Power Engineering students: A case study of Power Electronics as a Key Course

    NASA Astrophysics Data System (ADS)

    Hussain, I. S.; Azlee Hamid, Fazrena

    2017-08-01

    Technical skills are one of the attributes an engineering student must attain by the time of graduation, as recommended by the Engineering Accreditation Council (EAC). This paper describes the development of technical skills, Programme Outcome (PO) number 5, in students taking the Bachelor of Electrical Power Engineering (BEPE) programme in Universiti Tenaga Nasional (UNITEN). Seven courses are identified to address technical skills development. The course outcomes (CO) of the courses are designed to instill the relevant technical skills through suitable laboratory activities. Formative and summative assessments are carried out to gauge students' acquisition of the skills. Finally, to measure the attainment of the technical skills, the key-course concept is used. The concept has been implemented since 2013, focusing on improvement of the programme instead of the cohort. With the PO attainment analysis method, three different levels of PO attainment can be calculated: from the programme level down to the course and student levels. In this paper, the attainment of the courses mapped to PO5 is measured. It is shown that the Power Electronics course, which is the key course for PO5, has a strong attainment of above 90%. PO5 of the other six courses is also achieved. In conclusion, by embracing outcome-based education (OBE), the BEPE programme has a sound method to develop technical psychomotor skills in its degree students.
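    Course-level attainment figures of the kind quoted above are typically a share of students meeting a threshold on the PO-mapped assessments. The sketch below shows one plausible computation; the threshold, the equal weighting, and the marks are all hypothetical, not UNITEN's actual rubric.

    ```python
    # Simplified sketch of a course-level PO attainment percentage:
    # a student attains the PO if the average of the PO-mapped marks
    # reaches a pass threshold; course attainment is the share of
    # students who do. Threshold and marks are made up.

    PASS_MARK = 50.0  # hypothetical threshold, percent

    def po_attainment(marks_per_student):
        """marks_per_student: list of lists of PO-mapped marks (percent)."""
        attained = sum(
            1 for marks in marks_per_student
            if sum(marks) / len(marks) >= PASS_MARK
        )
        return 100.0 * attained / len(marks_per_student)

    # Hypothetical cohort of five students, two PO5 assessments each.
    cohort = [[80, 70], [55, 60], [40, 45], [90, 85], [65, 50]]
    rate = po_attainment(cohort)  # percent of students attaining the PO
    ```

    Programme-level attainment then aggregates such course figures, which is how the three levels (student, course, programme) in the analysis method relate to one another.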

  11. Near-death experiences in cardiac arrest survivors.

    PubMed

    French, Christopher C

    2005-01-01

    Near-death experiences (NDEs) have become the focus of much interest in the last 30 years or so. Such experiences can occur both when individuals are objectively near to death and also when they simply believe themselves to be. The experience typically involves a number of different components including a feeling of peace and well-being, out-of-body experiences (OBEs), entering a region of darkness, seeing a brilliant light, and entering another realm. NDEs are known to have long-lasting transformational effects upon those who experience them. An overview is presented of the various theoretical approaches that have been adopted in attempts to account for the NDE. Spiritual theories assume that consciousness can become detached from the neural substrate of the brain and that the NDE may provide a glimpse of an afterlife. Psychological theories include the proposal that the NDE is a dissociative defense mechanism that occurs in times of extreme danger or, less plausibly, that the NDE reflects memories of being born. Finally, a wide range of organic theories of the NDE has been put forward including those based upon cerebral hypoxia, anoxia, and hypercarbia; endorphins and other neurotransmitters; and abnormal activity in the temporal lobes. Finally, the results of studies of NDEs in cardiac arrest survivors are reviewed and the implications of these results for our understanding of mind-brain relationships are discussed.

  12. Laser Induced Fluorescence Spectroscopy of Jet-Cooled CaOCa

    NASA Astrophysics Data System (ADS)

    Sullivan, Michael N.; Frohman, Daniel J.; Heaven, Michael; Fawzy, Wafaa M.

    2016-06-01

    The group IIA metals have stable hypermetallic oxides of the general form MOM. Theoretical interest in these species is associated with the multi-reference character of the ground states. It is now established that the ground states can be formally assigned to the M^+O^{2-}M^+ configuration, which leaves two electrons in orbitals that are primarily metal-centered ns orbitals. Hence the MOM species are diradicals with very small energy spacings between the lowest energy singlet and triplet states. Previously, we have characterized the lowest energy singlet transition (1Σu+ ← X1Σg+) of BeOBe. In this study we obtained the first electronic spectrum of CaOCa. Jet-cooled laser induced fluorescence spectra were recorded for multiple bands that occurred within the 14,800 - 15,900 cm-1 region. Most of the bands exhibited simple P/R branch rotational line patterns that were blue-shaded. Only even rotational levels were observed, consistent with the expected X1Σg+ symmetry of the ground state (40Ca has zero nuclear spin). A progression of excited bending modes was evident in the spectrum, indicating that the transition is to an upper state that has a bent equilibrium geometry. Molecular constants were extracted from the rovibronic bands using PGOPHER. The experimental results and interpretation of the spectrum, which was guided by the predictions of electronic structure calculations, will be presented.
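    The even-J, P/R-branch pattern described above follows from the standard line-position formula nu = nu0 + (B' + B'')m + (B' - B'')m^2, with m = J''+1 for the R branch and m = -J'' for the P branch, and with odd J'' absent for a zero-nuclear-spin 1Σg+ ground state. The constants below are invented, not the fitted CaOCa values.

    ```python
    # Sketch with invented constants: P/R-branch line positions for a
    # band in which only even ground-state J'' levels exist. With the
    # upper-state B larger than the lower-state B the quadratic term is
    # positive and both branches shade to the blue.

    def line_positions(nu0, B_lower, B_upper, j_max):
        """Return {('R' or 'P', J''): wavenumber} for even J'' only."""
        lines = {}
        for j in range(0, j_max + 1, 2):  # even J'' only (nuclear spin 0)
            m = j + 1                     # R branch: J'' -> J'' + 1
            lines[("R", j)] = (nu0 + (B_upper + B_lower) * m
                               + (B_upper - B_lower) * m * m)
            if j >= 2:                    # P branch: J'' -> J'' - 1
                m = -j
                lines[("P", j)] = (nu0 + (B_upper + B_lower) * m
                                   + (B_upper - B_lower) * m * m)
        return lines

    # Hypothetical constants in cm^-1 (not fitted CaOCa values).
    spectrum = line_positions(nu0=15000.0, B_lower=0.10, B_upper=0.11, j_max=6)
    ```

    Fitting observed line positions to this form (as PGOPHER does, with centrifugal distortion added) is how the rotational constants are extracted from each band.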

  13. Laser Induced Fluorescence Spectroscopy of Jet-Cooled MgOMg

    NASA Astrophysics Data System (ADS)

    Sullivan, Michael N.; Frohman, Daniel J.; Heaven, Michael; Fawzy, Wafaa M.

    2017-06-01

    The group IIA metals have stable hypermetallic oxides of the general form MOM. Theoretical interest in these species is associated with the multi-reference character of the ground states. It is now established that the ground states can be formally assigned to the M^+O^{2-}M^+ configuration, which leaves two electrons in orbitals that are primarily metal-centered ns orbitals. Hence the MOM species are diradicals with very small energy spacings between the lowest energy singlet and triplet states. Previously, we have characterized the lowest energy singlet transition (1Σu+ ← 1Σg+) of BeOBe. Preliminary data for the first electronic transition of the isovalent species, CaOCa, was presented previously (71st ISMS, talk RI10). We now report the first electronic spectrum of MgOMg. Jet-cooled laser induced fluorescence spectra were recorded for multiple bands that occurred within the 21,000 - 24,000 cm^-1 range. Most of the bands exhibited simple P/R branch rotational line patterns that were blue-shaded. Only even rotational levels were observed, consistent with the expected X1Σg+ symmetry of the ground state (24Mg has zero nuclear spin). Molecular constants were extracted from the rovibronic bands using PGOPHER. The experimental results and interpretation of the spectrum, which was guided by the predictions of electronic structure calculations, will be presented.

  14. Models and role models.

    PubMed

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme of our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine that lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized in the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that fluoride may not be sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Unlike studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of these processes could also be evaluated. With these separate models in place, we suggest designing computer models to encompass the available information. Models, but also role models, are of the utmost importance in advancing and guiding research and researchers. 2015 S. Karger AG, Basel

  15. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and the modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They supported the relationships they specified with accurate and convincing evidence. Factors (i.e., variables) in the expert models were clustered and represented by specialized technical terms. Based on these findings, we make suggestions for improving model-based science teaching and learning using Model-It.

  16. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  17. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  18. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
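    The Lorenz-96 test bed mentioned in this abstract is easy to reproduce in outline. The sketch below integrates Lorenz-96 with a slowly varying forcing F standing in for the hidden parameter; it illustrates only the test setup, not the authors' Bayesian filtering or diffusion forecast machinery, and the sinusoidal forcing schedule is an invented assumption.

```python
import math

# Minimal Lorenz-96 test bed: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1}
# - x_i + F, periodic in i, integrated with classical RK4. The forcing
# F plays the role of the hidden, time-evolving parameter.

def l96_rhs(x, F):
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + F
            for i in range(n)]            # negative indices wrap (periodic)

def rk4_step(x, F, dt):
    k1 = l96_rhs(x, F)
    k2 = l96_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], F)
    k3 = l96_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], F)
    k4 = l96_rhs([xi + dt * ki for xi, ki in zip(x, k3)], F)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [8.0] * 40
x[0] += 0.01                              # perturbation to trigger chaos
for step in range(500):
    F = 8.0 + math.sin(0.01 * step)       # hidden, slowly varying parameter
    x = rk4_step(x, F, dt=0.01)
print(max(x), min(x))                     # trajectory stays bounded
```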

  19. Global Carbon Cycle Modeling in GISS ModelE2 GCM

    NASA Astrophysics Data System (ADS)

    Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.

    2014-12-01

    Consistent and accurate modeling of the global carbon cycle remains one of the main challenges for Earth System Models. The NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete global carbon cycle algorithm, consisting of three integrated components: the Ent Terrestrial Biosphere Model (Ent TBM), an ocean biogeochemistry module, and an atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and of Farquhar and von Caemmerer, and the stomatal conductance model of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present results for pre-industrial equilibrium and modern transient simulations and provide comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.

  20. Target Scattering Metrics: Model-Model and Model-Data Comparisons

    DTIC Science & Technology

    2017-12-13

    measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for

  2. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
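    Of the four steps listed here (fold assignment, target-template alignment, model building, model evaluation), the evaluation step is the easiest to illustrate in isolation. The toy sketch below computes a Cα RMSD between a candidate model and a reference structure, assuming the two coordinate sets are already superposed; the coordinates are made up, and this is not MODELLER's own scoring (MODELLER evaluates models with statistical potentials such as DOPE).

```python
import math

# Toy model-evaluation step: Calpha RMSD between a candidate comparative
# model and a reference structure. Assumes the coordinate sets are
# already optimally superposed; coordinates below are invented.

def ca_rmsd(model, reference):
    """Root-mean-square deviation over paired (x, y, z) coordinates."""
    assert len(model) == len(reference)
    sq = sum((mx - rx) ** 2 + (my - ry) ** 2 + (mz - rz) ** 2
             for (mx, my, mz), (rx, ry, rz) in zip(model, reference))
    return math.sqrt(sq / len(model))

model = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.2, 0.0)]
reference = [(0.0, 0.0, 0.0), (1.5, 0.1, 0.0), (3.0, 0.0, 0.0)]
print(round(ca_rmsd(model, reference), 3))   # -> 0.129
```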

  3. [Bone remodeling and modeling/mini-modeling].

    PubMed

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, adapting structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling, the replacement of old bone with new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become active osteoblasts and then deposit new bone onto the old bone without intervening osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts form over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and of modeling, including mini-modeling, are introduced.

  4. Vector models and generalized SYK models

    DOE PAGES

    Peng, Cheng

    2017-05-23

    Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  5. Beyond body experiences: phantom limbs, pain and the locus of sensation.

    PubMed

    Wade, Nicholas J

    2009-02-01

    the grip of stimulus-based theories of perception. The pattern of development in theories of phantom limbs might provide a model for examining out-of-body experiences (OBEs).

  6. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Copyright © 2014 John Wiley & Sons, Inc.

  7. Geologic Framework Model Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models

  8. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  9. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
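    The idea of acknowledging the contributions of all candidate models rather than selecting one can be illustrated with the classic smoothed-AIC weighting scheme. Note this is a generic illustration, not the specific frequentist averaging method this paper proposes, and the AIC values and parameter estimates below are invented.

```python
import math

# Smoothed-AIC model averaging: each candidate model gets weight
# proportional to exp(-0.5 * delta_AIC), and the averaged estimate is
# the weight-sum of the per-model estimates. Numbers are made up.

def aic_weights(aics):
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_estimate(estimates, aics):
    return sum(w * e for w, e in zip(aic_weights(aics), estimates))

# Three candidate models giving different estimates of one parameter:
est = averaged_estimate([0.42, 0.48, 0.55], [310.2, 311.0, 314.9])
print(round(est, 3))   # a compromise between the three estimates
```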

  10. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  11. Translating building information modeling to building energy modeling using model view definition.

    PubMed

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  12. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    PubMed Central

    Kim, Jong Bum; Clayton, Mark J.; Haberl, Jeff S.

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process. PMID:25309954

  13. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need in the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.

  14. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic in science generally, and in modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, the modelers themselves, and the users of the results. This paper addresses important components of uncertainty in modeling water temperatures and discusses several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  15. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    PubMed

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  16. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies investigating the accuracy of these models can be found in the literature, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project, funded by the ESA PECS programme, is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data are present) and then taking the point where the cumulative distribution reaches an exceedance level of 10⁻⁷. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM reanalysis numerical weather model data and the ray-tracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative estimation of the residual
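    The tail-extrapolation arithmetic described in this abstract can be sketched directly: given an empirical tail point that is exceeded with probability 10⁻⁷, the sigma of a zero-mean Gaussian overbound is that tail value divided by the corresponding normal quantile. The tail value used below (0.62 m) is a hypothetical illustration chosen only to land near the 0.12 m figure the abstract quotes; it is not taken from the MOPS data, and the zero-mean Gaussian form is an assumption of this sketch.

```python
from statistics import NormalDist

# Gaussian overbounding sketch: the sigma whose one-sided tail matches
# an empirical tail point, i.e. P(X > tail_value) = exceedance.
# The 0.62 m tail point is hypothetical (see lead-in); the 1e-7
# exceedance level is the one quoted in the abstract.

def overbound_sigma(tail_value_m, exceedance):
    z = NormalDist().inv_cdf(1.0 - exceedance)   # one-sided normal quantile
    return tail_value_m / z

sigma = overbound_sigma(0.62, 1e-7)
print(round(sigma, 3))   # sigma of the overbounding distribution, meters
```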

  17. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  18. Modeling complexes of modeled proteins.

    PubMed

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  19. Metabolic network modeling with model organisms.

    PubMed

    Yilmaz, L Safak; Walhout, Albertha Jm

    2017-02-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for basic understanding of metabolism, and in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of most highly used model organisms with the emphasis on recent reconstructions. Published by Elsevier Ltd.

  20. Metabolic network modeling with model organisms

    PubMed Central

    Yilmaz, L. Safak; Walhout, Albertha J.M.

    2017-01-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for basic understanding of metabolism, and in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of most highly used model organisms with the emphasis on recent reconstructions. PMID:28088694

  1. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
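The multinomial structure of one such protocol, removal sampling, is easy to make concrete: if each remaining individual is captured with probability p on each pass, the chance of first capture on pass j is p(1−p)^(j−1). A small illustration (the analyses in the chapter use BUGS and the R package unmarked; the numbers below are arbitrary):

```python
import numpy as np

# Multinomial cell probabilities for a J-pass removal protocol:
# pi_j = p * (1 - p)**(j - 1), plus a "never captured" cell.
p, J = 0.4, 3
pi = np.array([p * (1 - p) ** (j - 1) for j in range(1, J + 1)])
pi_miss = 1.0 - pi.sum()          # probability of never being captured

# Expected multinomial counts for a local abundance of N = 100
N = 100
expected = N * np.append(pi, pi_miss)
print(pi, pi_miss, expected)
```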

  2. EasyModeller: A graphical interface to MODELLER

    PubMed Central

    2010-01-01

    Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed to develop "EasyModeller", a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861

  3. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992), and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. The pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
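The criterion-based branch of these techniques reduces to a simple weight calculation: each model's information criterion is converted to a relative weight via exp(−Δ/2), normalized to sum to one. A sketch with made-up AIC values for four candidate models:

```python
import numpy as np

# Akaike weights for criterion-based model averaging: models with
# lower AIC receive exponentially larger weight. AIC values are
# illustrative, not from the case studies in the paper.
aic = np.array([210.3, 212.1, 215.8, 224.0])
delta = aic - aic.min()            # AIC differences relative to best model
w = np.exp(-0.5 * delta)
w /= w.sum()                       # normalize so the weights sum to 1
print(w)
```

The same formula applies with BIC or KIC in place of AIC, which is one reason the different criteria can produce different model ranks.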

  4. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  5. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  6. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  7. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a significant improvement in performance over asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.

  8. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity

  9. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
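The variance decomposition at the heart of BMA can be illustrated in a few lines: given posterior model probabilities and each model's prediction mean and variance, the total variance splits into a within-model and a between-model term. All numbers below are made up for illustration:

```python
import numpy as np

# BMA variance decomposition sketch. Three candidate models with
# illustrative posterior probabilities, predicted means, and
# within-model prediction variances.
p = np.array([0.5, 0.3, 0.2])          # posterior model probabilities
mean = np.array([10.0, 12.0, 9.0])     # each model's predicted mean
var = np.array([1.0, 2.0, 1.5])        # each model's within-model variance

bma_mean = p @ mean                     # probability-weighted prediction
within = p @ var                        # expected within-model variance
between = p @ (mean - bma_mean) ** 2    # spread among model means
total = within + between
print(bma_mean, within, between, total)
```

HBMA extends this by arranging the decomposition hierarchically over a BMA tree, so each uncertain model component's contribution can be read off separately.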

  10. SUMMA and Model Mimicry: Understanding Differences Among Land Models

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.

    2016-12-01

    Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. 
SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in

  11. Modeller's attitude in catchment modelling: a comparative study

    NASA Astrophysics Data System (ADS)

    Battista Chirico, Giovanni

    2010-05-01

    Ten modellers were invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in north-east Germany for a simulation period of three years, providing them only soil texture, terrain and meteorological data. No data concerning the discharge or other sources of state variables and fluxes within the catchment were provided. Modellers did, however, have the opportunity to visit the experimental catchment and inspect aerial photos of the catchment since its initial development stage. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting the discharge of ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation were deeply influenced by their own experience from previous modelling studies. A common outcome was that modellers were mainly oriented towards applying process-based models able to exploit the available data concerning the physical properties of the catchment, which could therefore be more suitable to cope with the lack of data concerning state variables or fluxes. The second general lesson learned during this study concerned the role of dominant processes. We believed that the modelling task would be much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period. The results of the models were expected to converge, and this would have been a good starting point to proceed for a model

  12. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. A second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.

  13. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…

  14. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    PubMed Central

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  15. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
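The division of labour described above can be sketched in a few lines: a cheap proxy supplies the finite-difference Jacobian for a Gauss-Newton step, while the "expensive" original model is reserved for testing whether each parameter upgrade actually improves the fit. Both model functions below are toy stand-ins, not PEST code:

```python
import numpy as np

def original_model(p):                  # stands in for the slow model
    return np.array([p[0] ** 2 + p[1], p[0] * p[1]])

def proxy_model(p):                     # cheap analytic surrogate (slightly off)
    return np.array([p[0] ** 2 + p[1] + 0.01, p[0] * p[1]])

obs = np.array([3.0, 2.0])              # synthetic observations
p = np.array([1.5, 1.5])                # initial parameter guess
eps = 1e-6

for _ in range(30):
    r = proxy_model(p) - obs
    # Jacobian columns from perturbed proxy runs (cheap to populate)
    J = np.column_stack([
        (proxy_model(p + eps * np.eye(2)[i]) - proxy_model(p)) / eps
        for i in range(2)
    ])
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    # accept the upgrade only if the original model confirms improvement
    if (np.linalg.norm(original_model(p + step) - obs)
            < np.linalg.norm(original_model(p) - obs)):
        p = p + step

print(np.linalg.norm(original_model(p) - obs))  # small misfit after calibration
```

Because the upgrade tests use only independent original-model runs, they parallelize naturally, which is the point made in the abstract.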

  16. Modelling, teachers' views on the nature of modelling, and implications for the education of modellers

    NASA Astrophysics Data System (ADS)

    Justi, Rosária S.; Gilbert, John K.

    2002-04-01

    In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.

  17. NARSTO critical review of photochemical models and modeling

    NASA Astrophysics Data System (ADS)

    Russell, Armistead; Dennis, Robin

    Photochemical air quality models play a central role both in scientific investigation of how pollutants evolve in the atmosphere and in developing policies to manage air quality. In the past 30 years, these models have evolved from rather crude representations of the physics and chemistry affecting trace species to their current state: comprehensive, but not complete. The evolution has included advancements not only in the level of process descriptions, but also in the computational implementation, including numerical methods. As part of the NARSTO Critical Reviews, this article discusses the current strengths and weaknesses of air quality models and the modeling process. Current Eulerian models are found to represent well the primary processes impacting the evolution of trace species in most cases, though some exceptions may exist. For example, sub-grid-scale processes, such as concentrated power plant plumes, are treated only approximately. It is not apparent how much such approximations affect their results and the policies based upon those results. A significant weakness has been in how investigators have addressed, and communicated, such uncertainties. Studies find that major uncertainties are due to model inputs, e.g., emissions and meteorology, more so than the model itself. One of the primary weaknesses identified is in the modeling process, not the models. Evaluation has been limited, in part due to data constraints. Seldom is there ample observational data to conduct a detailed model intercomparison using consistent data (e.g., the same emissions and meteorology). Further model advancement, and development of greater confidence in the use of models, is hampered by the lack of thorough evaluation and intercomparisons. Model advances are seen in the use of new tools for extending the interpretation of model results, e.g., process and sensitivity analysis, modeling systems to facilitate their use, and extension of model capabilities, e.g., aerosol dynamics

  18. Coupling Climate Models and Forward-Looking Economic Models

    NASA Astrophysics Data System (ADS)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin Current climate models range from General Circulation Models (GCM’s) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCM’s) help us understand the dynamics of GCM’s. The same is true in economics with Computable General Equilibrium Models (CGE’s) where some models are infinite-dimensional multidimensional differential equations but some are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCM’s do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North etal. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005) but in only a “one dimensional” EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward
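The simplest member of the EBCM family mentioned above is the zero-dimensional energy balance: equilibrium temperature follows from balancing absorbed solar radiation against emitted long-wave radiation. A minimal sketch (the solar constant and Stefan-Boltzmann constant are standard values; the albedo is a typical illustrative choice):

```python
# Zero-dimensional energy-balance model: S0*(1 - albedo)/4 = sigma*T^4,
# solved for the equilibrium temperature T.
S0 = 1361.0          # solar constant, W/m^2
albedo = 0.3         # planetary albedo (illustrative)
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4

T_eq = ((S0 * (1 - albedo)) / (4 * sigma)) ** 0.25
print(T_eq)  # ~255 K, the classic no-greenhouse equilibrium temperature
```

One- and two-dimensional EBCMs add latitude (and longitude) dependence and heat transport terms to this balance, which is what makes regional damage estimates possible.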

  19. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  20. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
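The standardized interface described above can be sketched as a small class exposing separate control functions (initialize, update, finalize) and description functions (standardized metadata). The class and the toy linear-reservoir process model below are illustrative inventions loosely patterned on BMI-style designs, not the actual CSDMS or ESMF APIs:

```python
class LinearReservoir:
    """Toy process model exposing framework-callable control and
    description functions."""

    # --- model control functions ---
    def initialize(self, storage=1.0, k=0.1, dt=1.0):
        self.storage, self.k, self.dt, self.time = storage, k, dt, 0.0

    def update(self, inflow=0.0):
        outflow = self.k * self.storage          # linear outflow law
        self.storage += (inflow - outflow) * self.dt
        self.time += self.dt

    def finalize(self):
        pass  # release resources, close files, etc.

    # --- model description functions (standardized metadata) ---
    def get_output_var_names(self):
        return ("reservoir__storage",)

    def get_value(self, name):
        assert name == "reservoir__storage"
        return self.storage

    def get_time_step(self):
        return self.dt

# A framework can drive any model exposing this interface without
# knowing its internals:
m = LinearReservoir()
m.initialize(storage=10.0, k=0.5)
for _ in range(3):
    m.update(inflow=0.0)
m.finalize()
print(m.get_value("reservoir__storage"))
```

Because the caller only touches the interface, the same driver loop works for every process model in a coupled collection, which is the interoperability property the abstract emphasizes.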

  1. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
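The calibration process described at the end of this passage can be sketched with a deliberately simple stand-in for a groundwater-flow model: a trial recharge rate maps to a simulated head, and recharge is adjusted until simulated and measured heads agree. The analytic head formula and all numbers are illustrative, not from the chapter:

```python
import numpy as np

# Toy "groundwater model": simulated head rises with recharge R,
# here via a steady-state mound-height scaling R * L**2 / (8 * K).
def simulated_head(recharge, K=25.0, L=1000.0, base=100.0):
    return base + recharge * L ** 2 / (8 * K)

measured_head = 102.5   # the observed water level to match

# Calibration by grid search: the recharge giving the smallest misfit
# between simulated and measured head is the best-fit estimate.
candidates = np.linspace(0.0, 0.01, 1001)
misfit = np.abs([simulated_head(r) - measured_head for r in candidates])
best = candidates[np.argmin(misfit)]
print(best)  # model-generated recharge estimate
```

Real calibration uses many observations and formal parameter-estimation codes, but the logic is the same: the simulation with the closest agreement supplies the recharge estimate.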

  2. Models for Models: An Introduction to Polymer Models Employing Simple Analogies

    NASA Astrophysics Data System (ADS)

    Tarazona, M. Pilar; Saiz, Enrique

    1998-11-01

    An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
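The freely jointed chain mentioned above lends itself to a short Monte Carlo sketch (illustrative code, not from the paper): N bonds of fixed length b and independent random orientation give a mean-square end-to-end distance close to the theoretical N·b².

```python
import math
import random

def end_to_end_sq(n_bonds, b=1.0, rng=random.Random(0)):
    """Squared end-to-end distance of one freely jointed chain realization."""
    x = y = z = 0.0
    for _ in range(n_bonds):
        # random bond direction, uniform on the unit sphere
        zc = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - zc * zc)
        x += b * s * math.cos(phi)
        y += b * s * math.sin(phi)
        z += b * zc
    return x * x + y * y + z * z

n, trials = 100, 2000
mean_r2 = sum(end_to_end_sq(n) for _ in range(trials)) / trials
# mean_r2 comes out close to n * b**2 = 100, as theory predicts
```

Replacing the independent orientations with correlated ones is exactly the kind of successive refinement (freely rotating chain, excluded volume, and so on) the paper walks through.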

  3. Hybrid Model of IRT and Latent Class Models.

    ERIC Educational Resources Information Center

    Yamamoto, Kentaro

    This study developed a hybrid of item response theory (IRT) models and latent class models, which combined the strengths of each type of model. The primary motivation for developing the new model is to describe characteristics of examinees' knowledge at the time of the examination. Hence, the application of the model lies mainly in so-called…

  4. New 3D model for dynamics modeling

    NASA Astrophysics Data System (ADS)

    Perez, Alain

    1994-05-01

    The wrist articulation represents one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surface and along the faces of the five metacarpals of the hand and the two bones of the arm. The wrist dynamics are however fundamental for the hand movement, but it is so complex that it still remains incompletely explored. This work is a part of a new concept of computer-assisted surgery, which consists in developing computer models to perfect surgery acts by predicting their consequences. The modeling of the wrist dynamics are based first on the static model of its bones in three dimensions. This 3D model must optimise the collision detection procedure which is the necessary step to estimate the physical contact constraints. As many other possible computer vision models do not fit with enough precision to this problem, a new 3D model has been developed thanks to the median axis of the digital distance map of the bones reconstructed volume. The collision detection procedure is then simplified for contacts are detected between spheres. The experiment of this original 3D dynamic model products realistic computer animation images of solids in contact. It is now necessary to detect ligaments on digital medical images and to model them in order to complete a wrist model.

  5. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that shall guide the modeller towards a most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision such as the predictive logscore or other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim
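The tradeoff the abstract describes, goodness of fit balanced against flexibility, can be illustrated with a brute-force Monte Carlo estimate of BME as the average likelihood over prior samples. Everything below (the toy model, data, and prior widths) is made up for illustration.

```python
import math
import random

rng = random.Random(42)

def likelihood(theta, data, sigma=1.0):
    """Gaussian measurement error around a toy model y = theta."""
    return math.prod(
        math.exp(-0.5 * ((y - theta) / sigma) ** 2)
        / (sigma * math.sqrt(2.0 * math.pi))
        for y in data)

def bme(data, n_samples=20000, prior_width=10.0):
    """Monte Carlo estimate of BME = integral of p(data|theta) p(theta) dtheta,
    with a Uniform(-prior_width/2, prior_width/2) prior on theta."""
    total = sum(
        likelihood(rng.uniform(-prior_width / 2.0, prior_width / 2.0), data)
        for _ in range(n_samples))
    return total / n_samples

data = [1.9, 2.1, 2.0]
# A needlessly flexible model (wider prior) spreads its prior mass thinly and
# is automatically penalized, even though its best fit is just as good:
assert bme(data, prior_width=10.0) > bme(data, prior_width=100.0)
```

This is the Occam penalty in miniature: the wider prior dilutes the average likelihood, so the simpler (narrower) model earns the higher evidence.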

  6. Integrative structure modeling with the Integrative Modeling Platform.

    PubMed

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  7. Modeling volatility using state space models.

    PubMed

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
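In an AR(1) hidden state for log-volatility, h_t = a·h_{t-1} + noise, a shock decays as a^t, so the relaxation time is tau = -1/ln(a) time steps. The coefficients below are illustrative assumptions chosen to reproduce the time scales quoted in the abstract, not the paper's estimates.

```python
import math

def relaxation_time(a):
    """e-folding time (in time steps) of a shock in an AR(1) state h_t = a*h_{t-1} + noise."""
    return -1.0 / math.log(a)

# With daily data (hypothetical coefficients):
tau_fx = relaxation_time(0.95)     # ~19.5 days: about three weeks
tau_stock = relaxation_time(0.99)  # ~99.5 days: roughly three to five months
```

The two-orders-of-magnitude underestimate by plain autoregressive models arises because fitting the noisy observations directly drags the estimated coefficient a far below the true persistence of the hidden state.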

  8. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  9. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2° latitude by 2.5° longitude and the RM3 grid spacing is 0.44°. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) in the Eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, eliminates the ModelE double ITCZ over the Atlantic, and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  10. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input; (2) the iModel Tool, a platform for users to upload their own models to compose; and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The Model Composition Tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students and anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models are a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914

  11. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  12. Mineralogic Model (MM3.0) Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components

  13. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models, or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
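The Ensemble Kalman filter update at the heart of such a scheme can be sketched for a single scalar state, e.g. the groundwater level at one location. This is a minimal illustrative sketch with made-up numbers, not the study's implementation.

```python
import random

def enkf_update(ensemble, obs, obs_err_std, rng=random.Random(1)):
    """One EnKF analysis step for a scalar state, with perturbed observations."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    gain = var / (var + obs_err_std ** 2)                   # Kalman gain
    # Each member is nudged toward its own perturbed copy of the observation,
    # so the analysis ensemble keeps a realistic spread.
    return [x + gain * (obs + rng.gauss(0.0, obs_err_std) - x)
            for x in ensemble]

prior = [9.0, 10.0, 11.0, 10.5, 9.5]   # simulated heads (m), hypothetical
posterior = enkf_update(prior, obs=12.0, obs_err_std=0.5)
# The posterior ensemble mean moves from the prior mean toward the observation.
```

In the simplified-model setting of the abstract, updates like this are what compensate for the error introduced by replacing the 3D Richards-equation model with coupled 2D and 1D components.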

  14. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously; these changes introduce errors into flood-forecasting models and can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved, due to lack of knowledge about the future state of the catchment under study. In the flood-forecasting process, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to address these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of existing errors, timing, shape, and volume, which are the common errors in hydrological modelling. A new lumped model, the ERM model, was selected for this study, and its parameters were evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  15. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  16. TUNS/TCIS information model/process model

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    An Information Model is comprised of graphical and textual notation suitable for describing and defining the problem domain - in our case, TUNS or TCIS. The model focuses on the real world under study. It identifies what is in the problem and organizes the data into a formal structure for documentation and communication purposes. The Information Model is composed of an Entity Relationship Diagram (ERD) and a Data Dictionary component. The combination of these components provide an easy to understand methodology for expressing the entities in the problem space, the relationships between entities and the characteristics (attributes) of the entities. This approach is the first step in information system development. The Information Model identifies the complete set of data elements processed by TUNS. This representation provides a conceptual view of TUNS from the perspective of entities, data, and relationships. The Information Model reflects the business practices and real-world entities that users must deal with.

  17. Modelling human skull growth: a validated computational model

    PubMed Central

    Marghoub, Arsalan; Johnson, David; Khonsari, Roman H.; Fagan, Michael J.; Moazen, Mehran

    2017-01-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. PMID:28566514

  18. Modelling human skull growth: a validated computational model.

    PubMed

    Libby, Joseph; Marghoub, Arsalan; Johnson, David; Khonsari, Roman H; Fagan, Michael J; Moazen, Mehran

    2017-05-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions ( n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. © 2017 The Author(s).

  19. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    NASA Technical Reports Server (NTRS)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P-E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a
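The global precipitation sensitivity to temperature discussed above is, in its simplest form, a regression slope of precipitation against temperature across an ensemble of simulations. The sketch below uses made-up ensemble values purely to illustrate the diagnostic.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical ensemble of simulations at different CO2 levels:
temps = [14.0, 16.0, 18.0, 20.0]   # global-mean surface air temperature (C)
precip = [2.9, 3.0, 3.15, 3.25]    # global-mean precipitation (mm/day)

dP_dT = slope(temps, precip)       # precipitation sensitivity, mm/day per K
```

Two models can share the same global dP/dT computed this way and still distribute the extra precipitation very differently by region, which is the intercomparison's point.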

  20. A Distributed Snow Evolution Modeling System (SnowModel)

    NASA Astrophysics Data System (ADS)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  1. Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models

    NASA Astrophysics Data System (ADS)

    McEver, Jimmie; Davis, Paul K.; Bigelow, James H.

    2000-06-01

    We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.

  2. The Relationships Between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    NASA Astrophysics Data System (ADS)

    Cardoso Mendonça, Paula Cristina; Justi, Rosária

    2013-09-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities were planned from the transposition of the main modelling stages that constitute the 'Model of Modelling Diagram' so that students could experience each of these stages. All the lessons were video recorded and their transcriptions supported the elaboration of case studies for each group of students. From the analysis of the case studies, we identified argumentative situations when students performed all of the modelling stages. Our data show that the argumentative situations were related to sense making, articulating and persuasion purposes, and were closely related to the generation of explanations in the modelling processes. They also show that representations are important resources for argumentation. Our results are consistent with some of those already reported in the literature regarding the relationship between modelling and argumentation, but are also divergent when they show that argumentation is not only related to the model evaluation phase.

  3. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because models combine many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to interpret exactly why a model produces the results it does or to identify which model assumptions are key. We present a novel modelling framework—the multi-assumption architecture and testbed (MAAT)—that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.
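
    The core of a MAAT-style multi-hypothesis ensemble is combinatorial: every combination of alternative process representations defines one model structure. A minimal sketch, with hypothetical process names and alternatives (not MAAT's actual configuration interface):

```python
import itertools

# Sketch of multi-hypothesis ensemble generation: each combination of
# alternative process representations is one ensemble member to run.
# The process names and alternatives below are hypothetical examples.

process_alternatives = {
    "photosynthesis": ["farquhar", "collatz"],
    "stomatal_conductance": ["ball_berry", "medlyn", "leuning"],
    "respiration": ["q10", "arrhenius"],
}

names = list(process_alternatives)
ensemble = [dict(zip(names, combo))
            for combo in itertools.product(*process_alternatives.values())]
# 2 * 3 * 2 = 12 distinct model structures, each to be run against data
```

    Running every member against the same data then lets structural (hypothesis) uncertainty be examined separately from parametric uncertainty.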

  4. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    NASA Astrophysics Data System (ADS)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic—models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  5. A Model for Math Modeling

    ERIC Educational Resources Information Center

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  6. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882
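
    Both approaches ultimately evaluate a maximum-likelihood discrepancy between a sample covariance matrix and a model-implied one at a given level. A minimal sketch of that discrepancy function, F_ML = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p, restricted to 2×2 matrices so plain Python suffices (the matrices are illustrative, not from the cited studies):

```python
import math

# Sketch of the ML discrepancy function behind SEM fit statistics, for 2x2
# covariance matrices. In the level-specific approach this is evaluated
# separately per level, with the other level kept saturated.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def trace_prod(a, b):
    """tr(a @ b) for 2x2 matrices."""
    return sum(a[i][j] * b[j][i] for i in range(2) for j in range(2))

def f_ml(sample, implied, p=2):
    """F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p; zero at perfect fit."""
    return (math.log(det2(implied)) + trace_prod(sample, inv2(implied))
            - math.log(det2(sample)) - p)

S = [[1.0, 0.4], [0.4, 1.0]]
fit = f_ml(S, S)   # perfect fit, so the discrepancy is ~0
```

    Scaled by sample size, this discrepancy yields the chi-square test statistic; computing it per level is what makes the level-specific approach diagnostic.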

  7. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
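
    The control pattern described above, two models stepping independently and exchanging fields only at predetermined synchronization intervals, can be sketched in a few lines. This toy uses made-up scalar "physics" and field names, not ROMS/SWAN variables or the MCT API:

```python
# Toy sketch of coupled-model control: an "ocean" and a "wave" model step
# independently and refresh each other's fields every `sync` steps.
# Coefficients and field names are illustrative placeholders.

def step_ocean(state, wave_height):
    state["sst"] += 0.1 - 0.02 * wave_height   # illustrative wave-mixing cooling
    return state

def step_wave(state, sst):
    state["hs"] = 1.0 + 0.05 * sst             # illustrative SST dependence
    return state

ocean = {"sst": 20.0}
wave = {"hs": 1.0}
sync = 4                                       # synchronization interval (steps)

for n in range(12):
    if n % sync == 0:                          # exchange fields at sync points
        hs_seen, sst_seen = wave["hs"], ocean["sst"]
    ocean = step_ocean(ocean, hs_seen)
    wave = step_wave(wave, sst_seen)
```

    In the real systems each model runs on its own processor set and MCT handles the distributed-memory transfer; the sketch only shows the synchronization logic.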

  8. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
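
    The cdf-inverse-cdf construction is easy to illustrate: run standard normal-theory AR(1) dynamics, then push each latent value through the normal cdf and a target quantile function. A minimal sketch with an exponential marginal; the marginal choice and parameter values are illustrative, not the paper's:

```python
import math
import random
from statistics import NormalDist

# Sketch of a Gaussian-copula AR(1) with a non-Gaussian marginal: the
# internal dynamics stay normal-theory, and the marginal is imposed by a
# cdf-inverse-cdf transformation. Exponential marginal chosen for show.

nd = NormalDist()

def exp_quantile(u, rate=1.0):
    """Inverse cdf of the exponential target marginal."""
    return -math.log(1.0 - u) / rate

random.seed(0)
phi = 0.8                                   # AR(1) coefficient
z, series = 0.0, []
for _ in range(1000):
    # innovation variance 1 - phi^2 keeps the latent series standard normal
    z = phi * z + math.sqrt(1.0 - phi**2) * random.gauss(0.0, 1.0)
    series.append(exp_quantile(nd.cdf(z)))  # marginal is now exponential
```

    The resulting series keeps the AR(1) dependence structure while its marginal distribution is exponential, which is the separation of "internal dynamics" from "marginal distribution" the abstract describes.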

  9. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist at the time. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models.

  10. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
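
    Information-criterion model averaging of the kind discussed above reduces to computing a weight for each model from its criterion value and combining predictions with those weights. A minimal sketch using AIC-style weights; the criterion values and head predictions are made-up illustrations, not DVRFS results:

```python
import math

# Sketch of information-criterion model averaging: each plausible model
# gets a weight from its criterion value, and predictions are combined
# with those weights. All numbers below are hypothetical.

def ic_weights(ics):
    """Weights w_i proportional to exp(-0.5 * (IC_i - IC_min))."""
    d = [c - min(ics) for c in ics]
    w = [math.exp(-0.5 * di) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

ics = [100.0, 102.0, 110.0]              # hypothetical criterion values
heads = [731.2, 730.5, 728.9]            # hypothetical hydraulic-head predictions
w = ic_weights(ics)
avg_head = sum(wi * hi for wi, hi in zip(w, heads))
```

    The spread of the individual predictions around the weighted average is one way the between-model (conceptual) contribution to predictive uncertainty becomes visible.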

  11. Climate Models

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  12. Predicting category intuitiveness with the rational model, the simplicity model, and the generalized context model.

    PubMed

    Pothos, Emmanuel M; Bailey, Todd M

    2009-07-01

    Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM; the authors refer to this way of applying the GCM as "unsupervised GCM." The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
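
    The leave-one-out scoring idea is compact enough to sketch directly: for a candidate category assignment, predict each stimulus from all the others with the GCM similarity-choice rule and sum the prediction errors. One-dimensional stimuli and the sensitivity parameter c below are illustrative simplifications, not the article's materials:

```python
import math

# Sketch of the "unsupervised GCM": score a candidate category assignment
# by how well each item is predicted from all the others under the GCM
# similarity-choice rule. Lower total error = more intuitive assignment.

def gcm_prob(item, others, labels, target, c=2.0):
    """GCM probability of assigning `item` to category `target`."""
    sims = {}
    for x, lab in zip(others, labels):
        sims[lab] = sims.get(lab, 0.0) + math.exp(-c * abs(item - x))
    return sims.get(target, 0.0) / sum(sims.values())

def assignment_error(stimuli, assignment):
    """Total leave-one-out prediction error for an assignment."""
    err = 0.0
    for i, (x, lab) in enumerate(zip(stimuli, assignment)):
        rest = stimuli[:i] + stimuli[i + 1:]
        rest_lab = assignment[:i] + assignment[i + 1:]
        err += 1.0 - gcm_prob(x, rest, rest_lab, lab)
    return err

stimuli = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
intuitive = ["A"] * 3 + ["B"] * 3   # two tight clusters
odd = ["A", "B"] * 3                # interleaved grouping
```

    On these stimuli the clustered assignment scores a lower error than the interleaved one, matching the intuition the measure is meant to capture.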

  13. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model called a decision-path model that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. Learning decision-path models is a new, personalized approach to predictive modeling that can outperform the population approach. PMID:26098570

  14. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
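
    The IEM model that the new mixing frequency feeds into has a very simple form: each notional particle's scalar relaxes toward the ensemble mean at a rate set by the mixing frequency ω. A minimal sketch, with ω held constant (in the study above it would be supplied dynamically from Lagrangian scalar dissipation); particle values and constants are illustrative:

```python
# Sketch of the IEM (interaction by exchange with the mean) mixing model:
# d(phi_i)/dt = -0.5 * C_phi * omega * (phi_i - <phi>), advanced here with
# simple explicit Euler steps. omega is a constant stand-in for the
# dynamically modeled mixing frequency.

def iem_step(phis, omega, dt, c_phi=2.0):
    mean = sum(phis) / len(phis)
    return [p - 0.5 * c_phi * omega * (p - mean) * dt for p in phis]

particles = [0.0, 0.2, 0.8, 1.0]   # illustrative mixture-fraction samples
for _ in range(50):
    particles = iem_step(particles, omega=5.0, dt=0.01)
# the ensemble mean is conserved while the scatter decays toward it
```

    The model's known weakness, that all particles relax toward the same mean, is why the choice of ω matters so much and why a dynamic, dissipation-based ω is attractive.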

  15. Modelling MIZ dynamics in a global model

    NASA Astrophysics Data System (ADS)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However, with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting both sea ice and the ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  16. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
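
    The componentwise boosting idea behind this variable-selection scheme can be sketched with toy base learners: at each iteration, fit every candidate covariate to the current residuals and update only the best-fitting one. Simple linear fits stand in here for the penalized-spline components of the geoadditive model, and all data are synthetic:

```python
import random

# Toy sketch of componentwise boosting for variable selection: only the
# covariate that best fits the current residuals is updated each round,
# so irrelevant covariates tend to keep zero coefficients.

def fit_slope(x, r):
    """Least-squares slope of residuals r on covariate x (no intercept)."""
    return sum(xi * ri for xi, ri in zip(x, r)) / sum(xi * xi for xi in x)

random.seed(1)
n = 200
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]
y = [3.0 * a + random.gauss(0, 0.1) for a in x1]   # only x1 truly matters

covs = {"x1": x1, "x2": x2}
coef = {"x1": 0.0, "x2": 0.0}
nu = 0.1                                           # boosting step length
for _ in range(100):
    resid = [yi - coef["x1"] * a - coef["x2"] * b
             for yi, a, b in zip(y, x1, x2)]
    # pick the covariate with the strongest fit to the residuals
    best = max(covs, key=lambda k: abs(fit_slope(covs[k], resid)))
    coef[best] += nu * fit_slope(covs[best], resid)
```

    The small step length nu is what makes early stopping act as regularization: covariates that never win a round are effectively dropped from the model.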

  17. Modeling arson - An exercise in qualitative model building

    NASA Technical Reports Server (NTRS)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  18. Air Quality Dispersion Modeling - Alternative Models

    EPA Pesticide Factsheets

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  19. Model Organisms and Traditional Chinese Medicine Syndrome Models

    PubMed Central

    Xu, Jin-Wen

    2013-01-01

    Traditional Chinese medicine (TCM) is an ancient medical system with a unique cultural background. Nowadays, more and more Western countries are accepting it because of its therapeutic efficacy. However, the safety and precise pharmacological mechanisms of action of TCM remain uncertain. Due to the potential application of TCM in healthcare, it is necessary to construct a scientific evaluation system with TCM characteristics and benchmark the difference from the standard of Western medicine. Model organisms have played an important role in the understanding of basic biological processes; they are easier to study in certain research contexts, and the information gained can be extended to other species. Despite the controversy over suitable syndrome animal models under TCM theoretical guidance, it is unquestionable that many model organisms should be used in the studies of TCM modernization, which will bring modern scientific standards into this ancient medical system. In this review, we aim to summarize the utilization of model organisms in the construction of TCM syndrome models and highlight the relevance of modern medicine to TCM syndrome animal models. It will serve as the foundation for further research of model organisms and for its application in TCM syndrome models. PMID:24381636

  20. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  1. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
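
    The gravity model that the corrected IVF model reduces to distributes each zone's trip productions across destinations in proportion to their attractions, discounted by a travel-time friction factor. A minimal sketch with made-up zone data and an illustrative exponential friction function:

```python
import math

# Sketch of gravity-model trip distribution: T_ij proportional to
# P_i * A_j * f(t_ij), with f an exponential friction factor. All zone
# numbers and the beta parameter are illustrative.

def gravity_trips(prod, attr, time, beta=0.1):
    trips = {}
    for i, p in prod.items():
        f = {j: attr[j] * math.exp(-beta * time[i][j]) for j in attr}
        total = sum(f.values())
        for j in attr:
            trips[(i, j)] = p * f[j] / total   # row-normalized to productions
    return trips

productions = {"A": 1000, "B": 500}            # trips produced per zone
attractions = {"A": 800, "B": 700}             # attractiveness per zone
travel_time = {"A": {"A": 5, "B": 20}, "B": {"A": 20, "B": 5}}
t = gravity_trips(productions, attractions, travel_time)
```

    Because each row is normalized, every zone's distributed trips sum exactly to its productions, one of the internal-consistency properties a shortcut model like the original IVF can fail to guarantee.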

  2. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, and this has led to various studies regarding energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in view of modeling and predicting behavior in time series. One considers embedded patterns such as seasonality, periodicity and trends, and the other is an autoregressive model that uses short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the abovementioned models is that predictability of water demand is limited at about sub-daily scale because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the Bagging approach introduced by Breiman (1996), used to derive weighting factors corresponding to the individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS), and SVM (support vector machine). The concepts are demonstrated through application to data observed from water plants at several locations in South Korea. Keywords: water demand, non-linear model, the ensemble forecasting model, uncertainty. Acknowledgements This subject is supported by Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)
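
    The combination step of such an ensemble is simple once member weights are available. As a stand-in for the Bagging-derived weighting, the sketch below weights each member inversely to its cross-validation error and takes the weighted mean; the error values and forecasts are made up:

```python
# Sketch of weighted multi-model combination: members with lower
# cross-validation error get larger weights (a simple stand-in for the
# Bagging-derived weights described above). All numbers are hypothetical.

def inverse_error_weights(cv_errors):
    inv = [1.0 / e for e in cv_errors]
    s = sum(inv)
    return [v / s for v in inv]

cv_errors = [2.0, 4.0, 8.0]     # e.g., CV RMSE of regression, MARS, SVM
forecasts = [105.0, 110.0, 130.0]   # members' hourly demand forecasts
w = inverse_error_weights(cv_errors)
ensemble = sum(wi * fi for wi, fi in zip(w, forecasts))
```

    The spread of the member forecasts around the combined value is also a natural, if crude, indicator of the across-model uncertainty the study aims to quantify.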

  3. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  4. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical
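    The demand relation quoted above is simple arithmetic; a back-of-envelope sketch (all numbers hypothetical) shows how surrogate modeling shifts, rather than removes, the cost of the 50-100 original-model runs:

```python
def computational_demand(runtime_hours, n_runs, parallel_ways):
    # Demand as defined in the abstract: run time x runs / parallelization.
    return runtime_hours * n_runs / parallel_ways

# Hypothetical comparison: frugal analysis vs. surrogate-based analysis.
# Both spend roughly 50-100 runs of the demanding original model.
frugal = computational_demand(runtime_hours=6, n_runs=100, parallel_ways=10)
surrogate = (computational_demand(6, 100, 10)            # runs to train the surrogate
             + computational_demand(0.001, 100_000, 10)) # cheap surrogate runs
print(frugal, surrogate)  # 60.0 vs 70.0 wall-clock hours
```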

  5. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    DTIC Science & Technology

    2015-11-04

    Coastal Inlets Research Program Particle Tracking Model (PTM) with Coastal Modeling System (CMS) The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  6. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect hydrological conditions, and analyses will consequently give misleading results if these structures are not properly taken into account. In this study we modeled the spatial dependence structure between climate variables, including maximum and minimum temperature and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Multivariate mixture normal distributions were used as marginals under the copula models for temperature, and gamma distributions for precipitation. The C-Vine, D-Vine and Student t-copula models were compared on observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structures between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. Important statistics of the observed data were closely approximated, except for the maximum values of temperature and the minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan
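    A minimal sketch of the t-copula construction with gamma marginals, as described for precipitation. The degrees of freedom, correlation, and marginal parameters are hypothetical, and the vine copulas are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu, rho, n = 5, 0.6, 5000                  # hypothetical dof, correlation, sample size
corr = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from a bivariate Student t (a normal scale mixture with a chi-square).
z = rng.multivariate_normal([0.0, 0.0], corr, size=n)
w = rng.chisquare(nu, size=n) / nu
tvars = z / np.sqrt(w)[:, None]

# 2. Push through the t CDF to obtain uniforms carrying t-copula dependence.
u = stats.t.cdf(tvars, df=nu)

# 3. Apply gamma marginals (as used for precipitation in the study).
site1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=5.0)
site2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=5.0)

# Rank (Spearman) dependence survives the monotone marginal transforms.
sp, _ = stats.spearmanr(site1, site2)
print(round(sp, 2))
```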

  7. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10,000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude more computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constrained model parameters controlling demographic
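    The emulator idea can be sketched with a toy one-parameter "expensive" model: fit a Gaussian process to a few log-posterior evaluations, then run MCMC on the cheap emulator. This is a schematic sketch, not PEcAn's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "expensive" model: log-posterior of one parameter (hypothetical).
def log_post(theta):
    return -0.5 * ((theta - 1.2) / 0.3) ** 2

# 1. Evaluate the expensive model at a small design (the costly step).
design = np.linspace(-1.0, 3.0, 12)
lp = log_post(design)

# 2. Fit a squared-exponential GP emulator to (design, lp).
def kern(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = kern(design, design) + 1e-8 * np.eye(design.size)
alpha = np.linalg.solve(K, lp)

def emulate(theta):
    # GP posterior mean at theta (zero prior mean, noise-free observations).
    return (kern(np.atleast_1d(float(theta)), design) @ alpha)[0]

# 3. Metropolis MCMC on the cheap emulator instead of the model itself.
chain = np.empty(20000)
theta = 0.0
for i in range(chain.size):
    prop = theta + rng.normal(0.0, 0.5)
    # Trust the emulator only inside the design range.
    if design[0] <= prop <= design[-1]:
        if np.log(rng.uniform()) < emulate(prop) - emulate(theta):
            theta = prop
    chain[i] = theta

post = chain[5000:]
print(post.mean().round(2), post.std().round(2))  # posterior mean ~1.2, sd ~0.3
```

    Every MCMC step costs a matrix-vector product instead of a full model run, which is the source of the two-orders-of-magnitude speedup the abstract reports for SIPNET.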

  8. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials was proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters for 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's ability to characterize these materials.
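    The abstract does not give the refined six-parameter form, so the sketch below shows a generic Torrance-Sparrow-style microfacet BRDF instead (diffuse term plus Beckmann distribution and Schlick Fresnel, with the geometry term omitted); all parameter values are hypothetical:

```python
import numpy as np

def beckmann_D(cos_h, m):
    # Beckmann microfacet distribution; m is the RMS surface slope (roughness).
    cos_h = np.clip(cos_h, 1e-6, 1.0)
    tan2 = (1.0 - cos_h ** 2) / cos_h ** 2
    return np.exp(-tan2 / m ** 2) / (np.pi * m ** 2 * cos_h ** 4)

def schlick_F(cos_d, f0):
    # Schlick approximation to the Fresnel reflectance.
    return f0 + (1.0 - f0) * (1.0 - cos_d) ** 5

def brdf(cos_i, cos_o, cos_h, cos_d, kd, ks, m, f0):
    # Diffuse term plus specular microfacet term (shadowing term G omitted).
    spec = beckmann_D(cos_h, m) * schlick_F(cos_d, f0) / (4.0 * cos_i * cos_o)
    return kd / np.pi + ks * spec

# Hypothetical sample: normal incidence and viewing, half vector = surface normal.
val = brdf(cos_i=1.0, cos_o=1.0, cos_h=1.0, cos_d=1.0,
           kd=0.3, ks=0.5, m=0.3, f0=0.9)
print(round(val, 3))
```

    A genetic algorithm, as used in the paper, would search over (kd, ks, m, f0, ...) to minimize the misfit between this function and the measured BRDF samples.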

  9. Involvement of Fathers in Pediatric Obesity Treatment and Prevention Trials: A Systematic Review.

    PubMed

    Morgan, Philip J; Young, Myles D; Lloyd, Adam B; Wang, Monica L; Eather, Narelle; Miller, Andrew; Murtagh, Elaine M; Barnes, Alyce T; Pagoto, Sherry L

    2017-02-01

    Despite their important influence on child health, it is assumed that fathers are less likely than mothers to participate in pediatric obesity treatment and prevention research. This review investigated the involvement of fathers in obesity treatment and prevention programs targeting children and adolescents (0-18 years). A systematic review of English, peer-reviewed articles across 7 databases. Retrieved records included at least 1 search term from 2 groups: "participants" (eg, child*, parent*) and "outcomes": (eg, obes*, diet*). Randomized controlled trials (RCTs) assessing behavioral interventions to prevent or treat obesity in pediatric samples were eligible. Parents must have "actively participated" in the study. Two authors independently extracted data using a predefined template. The search retrieved 213 eligible RCTs. Of the RCTs that limited participation to 1 parent only (n = 80), fathers represented only 6% of parents. In RCTs in which participation was open to both parents (n = 133), 92% did not report objective data on father involvement. No study characteristics moderated the level of father involvement, with fathers underrepresented across all study types. Only 4 studies (2%) suggested that a lack of fathers was a possible limitation. Two studies (1%) reported explicit attempts to increase father involvement. The review was limited to RCTs published in English peer-reviewed journals over a 10-year period. Existing pediatric obesity treatment or prevention programs with parent involvement have not engaged fathers. Innovative strategies are needed to make participation more accessible and engaging for fathers. Copyright © 2017 by the American Academy of Pediatrics.

  10. Molecular mechanisms of hydrogen loaded B-hydroquinone clathrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daschbach, John L.; Chang, Tsun-Mei; Corrales, Louis R.

    2006-09-07

    Molecular dynamics simulations are used to investigate the molecular interactions of hydrogen loaded beta-hydroquinone clathrate. It is found that at lower temperatures, higher loadings are more stable, whereas, at higher temperatures, lower loadings are more stable. This trend can be understood based on the interactions in the system. For loadings greater than one, the repulsive forces between the guest molecules shove each other towards the attractive forces between the guest and host molecules leading to a stabilized minimum energy configuration at low temperatures. At higher temperatures greater displacements take the system away from the shallow energy minimum and the trend reverses. The asymmetries of the clathrate cage structure are due to the presence of the attractive forces at loadings greater than one that lead to confined states. The nature of the cavity structure is nearly spherical for a loading of one, leads to preferential occupation near the hydroxyl ring crowns of the cavity with a loading of two, and at higher loadings, leads to occupation of the interstitial sites (the hydroxyl rings) between cages by a single H2 molecule with the remaining molecules occupying the equatorial plane of the cavity. At higher temperatures, the cavity is more uniformly occupied for all loadings, where the occupation of the interstitial positions of the cavities leads to facile diffusion. ACKNOWLEDGEMENT This work was partially supported by NIDO (Japan), LDRD (PNNL), EERE U.S. Department of Energy, and by OBES, U.S. DOE. The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  11. EIA model documentation: Petroleum market model of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  12. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this is a question that, undoubtedly, many people, businesses, and institutions ponder with regard to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  13. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such gravity waves to be resolved explicitly. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from less than 2,000 to more than 2,500,000 grid points with 500 to 1,000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  14. AIR QUALITY MODELING OF AMMONIA: A REGIONAL MODELING PERSPECTIVE

    EPA Science Inventory

    The talk will address the status of modeling of ammonia from a regional modeling perspective, yet the observations and comments should have general applicability. The air quality modeling system components that are central to modeling ammonia will be noted and a perspective on ...

  15. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    NASA Astrophysics Data System (ADS)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting reference evapotranspiration (ET0) process is highly essential for several agricultural and hydrological applications, especially for the rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small number of climatic data generated from the minimum and maximum temperatures of the air and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time scale data set was collected at the Antalya and Isparta stations which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The accuracy of validation was focused on multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R2), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the performance of the GEP and GRNN models was better than the other investigated models, while the performance of the RBNN and ANFIS-SC models was best compared to the other models at the Isparta station. Except for the MLP model, all the other investigated models presented a better performance accuracy compared to the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
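    The temperature-based baseline referenced above, the Hargreaves-Samani equation, and two of the quoted metrics can be sketched as follows (station inputs are hypothetical):

```python
import numpy as np

def hargreaves_samani(tmax, tmin, ra):
    # HS reference ET (mm/day); ra is extraterrestrial radiation in mm/day.
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly inputs for a Mediterranean-type station.
tmax = np.array([12.0, 15.0, 19.0, 24.0, 29.0, 34.0])
tmin = np.array([ 3.0,  5.0,  8.0, 12.0, 16.0, 21.0])
ra   = np.array([ 6.0,  8.0, 11.0, 14.0, 16.0, 17.0])   # mm/day equivalent
et0 = hargreaves_samani(tmax, tmin, ra)
print(et0.round(2))
```

    The "calibrated" CHS variant mentioned in the abstract would replace the 0.0023 coefficient (and possibly the exponent) with locally fitted values.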

  16. The medical model versus the just deserts model.

    PubMed

    Wolfgang, M E

    1988-01-01

    This paper traces the history of two models that have been influential in shaping modern views toward criminals. One of these two--the medical model--is based on the concept of rehabilitation, that is, treatment predicated on the attributes of the offender. The second of these two--the just deserts model--centers on retribution, that is, punishment deserved for the seriousness of the crime. Each model has been dominant in various periods of history.

  17. DRI Model of the U.S. Economy -- Model Documentation

    EIA Publications

    1993-01-01

    Provides documentation on Data Resources, Inc., DRI Model of the U.S. Economy and the DRI Personal Computer Input/Output Model. It also describes the theoretical basis, structure and functions of both DRI models; and contains brief descriptions of the models and their equations.

  18. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced better fitting than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance, although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
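    The BIC-based weighting and the within/between-model variance split can be sketched as follows (all numbers hypothetical; this is the generic BMA identity, not the BAIMA code):

```python
import numpy as np

def bic_weights(bic):
    # Posterior model weights from BIC (smaller BIC -> larger weight).
    d = bic - bic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical BIC scores and per-model estimates at one location.
bic = np.array([210.0, 212.5, 219.0])      # e.g. TS-FL, ANN, NF
est = np.array([3.1, 2.2, 2.6])            # hydraulic conductivity estimates
var = np.array([0.20, 0.25, 0.30])         # within-model variances

w = bic_weights(bic)
mean = np.sum(w * est)
# Total variance = within-model part + between-model part.
total_var = np.sum(w * var) + np.sum(w * (est - mean) ** 2)
print(w.round(3), round(mean, 2), round(total_var, 2))
```

    The between-model term is exactly the contribution that disappears when only a single AI model is used, which is why single-model analyses understate uncertainty.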

  19. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but users must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, users can add "Tags" to their threads to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  20. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
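    One common way to define an applicability domain, a nearest-neighbor distance cutoff derived from the training set, can be sketched as follows (synthetic descriptors and the threshold rule are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

def nn_dist(X, Y):
    # Distance from each row of Y to its nearest row in X.
    d = np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=2)
    return d.min(axis=1)

train = rng.normal(0.0, 1.0, (200, 4))     # training-set descriptors (synthetic)

# Threshold from leave-one-out nearest-neighbor distances in the training set.
d_self = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=2)
np.fill_diagonal(d_self, np.inf)
loo = d_self.min(axis=1)
cutoff = loo.mean() + 3.0 * loo.std()

inside = nn_dist(train, rng.normal(0.0, 1.0, (50, 4)))   # similar chemistry
outside = nn_dist(train, rng.normal(6.0, 1.0, (50, 4)))  # far from training space
print((inside <= cutoff).mean(), (outside <= cutoff).mean())
```

    Screened compounds beyond the cutoff are flagged as outside the domain, so their predictions are withheld rather than trusted.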

  1. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm s-1 and 8 mm s-1, respectively, for the modeled velocity.

  2. Modelling Students' Construction of Energy Models in Physics.

    ERIC Educational Resources Information Center

    Devi, Roshni; And Others

    1996-01-01

    Examines students' construction of experimentation models for physics theories in energy storage, transformation, and transfers involving electricity and mechanics. Student problem solving dialogs and artificial intelligence modeling of these processes are analyzed. Construction of models established relations between elements with linear causal…

  3. An empirical model to forecast solar wind velocity through statistical modeling

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Ridley, A. J.

    2013-12-01

    The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either historical observations, e.g., the persistence model, or instantaneous observations of the sun, e.g., the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
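    The "general persistence model" idea, a least-squares combination of the null, persistence, and one-rotation-ago predictors, can be sketched on synthetic data (the series, lead time, and lag are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
# Synthetic solar wind speed (km/s): slow oscillation plus noise (hypothetical).
v = 400.0 + 50.0 * np.sin(2 * np.pi * np.arange(n) / 650) + rng.normal(0, 20, n)

lead, rot = 27, 600          # hypothetical forecast lead and one-rotation lag
t = np.arange(rot, n)
target  = v[t]
persist = v[t - lead]                     # persistence model
one_rot = v[t - rot]                      # one-solar-rotation-ago model
null    = np.full(t.size, v.mean())       # null (climatological mean) model

# Least-squares combination: target ~ a*null + b*persist + c*one_rot.
A = np.column_stack([null, persist, one_rot])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
combo = A @ coef

def rmse(pred):
    return np.sqrt(np.mean((pred - target) ** 2))

print({name: round(rmse(p), 1)
       for name, p in [("null", null), ("persist", persist),
                       ("rot", one_rot), ("combo", combo)]})
```

    Because each component predictor is itself a special case of the linear combination, the in-sample RMSE of the combination can never exceed that of any single component.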

  4. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
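    The fit-versus-complexity trade-off can be illustrated with two of the most common criteria: AIC targets predictive performance, while BIC's heavier, sample-size-dependent penalty makes it consistent (it tends to recover the true model as the sample grows). A sketch on synthetic data:

```python
import numpy as np

def aic(rss, n, k):
    # Akaike criterion (Gaussian errors): goodness of fit + 2 per parameter.
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    # Bayesian criterion: a heavier, sample-size-dependent complexity penalty.
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 200)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)     # the true model is degree 1

results = {}
for deg in (1, 2, 3, 6, 9):                        # candidate polynomial degrees
    rss = np.sum((y - np.polyval(np.polyfit(x, y, deg), x)) ** 2)
    results[deg] = (aic(rss, x.size, deg + 1), bic(rss, x.size, deg + 1))

best_bic = min(results, key=lambda d: results[d][1])
print(best_bic, {d: (round(a, 1), round(b, 1)) for d, (a, b) in results.items()})
```

    Higher-degree candidates always lower the residual sum of squares, but the penalty terms make the criteria prefer the parsimonious model; which penalty is "right" depends on the modeling goal, exactly the classification question the paper addresses.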

  5. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Spin-Up and Tuning of the Global Carbon Cycle Model Inside the GISS ModelE2 GCM

    NASA Technical Reports Server (NTRS)

    Aleinov, Igor; Kiang, Nancy Y.; Romanou, Anastasia

    2015-01-01

    The planetary carbon cycle involves multiple phenomena acting at a variety of temporal and spatial scales. Typical times range from minutes for leaf stomatal physiology to centuries for passive soil carbon pools and deep ocean layers, so finding a satisfactory equilibrium state becomes a challenging and computationally expensive task. Here we present the spin-up processes for different configurations of the GISS Carbon Cycle model, from the model forced with MODIS-observed Leaf Area Index (LAI) and a prescribed ocean, to prognostic LAI, to the model fully coupled to the dynamic ocean and ocean biology. We investigate the time it takes the model to reach equilibrium and discuss ways to speed up this process. The NASA Goddard Institute for Space Studies General Circulation Model (GISS ModelE2) is currently equipped with all major algorithms necessary for simulating the global carbon cycle. The terrestrial part is represented by the Ent Terrestrial Biosphere Model (Ent TBM), which includes leaf biophysics, prognostic phenology, and a soil biogeochemistry module (based on the Carnegie-Ames-Stanford model). The ocean part is based on the NASA Ocean Biogeochemistry Model (NOBM). The transport of atmospheric CO2 is performed by the atmospheric part of ModelE2, which employs a quadratic upstream algorithm for this purpose.

  7. Culturicon model: A new model for cultural-based emoticon

    NASA Astrophysics Data System (ADS)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing emotions, gestures, and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention, and improve communication among speakers of different native languages. However, despite these benefits, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be an extensively researched aspect of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model, and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  8. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
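    The moment-comparison idea behind the reduced sampling schemes can be sketched in one dimension. The response function and sample sizes below are hypothetical stand-ins, not the Brake-Reuss beam model; the point is that a stratified (Latin hypercube) design estimates moments accurately with few samples.

```python
import numpy as np

rng = np.random.default_rng(5)

def response(p):
    # Hypothetical scalar output of a reduced order model at parameter p.
    return np.sin(p) + 0.1 * p ** 2

n = 200
# Plain Monte Carlo over p ~ U(0, 1).
mc_mean = response(rng.random(n)).mean()

# Latin hypercube sampling in 1-D: exactly one draw per stratum of [0, 1].
strata = (np.arange(n) + rng.random(n)) / n
lhs_mean = response(strata).mean()

# Analytic mean of the response over U(0, 1), for comparison.
exact = (1 - np.cos(1.0)) + 0.1 / 3
```

The stratification removes between-stratum sampling variance, so the LHS estimate of the first moment is typically far tighter than plain Monte Carlo at the same sample size.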

  9. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that model combinations improve the performance in key statistics, such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, compared to single models, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
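    A microcanonical multiplicative random cascade of the kind used for disaggregation can be sketched as follows. The Beta-distributed branching weights and the number of cascade levels are illustrative assumptions; "microcanonical" means mass is conserved exactly at every split.

```python
import numpy as np

rng = np.random.default_rng(42)

def mrc_disaggregate(total_depth, levels=7, a=0.7):
    """Microcanonical multiplicative random cascade: split each
    interval's depth into two halves with random weights (w, 1 - w)
    drawn from a symmetric Beta(a, a), so the parent depth is
    conserved exactly at every cascade level."""
    depths = np.array([total_depth], dtype=float)
    for _ in range(levels):
        w = rng.beta(a, a, size=depths.size)
        # Interleave the two children of each interval, preserving order.
        depths = np.column_stack([depths * w, depths * (1 - w)]).ravel()
    return depths

# Disaggregate a 20 mm daily total into 2**7 = 128 sub-intervals
# (roughly 11-minute resolution for a 24 h day).
fine = mrc_disaggregate(20.0)
```

Small values of the Beta parameter `a` push weights toward 0 and 1, producing the intermittency (many near-dry sub-intervals) typical of fine-scale precipitation.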

  10. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  11. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    PubMed

    Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
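    The validation metrics can be sketched as follows. The serum levels below are hypothetical, and the percentage-error convention (relative to the observed value) is an assumption, as conventions vary between studies.

```python
import numpy as np

def mdpe_mdape(predicted, observed):
    """Median prediction error (bias) and median absolute prediction
    error (precision), both as percentages. Errors are taken relative
    to the observed level (one common convention; others exist)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    pe = 100.0 * (predicted - observed) / observed
    return np.median(pe), np.median(np.abs(pe))

# Hypothetical gentamicin serum levels (mg/L): predictions vs. measurements.
pred = np.array([8.1, 7.6, 1.9, 9.0, 2.2])
obs = np.array([8.0, 8.0, 2.0, 9.2, 2.1])
mdpe, mdape = mdpe_mdape(pred, obs)
```

Medians rather than means are used so that a single outlying level does not dominate the bias and precision estimates.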

  13. Agent-based modeling: case study in cleavage furrow models

    PubMed Central

    Mogilner, Alex; Manhart, Angelika

    2016-01-01

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328

  14. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
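    The contrast between the two approximation styles can be sketched in one dimension. The test function is illustrative, and the Gaussian-kernel interpolator below is a simple stand-in for kriging (fixed length scale, no nugget, no variance estimate), not the study's actual implementation.

```python
import numpy as np

# A 1-D response with multiple local extrema (hard for a quadratic).
def f(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

x_train = np.linspace(0, 2, 15)
y_train = f(x_train)
x_test = np.linspace(0.05, 1.95, 50)

# 1) Quadratic polynomial fitted by least squares.
poly = np.polyfit(x_train, y_train, 2)
poly_pred = np.polyval(poly, x_test)

# 2) Gaussian-kernel interpolator: exact at the training points,
#    flexible between them (kriging-like, hypothetical length scale).
ell = 0.2
K = np.exp(-((x_train[:, None] - x_train[None, :]) / ell) ** 2)
weights = np.linalg.solve(K, y_train)
k_star = np.exp(-((x_test[:, None] - x_train[None, :]) / ell) ** 2)
krig_pred = k_star @ weights

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

poly_err = rmse(poly_pred, f(x_test))
krig_err = rmse(krig_pred, f(x_test))
```

The quadratic cannot follow the oscillations at all, while the interpolator reproduces the training data exactly and tracks the response between points, at the cost of building and solving a dense kernel system.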

  15. Groundwater modelling in conceptual hydrological models - introducing space

    NASA Astrophysics Data System (ADS)

    Boje, Søren; Skaugen, Thomas; Møen, Knut; Myrabø, Steinar

    2017-04-01

    The tiny Sæternbekken Minifelt (Muren) catchment (7500 m2) in Bærumsmarka, Norway, was, during the 1990s, densely instrumented with more than 100 observation points for measuring groundwater levels. The aim was to investigate the link between shallow groundwater dynamics and runoff. The DDD (Distance Distribution Dynamics) model is a newly developed rainfall-runoff model used operationally by the Norwegian Flood-Forecasting service at NVE. The model estimates the capacity of the subsurface reservoir at different levels of saturation and predicts overland flow. The subsurface in the DDD model has a 2-D representation that calculates the saturated and unsaturated soil moisture along a hillslope representing the entire catchment in question. The groundwater observations from more than two decades ago are used to verify assumptions about the subsurface reservoir in the DDD model and to validate its spatial representation of the subsurface reservoir. The Muren catchment will, during 2017, be re-instrumented in order to continue the work of bridging the gap between conceptual hydrological models, with typically single-value or 0-dimensional representations of the subsurface, and models with more realistic 2- or 3-dimensional representations of the subsurface.

  16. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

    Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  17. BioModels: expanding horizons to include more modelling approaches and formats

    PubMed Central

    Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi

    2018-01-01

    Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614

  18. Is the Voter Model a Model for Voters?

    NASA Astrophysics Data System (ADS)

    Fernández-Gracia, Juan; Suchecki, Krzysztof; Ramasco, José J.; San Miguel, Maxi; Eguíluz, Víctor M.

    2014-04-01

    The voter model has been studied extensively as a paradigmatic opinion dynamics model. However, its ability to model real opinion dynamics has not been addressed. We introduce a noisy voter model (accounting for social influence) with recurrent mobility of agents (as a proxy for social context), where the spatial and population diversity are taken as inputs to the model. We show that the dynamics can be described as a noisy diffusive process that contains the proper anisotropic coupling topology given by population and mobility heterogeneity. The model captures statistical features of U.S. presidential elections as the stationary vote-share fluctuations across counties and the long-range spatial correlations that decay logarithmically with the distance. Furthermore, it recovers the behavior of these properties when the geographical space is coarse grained at different scales—from the county level through congressional districts, and up to states. Finally, we analyze the role of the mobility range and the randomness in decision making, which are consistent with the empirical observations.
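    A minimal noisy voter model on a complete graph can be sketched as follows. The population size, noise rate, and step count are illustrative, and this sketch omits the paper's recurrent mobility and spatial structure; the noise keeps the dynamics fluctuating instead of freezing at consensus.

```python
import numpy as np

rng = np.random.default_rng(11)

# Noisy voter model on a complete graph (hypothetical parameters):
# each step, a random agent either flips at random (noise) or copies
# the opinion of another randomly chosen agent (social imitation).
n, noise, steps = 500, 0.01, 20000
opinions = rng.integers(0, 2, n)  # binary opinions, 0 or 1

shares = []
for _ in range(steps):
    i = rng.integers(n)
    if rng.random() < noise:
        opinions[i] = rng.integers(0, 2)         # idiosyncratic switch
    else:
        opinions[i] = opinions[rng.integers(n)]  # imitate a random agent
    shares.append(opinions.mean())

shares = np.array(shares)  # trajectory of the vote share for opinion 1
```

With noise present there is no absorbing consensus state, so the vote share wanders stochastically around its stationary distribution, which is the behaviour compared against vote-share fluctuation data.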

  19. Sequence modelling and an extensible data model for genomic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Peter Wei-Der

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  1. Log-Multiplicative Association Models as Item Response Models

    ERIC Educational Resources Information Center

    Anderson, Carolyn J.; Yu, Hsiu-Ting

    2007-01-01

    Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…

  2. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  3. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  4. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be solved satisfactorily. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between the input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the

  5. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
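    The finite-mixture idea can be sketched with a two-branch tree. The detection probability and the Gaussian response-time components below are hypothetical parameters, not those of the feature comparison model: the discrete tree structure fixes the mixture weights, and each latent state contributes its own continuous distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-branch processing tree (hypothetical parameters): with probability
# d the "detection" state is reached (fast RTs); otherwise the "guessing"
# state is reached (slower, more variable RTs).
d = 0.7
n = 10000
detected = rng.random(n) < d

# Response times drawn from the state-specific Gaussian components.
rt = np.where(detected,
              rng.normal(0.5, 0.05, n),   # detection-state RT component
              rng.normal(0.9, 0.10, n))   # guessing-state RT component

# The mixture weight is recovered from the discrete tree structure.
share_fast = detected.mean()
```

The observed RT distribution is the weighted mixture of the two components, so fitting a GPT model amounts to jointly estimating the tree probabilities and the component parameters from the same data.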

  6. Models of ovarian cancer metastasis: Murine models

    PubMed Central

    Šale, Sanja; Orsulic, Sandra

    2008-01-01

    Mice have mainly been used in ovarian cancer research as immunodeficient hosts for cell lines derived from the primary tumors and ascites of ovarian cancer patients. These xenograft models have provided a valuable system for pre-clinical trials, however, the genetic complexity of human tumors has precluded the understanding of key events that drive metastatic dissemination. Recently developed immunocompetent, genetically defined mouse models of epithelial ovarian cancer represent significant improvements in the modeling of metastatic disease. PMID:19337569

  7. Addressing Hydro-economic Modeling Limitations - A Limited Foresight Sacramento Valley Model and an Open-source Modeling Platform

    NASA Astrophysics Data System (ADS)

    Harou, J. J.; Hansen, K. M.

    2008-12-01

    Increased scarcity of world water resources is inevitable given the limited supply and increased human pressures. The idea that "some scarcity is optimal" must be accepted for rational resource use and infrastructure management decisions to be made. Hydro-economic systems models are unique in representing the overlap of economic drivers, socio-political forces and distributed water resource systems. They demonstrate the tangible benefits of cooperation and integrated flexible system management. Further improvement of models, quality control practices and software will be needed for these academic policy tools to become accepted into mainstream water resource practice. Promising features include: calibration methods, limited foresight optimization formulations, linked simulation-optimization approaches (e.g. embedding pre-existing calibrated simulation models), spatial groundwater models, stream-aquifer interactions and stream routing, etc. Conventional user-friendly decision support systems helped spread simulation models on a massive scale. Hydro-economic models must also find a means to facilitate construction, distribution and use. Some of these issues and model features are illustrated with a hydro-economic optimization model of the Sacramento Valley. Carry-over storage value functions are used to limit the hydrologic foresight of the multi-period optimization model. Pumping costs are included in the formulation by tracking the regional piezometric head of groundwater sub-basins. To help build and maintain this type of network model, an open-source water management modeling software platform is described and initial project work is discussed. The objective is to generically facilitate the connection of models, such as those developed in a modeling environment (GAMS, MatLab, Octave, …), to a geographic user interface (drag-and-drop node-link network) and a database (topology, parameters and time series). These features aim to incrementally move hydro-economic models

  8. JEDI International Model | Jobs and Economic Development Impact Models |

    Science.gov Websites

The Jobs and Economic Development Impacts (JEDI) International Model allows users to estimate economic development impacts from international

  9. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  10. Seven challenges for metapopulation models of epidemics, including households models.

    PubMed

    Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo

    2015-03-01

This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches,...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly-coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They also include developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models.

  11. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
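The GM(1,1) half of such a hybrid is compact enough to sketch. Below is a minimal illustration of the standard grey-model fitting procedure (accumulate the series, estimate the coefficients (a, b) by least squares on the whitened equation, then invert the accumulation); the series and function name are hypothetical, and the ARIMA residual-correction stage of the proposed ARGM(1,1) combination is not shown.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a positive series x0 and forecast ahead.

    Steps: accumulated generating operation (AGO), least-squares estimate of
    (a, b) in x0[k] = -a*z1[k] + b, time-response function, inverse AGO.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # AGO: accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
    return np.diff(x1_hat, prepend=0.0)      # inverse AGO; element 0 is x0[0]

# Example on an invented, nearly exponential series
print(gm11_forecast([10.0, 11.2, 12.4, 13.9, 15.5], steps=2))
```

In the full ARGM(1,1) scheme, an ARIMA model would then be fitted to the residuals of this grey prediction.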

  12. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than to initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  13. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification

    PubMed Central

    Sager, Jennifer E.; Yu, Jingjing; Ragueneau-Majlessi, Isabelle

    2015-01-01

Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms “PBPK” and “physiologically based pharmacokinetic model” to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. PMID:26296709
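As an illustration of the physiological mass-balance bookkeeping that underlies PBPK models, here is a minimal, hypothetical three-compartment (blood, liver, rest-of-body) flow-limited model with hepatic clearance after an IV bolus. All parameter values are invented for demonstration and do not correspond to any model in the review.

```python
def pbpk_iv_bolus(dose_mg=100.0, t_end_h=24.0, dt=0.001):
    """Euler-integrate a toy flow-limited PBPK model; returns final
    concentrations (mg/L) in blood, liver, rest-of-body, and the
    cumulative amount eliminated (mg)."""
    # Hypothetical physiological parameters (illustrative only)
    Vb, Vl, Vr = 5.0, 1.8, 35.0      # compartment volumes, L
    Ql, Qr = 90.0, 210.0             # blood flows to liver / rest, L/h
    Kl, Kr = 2.0, 4.0                # tissue:blood partition coefficients
    CLint = 30.0                     # hepatic intrinsic clearance, L/h

    Cb, Cl, Cr = dose_mg / Vb, 0.0, 0.0
    eliminated = 0.0
    for _ in range(int(t_end_h / dt)):
        # Mass balances: venous return from tissues minus arterial outflow
        dCb = (Ql * Cl / Kl + Qr * Cr / Kr - (Ql + Qr) * Cb) / Vb
        dCl = (Ql * (Cb - Cl / Kl) - CLint * Cl / Kl) / Vl
        dCr = Qr * (Cb - Cr / Kr) / Vr
        eliminated += CLint * (Cl / Kl) * dt
        Cb += dCb * dt
        Cl += dCl * dt
        Cr += dCr * dt
    return Cb, Cl, Cr, eliminated
```

Because every term appears once as an outflow and once as an inflow (or as elimination), the dose is conserved exactly at each Euler step, which is a useful sanity check when building larger PBPK systems.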

  14. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, the air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model with the observed data. The framework is applied to simulated data as well as data obtained from experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show close concordance with the true values, indicating that the two-zone model assumptions agree with reality to a large extent and that the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of
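The deterministic core of the two-zone (near-field/far-field) model has simple steady-state limits, which the Bayesian airflow estimates are reported to approach. A sketch of those limits, with hypothetical generation-rate and airflow values:

```python
def two_zone_steady_state(G, Q, beta):
    """Steady-state concentrations (mg/m^3) of the standard two-zone model.

    G    : contaminant generation rate in the near field (mg/min)
    Q    : room supply/exhaust airflow (m^3/min)
    beta : interzonal airflow between near and far fields (m^3/min)
    """
    C_far = G / Q             # far field sees the whole room ventilation
    C_near = C_far + G / beta # near field adds the interzonal term
    return C_near, C_far

# Hypothetical values: 10 mg/min source, 2 m^3/min ventilation,
# 5 m^3/min interzonal airflow
print(two_zone_steady_state(10.0, 2.0, 5.0))
```

The near-field excess over the far field, G/beta, is what makes the interzonal airflow identifiable from paired near/far measurements.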

  15. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.

  16. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It is also shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
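The rotation equivalence for exploratory factor models can be demonstrated numerically: any orthogonal rotation of the loadings leaves the model-implied covariance unchanged, so the data cannot distinguish the rotated solutions. A small sketch (static case, with invented loadings, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 6, 2
L = rng.normal(size=(p, k))                 # factor loadings (hypothetical)
Psi = np.diag(rng.uniform(0.5, 1.0, p))     # unique (residual) variances

# Any orthogonal rotation R gives an observationally equivalent model,
# because (L R)(L R)^T = L (R R^T) L^T = L L^T.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

Sigma1 = L @ L.T + Psi                      # implied covariance, original
Sigma2 = (L @ R) @ (L @ R).T + Psi          # implied covariance, rotated
print(np.allclose(Sigma1, Sigma2))          # True
```

The dynamic case adds lagged loadings, but the same non-uniqueness argument is what generates the infinite set of equivalent solutions the abstract refers to.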

  17. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
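For constant suprathreshold input, the LIF firing rate is known in closed form, which gives a concrete feel for the reduction from spiking dynamics to a rate description. A sketch with assumed parameters (deterministic textbook case with no refractory period, not the noisy background-activity regime the paper analyzes):

```python
import numpy as np

def lif_rate(mu, tau=0.02, v_th=1.0, dt=1e-5, t_sim=5.0):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by a
    constant input mu (in threshold units), simulated by Euler stepping."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (mu - v) / tau      # membrane relaxation toward mu
        if v >= v_th:                 # threshold crossing: spike and reset
            v = 0.0
            spikes += 1
    return spikes / t_sim

mu = 1.5
simulated = lif_rate(mu)
# Closed-form interspike interval: T = tau * ln(mu / (mu - v_th))
analytic = 1.0 / (0.02 * np.log(mu / (mu - 1.0)))
print(simulated, analytic)
```

With noisy input, this deterministic rate is replaced by the stochastic transfer function, which is the quantity the LN reduction approximates.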

  19. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  20. IHY Modeling Support at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliospheric Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  1. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes, such as supply or demand of energy or of certain types of energy, emergency management of energy, and energy economics. Ten models are described. The purpose, use, and history of each model are discussed, and information is given on its outputs, inputs, and mathematical structure. The models include five dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  2. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  3. Modeling of batch sorber system: kinetic, mechanistic, and thermodynamic modeling

    NASA Astrophysics Data System (ADS)

    Mishra, Vishal

    2017-10-01

The present investigation deals with the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase. Various rate models were evaluated to elucidate the kinetics of copper and zinc biosorption, and the results indicated that the pseudo-second-order model was more appropriate than the pseudo-first-order model. The curve of the initial sorption rate versus the initial concentration of copper and zinc ions also supported the pseudo-second-order model. The models used for mechanistic modeling were the intra-particle model of pore diffusion and Bangham's model of film diffusion. The results of the mechanistic modeling, together with the values of pore and film diffusivities, indicated that the preferential mode of biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase was film diffusion. The results of the intra-particle model showed that the biosorption of copper and zinc ions was not dominated by pore diffusion, owing to macro-pores with open void spaces present on the surface of the egg-shell particles. The thermodynamic modeling confirmed that the sorption of copper and zinc was spontaneous and exothermic, with increased randomness at the solid-liquid interface.
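Pseudo-second-order fits of this kind are conventionally done on the linearised form t/q_t = 1/(k2*qe^2) + t/qe, whose slope and intercept give the equilibrium uptake qe and rate constant k2. A sketch with synthetic (invented) uptake data generated from the model itself:

```python
import numpy as np

# Hypothetical uptake data: contact time t (min) and sorbed amount q_t (mg/g)
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
qe_true, k2_true = 8.0, 0.01            # assumed equilibrium uptake, rate const
qt = (k2_true * qe_true**2 * t) / (1 + k2_true * qe_true * t)  # exact PSO curve

# Linearised pseudo-second-order form: t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                    # slope = 1/qe
k2_fit = 1.0 / (intercept * qe_fit**2)  # intercept = 1/(k2*qe^2)
print(qe_fit, k2_fit)                   # recovers qe_true and k2_true
```

The initial sorption rate mentioned in the abstract is h = k2*qe^2, i.e. the reciprocal of the fitted intercept.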

  4. SWIFT MODELLER: a Java based GUI for molecular modeling.

    PubMed

    Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S

    2011-10-01

MODELLER is command-line software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with. Visualization of its output is also cumbersome because of verbose files. This makes the whole software protocol complex and requires extensive study of the MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting, and data extraction processes and presents them interactively, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. SWIFT MODELLER eliminates the formatting of inputs, the scripting process, and the analysis of verbose output files through automation, making pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI; it opens and displays the Protein Data Bank files created by the MODELLER software. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required to use the software by automating many of the steps in the original protocol, saving an enormous amount of time per instance and making MODELLER very easy to work with.

  5. Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  6. Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread

    PubMed Central

    Miller, Joel C.; Volz, Erik M.

    2012-01-01

    We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
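The mass action SIR model that sits at the base of this hierarchy can be written in a few lines; a sketch (Euler integration, illustrative parameters):

```python
def sir(beta, gamma, i0=1e-3, t_end=160.0, dt=0.01):
    """Mass action SIR model in population fractions.

    beta: transmission rate, gamma: recovery rate; returns final
    (S, I, R) and the epidemic peak of I."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i = i
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i          # infections drain S
        di = beta * s * i - gamma * i
        dr = gamma * i              # recoveries fill R
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# Illustrative parameters giving R0 = beta/gamma = 3
print(sir(beta=0.3, gamma=0.1))
```

In the edge-based framework, this model is recovered in the limit where contact duration plays no role; the conditions under which that limit is accurate are exactly what the paper's hierarchy makes precise.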

  7. The Relationships between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    ERIC Educational Resources Information Center

    Mendonça, Paula Cristina Cardoso; Justi, Rosária

    2013-01-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…

  8. Multiscale Modeling of Structurally-Graded Materials Using Discrete Dislocation Plasticity Models and Continuum Crystal Plasticity Models

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.

    2012-01-01

    A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.

  9. Modeling influenza-like illnesses through composite compartmental models

    NASA Astrophysics Data System (ADS)

Levy, Nir; Iv, Michael; Yom-Tov, Elad

    2018-03-01

Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. Our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, and we show that they match independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.

  10. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.

  11. Agent-based modeling and systems dynamics model reproduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Macal, C. M.

    2009-01-01

    Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.

  12. National Transonic Facility model and model support vibration problems

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Popernack, Thomas G., Jr.; Gloss, Blair B.

    1990-01-01

    Vibrations of models and model support system were encountered during testing in the National Transonic Facility. Model support system yaw plane vibrations have resulted in model strain gage balance design load limits being reached. These high levels of vibrations resulted in limited aerodynamic testing for several wind tunnel models. The yaw vibration problem was the subject of an intensive experimental and analytical investigation which identified the primary source of the yaw excitation and resulted in attenuation of the yaw oscillations to acceptable levels. This paper presents the principal results of analyses and experimental investigation of the yaw plane vibration problems. Also, an overview of plans for development and installation of a permanent model system dynamic and aeroelastic response measurement and monitoring system for the National Transonic Facility is presented.

  13. An improved interfacial bonding model for material interface modeling

    PubMed Central

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. A study of the path dependence of the work of separation indicates that the separation work transitions smoothly between the normal and tangential directions, and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure; the error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Finally, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343
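As a hedged illustration of the attractive-repulsive behaviour such potential-based cohesive laws exhibit, a generic exponential traction-separation law (a textbook stand-in, not the authors' improved model) can be sketched as:

```python
import math

def normal_traction(delta, sigma_max, delta_c):
    """Generic exponential cohesive law: attraction for opening separations,
    increasingly stiff repulsion for interpenetration (delta < 0).
    sigma_max is the peak traction, reached at delta = delta_c."""
    return math.e * sigma_max * (delta / delta_c) * math.exp(-delta / delta_c)
```

The traction rises to sigma_max at delta = delta_c, softens smoothly beyond it, and turns repulsive (negative) for negative separations; the paper's model refines this kind of behaviour for polycrystalline interfaces.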

  14. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial
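The "fractional contribution of each parameter to the predictive uncertainty" can be illustrated with a first-order variance decomposition; this sketch is an assumption about the general form of such a calculation, not PEcAn code:

```python
def fractional_contributions(sensitivities, variances):
    """First-order variance shares: (df/dp_i)^2 * Var(p_i) / total.
    'sensitivities' are output derivatives w.r.t. each parameter,
    'variances' are the posterior parameter variances."""
    parts = [s * s * v for s, v in zip(sensitivities, variances)]
    total = sum(parts)
    return [p / total for p in parts]
```

A parameter with twice the sensitivity of another, at equal variance, accounts for four times the share of predictive variance, which is why the sensitivity and meta-analysis steps are combined before ranking research priorities.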

  15. Coupled atmosphere-biophysics-hydrology models for environmental modeling

    USGS Publications Warehouse

    Walko, R.L.; Band, L.E.; Baron, Jill S.; Kittel, T.G.F.; Lammers, R.; Lee, T.J.; Ojima, D.; Pielke, R.A.; Taylor, C.; Tague, C.; Tremback, C.J.; Vidale, P.L.

    2000-01-01

    The formulation and implementation of LEAF-2, the Land Ecosystem–Atmosphere Feedback model, which comprises the representation of land–surface processes in the Regional Atmospheric Modeling System (RAMS), is described. LEAF-2 is a prognostic model for the temperature and water content of soil, snow cover, vegetation, and canopy air, and includes turbulent and radiative exchanges between these components and with the atmosphere. Subdivision of a RAMS surface grid cell into multiple areas of distinct land-use types is allowed, with each subgrid area, or patch, containing its own LEAF-2 model, and each patch interacts with the overlying atmospheric column with a weight proportional to its fractional area in the grid cell. A description is also given of TOPMODEL, a land hydrology model that represents surface and subsurface downslope lateral transport of groundwater. Details of the incorporation of a modified form of TOPMODEL into LEAF-2 are presented. Sensitivity tests of the coupled system are presented that demonstrate the potential importance of the patch representation and of lateral water transport in idealized model simulations. Independent studies that have applied LEAF-2 and verified its performance against observational data are cited. Linkage of RAMS and TOPMODEL through LEAF-2 creates a modeling system that can be used to explore the coupled atmosphere–biophysical–hydrologic response to altered climate forcing at local watershed and regional basin scales.
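The patch scheme described above amounts to an area-weighted average of per-patch surface fluxes seen by the overlying atmospheric column; a minimal sketch (illustrative names, not the LEAF-2/RAMS interface):

```python
def grid_cell_flux(patch_fluxes, patch_fractions):
    """Combine per-patch fluxes into one grid-cell value, weighting each
    patch by its fractional area within the RAMS grid cell."""
    if abs(sum(patch_fractions) - 1.0) > 1e-9:
        raise ValueError("patch fractions must sum to 1")
    return sum(f * w for f, w in zip(patch_fluxes, patch_fractions))
```

For example, a cell that is 25% forest and 75% grassland reports a single column flux that blends the two patch fluxes in those proportions.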

  16. Modeling fractal cities using the correlated percolation model.

    NASA Astrophysics Data System (ADS)

    Makse, Hernán A.; Havlin, Shlomo; Stanley, H. Eugene

    1996-03-01

Cities grow in a way that might be expected to resemble the growth of two-dimensional aggregates of particles, and this has led to recent attempts to model urban growth using ideas from the statistical physics of clusters. In particular, the model of diffusion limited aggregation (DLA) has been invoked to rationalize the apparently fractal nature of urban morphologies (M. Batty and P. Longley, Fractal Cities, Academic, San Diego, 1994). The DLA model predicts that there should exist only one large fractal cluster, which is almost perfectly screened from incoming 'development units' (representing, for example, people, capital or resources), so that almost all of the cluster growth takes place at the tips of the cluster's branches. We show that an alternative model (H. A. Makse, S. Havlin, and H. E. Stanley, Nature 377, 608 (1995)), in which development units are correlated rather than being added to the cluster at random, is better able to reproduce the observed morphology of cities and the area distribution of sub-clusters ('towns') in an urban system, and can also describe urban growth dynamics. Our physical model, which corresponds to the correlated percolation model in the presence of a density gradient, is motivated by the fact that in urban areas development attracts further development. The model offers the possibility of predicting the global properties (such as scaling behavior) of urban morphologies.
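A deliberately simplified sketch of percolation in the presence of a density gradient (site occupancy probability decaying with distance from the urban center) conveys the flavor of the approach; the published model additionally uses spatially correlated development units, which are omitted here for brevity:

```python
import math
import random

def gradient_percolation(size, lam, seed=0):
    """Return a size x size boolean grid where each site at distance r from
    the center is occupied with probability exp(-r / lam). 'lam' sets the
    density-gradient length scale (an illustrative parameter)."""
    rng = random.Random(seed)
    c = size // 2
    grid = [[False] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            r = math.hypot(i - c, j - c)
            grid[i][j] = rng.random() < math.exp(-r / lam)
    return grid
```

Occupied clusters near the frontier of such a gradient fragment into "towns" whose area distribution is the quantity compared against census data in the paper.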

  17. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    ERIC Educational Resources Information Center

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  18. Phoenix model

    EPA Science Inventory

    Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...

  19. BioModels Database: a repository of mathematical models of biological processes.

    PubMed

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

BioModels Database is a public online resource for storing and sharing published, peer-reviewed, quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked for correspondence with the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies and linked to relevant external data resources, which greatly helps model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in several other common formats (BioPAX, Octave, SciLab, VCML, XPP and PDF) in addition to SBML. The reaction network diagrams of the models are also available in several formats. BioModels Database features a search engine providing both simple and advanced searches, as well as online simulation and the creation of smaller models (submodels) from selected elements of a larger one. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in regular releases, about every 4 months.

  20. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

Service-oriented computing provides an opportunity to couple web-service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environments, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models exposed as web services, (2) model metadata describing the external features of each model (e.g., variable names, units, computational grid) and (3) a model integration framework. We present an architecture for coupling self-describing web-service models using a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing: their metadata can be uncovered through BMI functions. After a BMI-enabled model is deployed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web-service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interfaces with BMI and by providing utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised framework can initialize, execute and resolve the dependencies of BMI-enabled web-service models. Using the revised EMELI, an example is presented that integrates a set of TopoFlow model components which are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014). EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models. Proceedings of HIC 2014.
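The BMI lifecycle the abstract relies on (initialize, update, get_value, finalize) can be sketched with a toy component; this is an illustrative stand-in, not the CSDMS BMI specification or EMELI code:

```python
# A trivial "model" exposing a BMI-like control-and-getter interface for a
# single variable. A real BMI implementation also exposes metadata calls
# (variable names, units, grid) that make the model self-describing.

class ToyBmiModel:
    def initialize(self, config=None):
        self.time = 0.0
        self.storage = (config or {}).get("initial_storage", 10.0)

    def update(self):
        self.storage *= 0.9   # toy linear-reservoir decay per time step
        self.time += 1.0

    def get_value(self, name):
        if name != "storage":
            raise KeyError(name)
        return self.storage

    def finalize(self):
        pass                  # release resources in a real component
```

A framework like EMELI never needs to know what is inside `update`; it only drives the standardized interface, which is what makes web-service deployment of such components feasible.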

  1. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- a mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) The NASA OD Evolutionary Model -- a physical model capable of predicting the future debris environment based on user-specified scenarios; (3) The NASA Standard Satellite Breakup Model -- a model describing the outcome of a satellite breakup (explosion or collision).

  2. Combining abiotic and biotic models - Hydraulical modeling to fill the gap between catchment and hydro-dynamic models

    NASA Astrophysics Data System (ADS)

    Guse, B.; Sulc, D.; Schmalz, B.; Fohrer, N.

    2012-04-01

The European Water Framework Directive (WFD) requires a catchment-based approach, which is assessed in the IMPACT project by combining abiotic and biotic models. The core of IMPACT is a model chain (catchment model -> 1-D hydraulic model -> 3-D hydro-morphodynamic model -> biotic habitat model) with the aim of estimating the occurrence of the target species of the WFD. Firstly, the model chain is developed for the current land use and climate conditions. Secondly, land use and climate change scenarios are developed at the catchment scale. The outputs of the catchment model for the scenarios are used as input for the next models within the model chain to estimate the effect of these changes on the target species. The eco-hydrological catchment model SWAT is applied to the Treene catchment in Northern Germany and delivers discharge and water quality parameters as spatially explicit output for each subbasin. SWAT provides no water level information. However, water level values are needed as the lower boundary condition for the hydro-dynamic and habitat models, which are applied to the 300 m candidate reference reach. In order to fill the gap between the catchment and the hydro-morphodynamic model, the 1-D hydraulic model HEC-RAS is applied to a 3 km long reach transect from the next upstream hydrological station to the upper bound of the candidate study reach. The channel geometry for HEC-RAS was estimated from 96 cross-sections which were measured in the IMPACT project. Using available discharge and water level measurements from the hydrological station and our own flow velocity measurements, the channel resistance was estimated. HEC-RAS was run with different statistical indices (mean annual drought, mean discharge, …) for steady flow conditions. The rating curve was then constructed for the target cross-section, i.e. the lower bound of the candidate study reach, to enable the coupling with the hydro- and morphodynamic models. These statistical
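Constructing a rating curve for a cross-section, as done here with HEC-RAS, pairs water levels with steady-flow discharges; a minimal sketch using Manning's equation for an idealized rectangular channel (an illustration, not the HEC-RAS solver, which handles irregular surveyed geometry):

```python
import math

def manning_discharge(depth, width, slope, n):
    """Steady uniform flow in a rectangular channel (SI units).
    n is Manning's roughness coefficient, slope the energy slope."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    r = area / wetted_perimeter          # hydraulic radius
    return (1.0 / n) * area * r ** (2.0 / 3.0) * math.sqrt(slope)

def rating_curve(depths, width, slope, n):
    """Tabulate (stage, discharge) pairs, i.e. a simple rating curve."""
    return [(h, manning_discharge(h, width, slope, n)) for h in depths]
```

Inverting such a table (discharge in, stage out) is exactly how catchment-model output like SWAT's can be translated into the water-level boundary condition the downstream models need.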

  3. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  4. Model verification of large structural systems. [space shuttle model response

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1978-01-01

A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.

  5. Modeling Operations Other Than War: Non-Combatants in Combat Modeling

    DTIC Science & Technology

    1994-09-01

supposition that non-combatants are an essential feature in OOTW. The model proposal includes a methodology for civilian unit decision making. This model also includes ... A numerical example demonstrated that the model appeared to perform in an acceptable manner, in that it produced output within a reasonable range. During the

  6. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, James C.

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  7. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
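The nonlinear least-squares regression at the heart of such inverse models can be sketched with a plain Gauss-Newton loop and a numerical Jacobian; the `model` argument below is a stand-in for a ground-water simulation, and the sketch omits the weighting, damping, and convergence diagnostics a production calibration code would need:

```python
def solve(A, b):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gauss_newton(model, p0, xs, ys, iters=20, h=1e-6):
    """Fit parameters p so that model(x, p) matches observations ys."""
    p = list(p0)
    for _ in range(iters):
        resid = [y - model(x, p) for x, y in zip(xs, ys)]
        # forward-difference Jacobian: J[i][k] = d model(xs[i], p) / d p[k]
        J = [[(model(x, [pj + (h if j == k else 0.0)
                         for j, pj in enumerate(p)]) - model(x, p)) / h
              for k in range(len(p))] for x in xs]
        JtJ = [[sum(J[i][a] * J[i][b] for i in range(len(xs)))
                for b in range(len(p))] for a in range(len(p))]
        Jtr = [sum(J[i][a] * resid[i] for i in range(len(xs)))
               for a in range(len(p))]
        p = [pi + si for pi, si in zip(p, solve(JtJ, Jtr))]
    return p
```

The normal-equations matrix JtJ also carries the calibration diagnostics the abstract lists: its inverse scales the parameter confidence limits, and near-singular rows flag data shortcomings.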

  8. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    NASA Astrophysics Data System (ADS)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally
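Categorical evaluation of ash forecasts commonly uses the figure of merit in space, the overlap of observed and modelled ash footprints; whether this exact score is among those used in the paper is not stated here, so treat this as a generic sketch with gridded footprints represented as cell sets:

```python
def figure_of_merit_in_space(obs_cells, model_cells):
    """FMS: intersection over union of two ash-cloud footprints, in [0, 1].
    A value of 1 means perfect spatial agreement, 0 means no overlap."""
    obs, mod = set(obs_cells), set(model_cells)
    union = obs | mod
    return len(obs & mod) / len(union) if union else 1.0
```

An offline forecast "failing to reproduce up to 45-70 % of the ash cloud of an online forecast" corresponds to scores of this kind dropping well below 1 when the two footprints are compared.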

  9. Pharmacokinetic modeling in aquatic animals. 1. Models and concepts

    USGS Publications Warehouse

    Barron, M.G.; Stehly, Guy R.; Hayton, W.L.

    1990-01-01

While clinical and toxicological applications of pharmacokinetics have continued to evolve both conceptually and experimentally, pharmacokinetic modeling in aquatic animals has not progressed accordingly. In this paper we present methods and concepts of pharmacokinetic modeling in aquatic animals using multicompartmental, clearance-based, non-compartmental and physiologically based pharmacokinetic models. These models should be considered as alternatives to traditional approaches, which assume that the animal acts as a single homogeneous compartment based on apparent monoexponential elimination.
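The contrast the authors draw between monoexponential (single-compartment) and multicompartmental behaviour can be sketched directly; parameter names and units below are illustrative:

```python
import math

def one_compartment(dose, vd, ke, t):
    """Monoexponential plasma concentration after an IV bolus:
    dose / volume of distribution, decaying at elimination rate ke."""
    return (dose / vd) * math.exp(-ke * t)

def two_compartment(A, alpha, B, beta, t):
    """Biexponential disposition: a fast distribution phase (alpha)
    superimposed on a slower elimination phase (beta)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)
```

Fitting the one-compartment curve to data that are really biexponential is exactly the "single homogeneous compartment" simplification the abstract argues against.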

  10. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
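The core idea of bootstrap model averaging (refit candidate models on resamples, select by AIC in each resample, and average the effect estimate) can be sketched as follows; this is a simplified illustration with two toy candidate models, not the authors' BOOT or double BOOT procedure:

```python
import math
import random

def ols(pts):
    """Least squares for y = a + b*x; returns (a, b, residual sum of squares)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    b = sum((x - mx) * (y - my) for x, y in pts) / sxx
    a = my - b * mx
    rss = sum((y - a - b * x) ** 2 for x, y in pts)
    return a, b, rss

def aic(rss, n, k):
    # small constant guards against log(0) on a perfect fit
    return n * math.log(rss / n + 1e-300) + 2 * k

def boot_effect(pts, reps=100, seed=0):
    """Average the slope ('effect') over bootstrap resamples, using the
    AIC-best of {intercept-only, linear} model within each resample."""
    rng = random.Random(seed)
    n, effects = len(pts), []
    for _ in range(reps):
        s = [pts[rng.randrange(n)] for _ in range(n)]
        if len({x for x, _ in s}) < 2:
            continue  # slope undefined when all resampled x coincide
        my = sum(y for _, y in s) / n
        rss0 = sum((y - my) ** 2 for _, y in s)
        _, b, rss1 = ols(s)
        effects.append(b if aic(rss1, n, 2) < aic(rss0, n, 1) else 0.0)
    return sum(effects) / len(effects)
```

Averaging over resamples, rather than trusting the single AIC-best model on the original data, is what propagates model-selection uncertainty into the final effect estimate.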

  11. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
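Failure-effect propagation of the kind an FFM encodes can be sketched as reachability in a directed graph from a failure mode to observation points; this is illustrative only, as real FFM tools carry much richer qualitative semantics:

```python
from collections import deque

def affected_observations(graph, failure_mode, observations):
    """Breadth-first search along propagation edges: which observation
    points can see an effect of the given failure mode? Node names are
    hypothetical examples, not from any NASA model."""
    seen, queue = {failure_mode}, deque([failure_mode])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & set(observations))
```

Diagnosis inverts this map: given the set of triggered observations, candidate failure modes are those whose reachable sets match, which is why consistent, generically built component models matter for diagnostic performance.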

  12. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    NASA Technical Reports Server (NTRS)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

The goal of GAIM is: to advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts (such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes); to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  13. REGIONAL PARTICULATE MODEL - 1. MODEL DESCRIPTION AND PRELIMINARY RESULTS

    EPA Science Inventory

    The gas-phase chemistry and transport mechanisms of the Regional Acid Deposition Model have been modified to create the Regional Particulate Model, a three-dimensional Eulerian model that simulates the chemistry, transport, and dynamics of sulfuric acid aerosol resulting from pri...

  14. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research... This paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might better be called model-exercise-model in some cases) for... to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning

  15. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  16. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    NASA Astrophysics Data System (ADS)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet, whether in emission, scattering, or transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or "checking in" new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  17. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
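The "algorithms based on light and temperature" are presumably of the Guenther et al. (1995) form widely used in this era of isoprene modeling; a sketch using commonly quoted coefficient values (treat both the functional form and the constants as illustrative assumptions rather than this paper's exact parameterization):

```python
import math

R = 8.314                      # J mol-1 K-1
ALPHA, CL1 = 0.0027, 1.066     # light-response coefficients (assumed values)
CT1, CT2 = 95000.0, 230000.0   # J mol-1 (assumed values)
TM, TS = 314.0, 303.0          # K: response peak and standard temperature

def light_factor(par):
    """C_L: response to photosynthetically active radiation (umol m-2 s-1);
    saturates near 1 at high light."""
    return ALPHA * CL1 * par / math.sqrt(1.0 + ALPHA ** 2 * par ** 2)

def temp_factor(t):
    """C_T: response to leaf temperature (K); rises with warmth and is
    capped by the denominator above roughly TM."""
    num = math.exp(CT1 * (t - TS) / (R * TS * t))
    den = 1.0 + math.exp(CT2 * (t - TM) / (R * TS * t))
    return num / den

def isoprene_flux(epsilon, foliar_density, par, t):
    """Emission = emission factor x foliar density x light x temperature."""
    return epsilon * foliar_density * light_factor(par) * temp_factor(t)
```

At standard conditions (PAR near 1000 umol m-2 s-1, leaf temperature near 303 K) both factors are close to 1, so the emission factor epsilon and the biomass term carry the ecosystem-specific information the abstract emphasizes.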

  18. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to those of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models were assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. In case of improper mocking-up of the glazing for instance, relative divergences of 25

  19. The Trimeric Model: A New Model of Periodontal Treatment Planning

    PubMed Central

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally. We then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order of the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  20. Modelling total solar irradiance using a flux transport model

    NASA Astrophysics Data System (ADS)

    Dasi Espuig, Maria; Jiang, Jie; Krivova, Natalie; Solanki, Sami

    2014-05-01

    Reconstructions of solar irradiance into the past are of considerable interest for studies of solar influence on climate. Models based on the assumption that irradiance changes are caused by the evolution of the photospheric magnetic field have been the most successful in reproducing the measured irradiance variations. Our SATIRE-S model is one of these. It uses solar full-disc magnetograms as an input, and these are available for less than four decades. Thus, to reconstruct the irradiance back to times when no observed magnetograms are available, we combine the SATIRE-S model with synthetic magnetograms produced using a surface flux transport model. The model is fed with daily records of sunspot positions, areas, and tilt angles, either observed or statistically modelled. To describe the secular change in the irradiance, we use the concept of overlapping ephemeral region cycles. With this technique TSI can be reconstructed back to 1700.

  1. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  2. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
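
The contrast between dominance and ideal point response functions can be made concrete with a short sketch. The 2PLM below is the standard form; the single-peaked curve is a simplified stand-in for an ideal point model, not the full GGUM (which includes category thresholds omitted here).

```python
import math

def p_2pl(theta, a, b):
    """Dominance (2PL) item response: monotone increasing in theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_ideal_point(theta, delta, tau=1.0):
    """Simplified single-peaked ideal-point response (not the full GGUM):
    endorsement falls off with squared distance from the item location."""
    return math.exp(-((theta - delta) ** 2) / (2.0 * tau ** 2))

# Dominance: endorsement probability keeps rising with theta.
print(p_2pl(3.0, 1.5, 0.0) > p_2pl(1.0, 1.5, 0.0))        # True
# Ideal point: probability peaks at theta = delta and declines beyond it.
print(p_ideal_point(3.0, 0.0) < p_ideal_point(0.0, 0.0))   # True
```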

  3. Evaluating Conceptual Site Models with Multicomponent Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Heffner, D.; Price, V.; Temples, T. J.; Nicholson, T. J.

    2005-05-01

    Modeling ground-water flow and multicomponent reactive chemical transport is a useful approach for testing conceptual site models and assessing the design of monitoring networks. A graded approach with three conceptual site models is presented here with a field case of tetrachloroethene (PCE) transport and biodegradation near Charleston, SC. The first model assumed a one-layer homogeneous aquifer structure with semi-infinite boundary conditions, in which an analytical solution of the reactive solute transport can be obtained with BIOCHLOR (Aziz et al., 1999). Due to the over-simplification of the aquifer structure, this simulation cannot reproduce the monitoring data. In the second approach we used GMS to develop the conceptual site model, a layer-cake multi-aquifer system, and applied a numerical module (MODFLOW and RT3D within GMS) to solve the flow and reactive transport problem. The results were better than the first approach but still did not fit the plume well because the geological structures were still inadequately defined. In the third approach we developed a complex conceptual site model by interpreting log and seismic survey data with Petra and PetraSeis. We detected a major channel and a younger channel cutting through the PCE source area. These channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Results using the third conceptual site model agree well with the monitoring concentration data. This study confirms that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004). Numerical modeling in this case provides key insight into the hydrogeology and geochemistry of the field site for predicting contaminant transport in the future. Finally, critical monitoring points and performance indicator parameters are selected for future monitoring to confirm system
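
Screening tools such as BIOCHLOR build on closed-form solutions of reactive transport. A heavily simplified 1-D example (dispersion neglected, a single species rather than the full PCE dechlorination chain, and all parameter values illustrative) is:

```python
import math

def steady_plume_conc(c0, k, v, x):
    """Steady-state concentration of a solute undergoing first-order
    decay along a 1-D flow path, with dispersion neglected (a
    screening-level simplification in the spirit of analytical tools
    like BIOCHLOR):
        C(x) = C0 * exp(-k * x / v)
    c0: source concentration (ug/L), k: first-order decay rate (1/d),
    v: seepage velocity (m/d), x: distance downgradient (m)."""
    return c0 * math.exp(-k * x / v)

# Illustrative numbers: 1000 ug/L source, k = 0.002/d, v = 0.1 m/d.
c = steady_plume_conc(c0=1000.0, k=0.002, v=0.1, x=100.0)
print(round(c, 1))   # 1000 * exp(-2) ≈ 135.3 ug/L
```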

  4. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  5. Selected aspects of modelling monetary transmission mechanism by BVAR model

    NASA Astrophysics Data System (ADS)

    Vaněk, Tomáš; Dobešová, Anna; Hampel, David

    2013-10-01

    In this paper we use the BVAR model with the specifically defined prior to evaluate data including high-lag dependencies. The results are compared to both restricted and common VAR model. The data depicts the monetary transmission mechanism in the Czech Republic and Slovakia from January 2002 to February 2013. The results point to the inadequacy of the common VAR model. The restricted VAR model and the BVAR model appear to be similar in the sense of impulse responses.
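
The Bayesian shrinkage at the heart of a BVAR can be illustrated with a toy conjugate Gaussian prior on autoregressive coefficients. This is a stand-in for the specifically defined prior of the paper, with unit noise variance assumed and all values below illustrative.

```python
import numpy as np

def bayes_lr_posterior_mean(X, y, prior_mean, prior_precision):
    """Posterior mean of regression coefficients under a conjugate
    Gaussian prior N(prior_mean, (1/prior_precision) * I), unit noise
    variance assumed; a toy stand-in for a Minnesota-style BVAR prior."""
    k = X.shape[1]
    A = X.T @ X + prior_precision * np.eye(k)
    b = X.T @ y + prior_precision * prior_mean
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
# Simulate an AR(2) process and fit its lags with shrinkage toward zero.
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal(scale=0.1)
X = np.column_stack([y[1:-1], y[:-2]])          # lag-1 and lag-2 regressors
coef = bayes_lr_posterior_mean(X, y[2:], np.zeros(2), prior_precision=1.0)
print(coef)   # roughly recovers the true lags, shrunk toward the prior mean
```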

  6. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    PubMed Central

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232
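
As an example of the empirical light-response modelling mentioned above, the non-rectangular hyperbola is a common leaf-level form in crop models. Parameter values here are illustrative, not taken from the review.

```python
import math

def leaf_photosynthesis(par, alpha=0.05, a_max=25.0, theta=0.7):
    """Empirical non-rectangular hyperbola light-response curve:
    alpha = initial quantum yield, a_max = light-saturated rate,
    theta = curvature; par in umol m-2 s-1. Returns the lower root of
    theta*A^2 - (alpha*I + a_max)*A + alpha*I*a_max = 0."""
    b = alpha * par + a_max
    return (b - math.sqrt(b * b - 4.0 * theta * alpha * par * a_max)) / (2.0 * theta)

# Assimilation saturates with light: doubling PAR from 500 to 1000
# gains much less than the first 500 did, and never exceeds a_max.
print(leaf_photosynthesis(500.0), leaf_photosynthesis(1000.0))
```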

  8. EPA EXPOSURE MODELS LIBRARY AND INTEGRATED MODEL EVALUATION SYSTEM

    EPA Science Inventory

    The third edition of the U.S. Environmental Protection Agency's (EPA) EML/IMES (Exposure Models Library and Integrated Model Evaluation System) on CD-ROM is now available. The purpose of the disc is to provide a compact and efficient means to distribute exposure models, documentat...

  9. Modelling the Shuttle Remote Manipulator System: Another flexible model

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1993-01-01

    High fidelity elastic system modeling algorithms are discussed. The particular system studied is the Space Shuttle Remote Manipulator System (RMS) undergoing full articulated motion. The model incorporates flexibility via a methodology the author has been developing. The technique is based in variational principles, so rigorous boundary condition generation and weak formulations for the associated partial differential equations are realized, yet the analyst need not integrate by parts. The methodology is formulated using vector-dyad notation with minimal use of tensor notation; therefore, the technique is believed to be accessible to practicing engineers. The objectives of this work are as follows: (1) determine the efficacy of the modeling method; and (2) determine if the method affords an analyst advantages in the overall modeling and simulation task. Generated out of necessity were Mathematica algorithms that quasi-automate the modeling procedure and simulation development. The project was divided into sections as follows: (1) model development of a simplified manipulator; (2) model development of the full-freedom RMS including a flexible movable base on a six degree of freedom orbiter (a rigid-body is attached to the manipulator end-effector); (3) simulation development for item 2; and (4) comparison to the currently used model of the flexible RMS in the Structures and Mechanics Division of NASA JSC. At the time of the writing of this report, items 3 and 4 above were not complete.

  10. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
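
For reference, the classic 2SLS estimator that 2SBMA extends can be sketched in a few lines. The data-generating process below is a made-up illustration of an endogenous regressor with a valid instrument, not the development-accounting application.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Classic 2SLS: the first stage projects the regressors X onto the
    instruments Z; the second stage regresses y on the fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                 # instrument: exogenous
u = rng.normal(size=n)                 # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)   # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)   # structural equation, true beta = 2
Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]
beta_2sls = two_stage_least_squares(y, X, Z)[1]
# OLS is biased upward by the confounder; 2SLS recovers a slope near 2.
print(beta_ols, beta_2sls)
```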

  11. Generic magnetohydrodynamic model at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.; Rastaetter, L.; Glocer, A.

    2016-12-01

    The Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center is a multi-agency partnership to enable, support and perform research and development for next-generation space science and space weather models. CCMC currently hosts nearly 100 numerical models and a cornerstone of this activity is the Runs on Request (RoR) system which allows anyone to request a model run and analyse/visualize the results via a web browser. CCMC is also active in the education community by organizing student research contests, heliophysics summer schools, and space weather forecaster training for students, government and industry representatives. Recently a generic magnetohydrodynamic (MHD) model was added to the CCMC RoR system which allows the study of a variety of fluid and plasma phenomena in one, two and three dimensions using a dynamic point-and-click web interface. For example, students can experiment with the physics of fundamental wave modes of hydrodynamic and MHD theory, the behavior of discontinuities and shocks, as well as instabilities such as Kelvin-Helmholtz. Students can also use the model to experiment with the numerical effects of models, i.e., how the process of discretizing a system of equations and solving them on a computer changes the solution. This can provide valuable background understanding, e.g., for space weather forecasters on the effects of model resolution, numerical resistivity, etc. on the prediction.
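
The fundamental MHD wave modes mentioned above have closed-form phase speeds that such a model lets students explore. A minimal sketch using the standard linear-MHD dispersion relations (values illustrative):

```python
import math

def mhd_wave_speeds(c_s, v_a, theta):
    """Phase speeds of the three linear MHD wave modes for sound speed
    c_s, Alfven speed v_a, and angle theta (radians) between the
    wavevector k and the background magnetic field B."""
    alfven = v_a * abs(math.cos(theta))
    s = c_s**2 + v_a**2
    d = math.sqrt(s**2 - 4.0 * c_s**2 * v_a**2 * math.cos(theta)**2)
    fast = math.sqrt(0.5 * (s + d))
    slow = math.sqrt(0.5 * (s - d))
    return fast, alfven, slow

# Parallel propagation: the fast/slow pair degenerates to max/min of (c_s, v_a).
fast, alf, slow = mhd_wave_speeds(c_s=1.0, v_a=2.0, theta=0.0)
print(fast, alf, slow)   # 2.0 2.0 1.0
```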

  12. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  13. Constraints Modeling in FRBR Data Model Using OCL

    NASA Astrophysics Data System (ADS)

    Rudić, Gordana

    2011-09-01

    Transformation of the conceptual FRBR data model to the class diagram in UML 2.0 notation is given. The class diagram is formed using the MagicDraw CASE tool. The paper presents a class diagram for the first group of FRBR entities, i.e. classes (the product of intellectual or artistic endeavour). It is demonstrated how to model constraints over relationships between classes in the FRBR object data model using OCL 2.0.

  14. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  15. The CAFE model: A net production model for global ocean phytoplankton

    NASA Astrophysics Data System (ADS)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR), and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
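
The core arithmetic stated for the CAFE model (NPP as the product of absorbed energy and conversion efficiency, with μ as NPP normalized to phytoplankton carbon) can be sketched directly; the numbers below are purely illustrative, not values from the paper.

```python
def cafe_npp(q_par, phi_mu):
    """NPP as the product of absorbed energy (QPAR) and the efficiency
    (phi_mu) with which absorbed energy becomes carbon biomass, as
    stated for the CAFE framework."""
    return q_par * phi_mu

def division_rate(npp, c_phyto):
    """Growth rate mu = NPP normalized to phytoplankton carbon CPhyto."""
    return npp / c_phyto

# Illustrative inputs only (units left abstract here).
npp = cafe_npp(q_par=10.0, phi_mu=0.05)
mu = division_rate(npp, c_phyto=1.5)
print(npp, mu)
```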

  16. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
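
A data-driven surrogate in the sense described above can be sketched in a few lines: a cheap emulator is fitted to a handful of runs of an "expensive" model. The expensive model here is a made-up analytic stand-in, not an actual groundwater code.

```python
import numpy as np

# Stand-in for an expensive simulation: one input parameter, one output.
def expensive_model(x):
    return np.exp(-0.5 * x) * np.sin(2.0 * x)

# Data-driven surrogate: fit a cheap polynomial to a few model runs,
# capturing the input-output mapping without the governing equations.
x_train = np.linspace(0.0, 3.0, 15)
y_train = expensive_model(x_train)
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

# The surrogate can now be evaluated thousands of times at trivial cost,
# e.g. inside a calibration or uncertainty-analysis loop.
x_test = np.linspace(0.0, 3.0, 200)
err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
print(err)   # small approximation error across the training range
```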

  17. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  18. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel-Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  19. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flowfield center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
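
For a flow-field center fixed at the target surface, Maxwell's Z model predicts a simple relation between Z and the ejection angle above the surface, tan(θ) = Z − 2. The sketch below assumes that surface-centered geometry, which is precisely the assumption the buried, migrating-center results above call into question.

```python
import math

def ejection_angle_deg(z):
    """Ejection angle above the target surface predicted by Maxwell's
    Z model for a flow-field center located at the surface:
        tan(theta) = Z - 2.
    A buried or migrating flow-field center changes this geometry."""
    return math.degrees(math.atan(z - 2.0))

# The often-quoted Z = 3 case gives 45-degree ejection angles;
# larger Z steepens the ejecta curtain.
print(round(ejection_angle_deg(3.0), 6), round(ejection_angle_deg(4.0), 6))
```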

  20. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.

  1. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
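
The aggregation issue discussed above can be illustrated by disaggregating a one-stock depletion model into agents. This toy example (rates are illustrative, not from the report) compares the deterministic aggregate trajectory against one stochastic agent-level realization.

```python
import numpy as np

rng = np.random.default_rng(42)
LEAVE_RATE, STEPS, N0 = 0.1, 20, 1000

# Aggregate (system dynamics) view: deterministic stock depletion.
stock = N0 * (1.0 - LEAVE_RATE) ** STEPS

# Disaggregated (agent-based) view: each agent leaves independently with
# the same per-step probability, so outcomes are stochastic.
agents = np.ones(N0, dtype=bool)
for _ in range(STEPS):
    agents &= rng.random(N0) >= LEAVE_RATE
remaining = int(agents.sum())

# The ABM mean matches the SD trajectory; individual runs scatter
# binomially around it, which the aggregate model cannot represent.
print(stock, remaining)
```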

  2. Forest-fire models

    Treesearch

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  3. SPITFIRE within the MPI Earth system model: Model development and evaluation

    NASA Astrophysics Data System (ADS)

    Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia

    2014-09-01

    Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire-relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of the fire spread to wind speed was too strong for application at the global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that the model is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing. Adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns.

  4. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  5. Measurement Model Specification Error in LISREL Structural Equation Models.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  6. Radiation Models

    ERIC Educational Resources Information Center

    James, W. G. G.

    1970-01-01

    Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

  7. Agent-based modeling: case study in cleavage furrow models.

    PubMed

    Mogilner, Alex; Manhart, Angelika

    2016-11-07

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, the mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem, the positioning of the cleavage furrow in dividing cells, to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches.

  8. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
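The spectrum calculations described above can be sketched numerically. The following is a minimal illustration with hypothetical signal parameters (not the actual CCS down-link): an amplitude-modulated carrier yields a spectral line at the carrier frequency plus one sideband on each side, which a discrete Fourier transform recovers directly.

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz (assumed for illustration)
t = np.arange(0, 1.0, 1.0 / fs)
fc, fm, m = 100.0, 10.0, 0.5     # carrier and modulating frequencies, modulation index

# Amplitude-modulated signal: a carrier line at fc plus sidebands at fc +/- fm
s = (1.0 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(s)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

# The three dominant spectral lines are the carrier and the two sidebands
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-3:]])
print(peaks)  # -> [90.0, 100.0, 110.0]
```

Because the tone frequencies fall exactly on FFT bins for a one-second window, the three lines appear without spectral leakage.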

  9. Modeling and control design of a wind tunnel model support

    NASA Technical Reports Server (NTRS)

    Howe, David A.

    1990-01-01

    The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.

  10. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular forms of parametric regression model, in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared with the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge of the model and then illustrates how to fit it with R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative way to fit the Weibull regression model, and its check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient approach to model development. Visualizing the Weibull regression model after model development is worthwhile, as it provides another way to report findings. PMID:28149846
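As a language-neutral sketch of the core idea, maximum-likelihood fitting of a Weibull survival model, here is a Python analogue of what the R packages above automate. The data are simulated and uncensored, so this deliberately omits the censoring machinery that SurvRegCensCov and eha handle.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
# Simulated uncensored event times from a Weibull(shape=2, scale=5) distribution
times = rng.weibull(2.0, size=2000) * 5.0

def neg_log_lik(params):
    k, lam = params                      # shape, scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = times / lam
    # Weibull log-density: log k - log lam + (k-1) log z - z^k
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z ** k)

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
k_hat, lam_hat = fit.x
print(k_hat, lam_hat)  # estimates close to the true (2.0, 5.0)
```

With censoring, the log-likelihood would gain a log-survival term for censored observations, which is exactly what survival-regression packages add.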

  11. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WO13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.

  12. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
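The AIC model weights discussed above are straightforward to compute; the sketch below uses made-up AIC values for three candidate models. The abstract's Bayesian point is that these weights can be read as model probabilities only under additional assumptions (equal prior weights and the model set containing the data-generating mechanism).

```python
import numpy as np

aic = np.array([100.0, 102.0, 110.0])   # hypothetical AIC values for three models

delta = aic - aic.min()                 # AIC differences from the best model
w = np.exp(-0.5 * delta)
w /= w.sum()                            # Akaike weights: nonnegative and sum to one
print(np.round(w, 3))                   # -> [0.727 0.268 0.005]
```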

  13. Pseudo-Boltzmann model for modeling the junctionless transistors

    NASA Astrophysics Data System (ADS)

    Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.

    2014-05-01

    Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities for high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
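The over-estimation mentioned above is easy to see numerically. The sketch below compares the Fermi-Dirac integral of order 1/2 with its Boltzmann limit; this is a generic illustration, not the paper's Pseudo-Boltzmann approximation. For a reduced Fermi level eta well below zero the two agree, while for degenerate doping (large positive eta) the Boltzmann form overshoots by an order of magnitude.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def fermi_dirac_half(eta):
    # F_{1/2}(eta) = integral_0^inf sqrt(x) / (1 + exp(x - eta)) dx
    # expit(eta - x) is a numerically stable 1 / (1 + exp(x - eta))
    val, _ = quad(lambda x: np.sqrt(x) * expit(eta - x), 0, np.inf)
    return val

def boltzmann_half(eta):
    # Boltzmann limit of the same integral: Gamma(3/2) * exp(eta)
    return 0.5 * np.sqrt(np.pi) * np.exp(eta)

for eta in (-5.0, 0.0, 5.0):
    ratio = boltzmann_half(eta) / fermi_dirac_half(eta)
    print(eta, ratio)  # ratio grows with eta: Boltzmann over-estimates carriers
```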

  14. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
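A toy version of the assembly idea, under obvious simplifications (a scalar function standing in for the flow solver, Gaussian radial basis weights): local first-order Taylor models at a few sampling states are blended into a global nonlinear surrogate.

```python
import numpy as np

f, df = np.sin, np.cos                   # stand-in "full-order system" and its derivative
centers = np.linspace(0, 2 * np.pi, 8)   # sampling states for local linearization

def local_linear(x, c):
    # First-order Taylor expansion of f about sampling state c
    return f(c) + df(c) * (x - c)

def blended(x, width=0.5):
    # Gaussian radial basis weights blend the piecewise linear local models
    w = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    w /= w.sum()
    return float(np.sum(w * local_linear(x, centers)))

xs = np.linspace(0, 2 * np.pi, 100)
err = max(abs(blended(x) - np.sin(x)) for x in xs)
print(err)  # small relative to the O(1) range of sin
```

Tightening the RBF width sharpens each local model's influence; widening it smooths the surrogate at the cost of local accuracy.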

  15. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.

  16. Aerosol Modeling for the Global Model Initiative

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.

    2001-01-01

    The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

  17. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    PubMed

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology.

  18. CMIP5 Historical Simulations (1850-2012) with GISS ModelE2

    NASA Technical Reports Server (NTRS)

    Miller, Ronald Lindsay; Schmidt, Gavin A.; Nazarenko, Larissa S.; Tausnev, Nick; Bauer, Susanne E.; DelGenio, Anthony D.; Kelley, Max; Lo, Ken K.; Ruedy, Reto; Shindell, Drew T.; et al.

    2014-01-01

    Observations of climate change during the CMIP5 extended historical period (1850-2012) are compared to trends simulated by six versions of the NASA Goddard Institute for Space Studies ModelE2 Earth System Model. The six models are constructed from three versions of the ModelE2 atmospheric general circulation model, distinguished by their treatment of atmospheric composition and the aerosol indirect effect, combined with two ocean general circulation models, HYCOM and Russell. Forcings that perturb the model climate during the historical period are described. Five-member ensemble averages from each of the six versions of ModelE2 simulate trends of surface air temperature, atmospheric temperature, sea ice and ocean heat content that are in general agreement with observed trends, although simulated warming is slightly excessive within the past decade. Only simulations that include increasing concentrations of long-lived greenhouse gases match the warming observed during the twentieth century. Differences in twentieth-century warming among the six model versions can be attributed to differences in climate sensitivity, aerosol and ozone forcing, and heat uptake by the deep ocean. Coupled models with HYCOM export less heat to the deep ocean, associated with reduced surface warming in regions of deepwater formation, but greater warming elsewhere at high latitudes along with reduced sea ice. All ensembles show twentieth-century annular trends toward reduced surface pressure at southern high latitudes and a poleward shift of the midlatitude westerlies, consistent with observations.

  19. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created in the final step. Our method is flexible and easy to use, so that anyone can create models from image stacks and retrieve sub-regions from them with ease.

  20. Human Exposure Modeling - Databases to Support Exposure Modeling

    EPA Pesticide Factsheets

    Human exposure modeling relates pollutant concentrations in the larger environmental media to pollutant concentrations in the immediate exposure media. The models described here are available on other EPA websites.

  1. A prototype computer-aided modelling tool for life-support system models

    NASA Technical Reports Server (NTRS)

    Preisig, H. A.; Lee, Tae-Yeong; Little, Frank

    1990-01-01

    Based on the canonical decomposition of physical-chemical-biological systems, a prototype kernel has been developed to efficiently model alternative life-support systems. It supports (1) the work in an interdisciplinary group through an easy-to-use, mostly graphical interface, (2) modularized object-oriented model representation, (3) reuse of models, (4) inheritance of structures from model object to model object, and (5) a model data base. The kernel is implemented in Modula-2 and presently operates on an IBM PC.

  2. Coupling population dynamics with earth system models: the POPEM model.

    PubMed

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
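The cohort-component idea behind POPEM can be sketched in a few lines. This toy version uses three broad age groups, invented survival and fertility rates, and no migration, with no claim to match POPEM's actual resolution: each step advances the cohorts and adds births from the working-age group.

```python
import numpy as np

pop = np.array([30.0, 50.0, 20.0])   # hypothetical cohorts: young, working-age, old
survival = (0.99, 0.97, 0.85)        # invented per-step survival fractions
fertility = 0.40                     # invented births per working-age person per step

def step(p):
    births = fertility * p[1]
    return np.array([
        births,                                 # new young cohort
        survival[0] * p[0],                     # young advance to working age
        survival[1] * p[1] + survival[2] * p[2] # working-age advance, old survive
    ])

for _ in range(5):
    pop = step(pop)
print(pop)  # projected cohort sizes after five steps
```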

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    PubMed

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
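The core DMA recursion is compact; the sketch below shows one update step with a forgetting factor, using made-up model probabilities and predictive likelihoods. A dynamic Occam's window would additionally retain only the models whose updated probability exceeds a cutoff.

```python
import numpy as np

def dma_update(post, pred_lik, alpha=0.95):
    # Forgetting step: flatten yesterday's posterior model probabilities
    prior = post ** alpha
    prior /= prior.sum()
    # Bayes step: reweight by each model's predictive likelihood of the new datum
    post_new = prior * pred_lik
    return post_new / post_new.sum()

post = np.array([0.5, 0.3, 0.2])   # hypothetical model probabilities
lik = np.array([0.1, 0.4, 0.1])    # hypothetical predictive likelihoods
post = dma_update(post, lik)
print(np.round(post, 3))           # -> [0.258 0.634 0.108]

kept = post > 0.01                 # Occam's window: drop models below a cutoff
```

Setting alpha = 1 recovers standard Bayesian model averaging; smaller alpha lets model probabilities adapt faster when the generating model changes.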

  4. Models for nearly every occasion: Part I - One box models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-01-01

    The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust in immediately capturing freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The "calibration" procedure generates estimates of the generation rate and all of the remaining unknown model variables.
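For reference, the standard well-mixed one-box model that the paper extends has a closed-form solution. The sketch below uses illustrative numbers and no local exhaust or filtered return; it shows the transient build-up toward the steady state G/Q and the task-fraction correction for the shift average.

```python
import math

G = 10.0   # contaminant generation rate, mg/min (illustrative)
Q = 2.0    # general ventilation rate, m^3/min (illustrative)
V = 100.0  # room volume, m^3 (illustrative)

def conc(t, c0=0.0):
    # Solution of V dC/dt = G - Q*C for a constant-emission well-mixed room
    c_ss = G / Q
    return c_ss + (c0 - c_ss) * math.exp(-Q * t / V)

print(conc(50.0))    # part-way toward steady state
print(conc(1e6))     # -> 5.0, the steady state G/Q

# Average exposure when the source operates only a fraction f of the shift
f = 0.25
print(f * G / Q)     # -> 1.25
```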

  5. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging.
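The reservoir dynamic reduces to a first-order differential equation, dC/dt = input(t) - k*C. The Euler simulation below, with an invented input distribution and output rate (far simpler than the latent differential equation estimation in the article), shows the level settling around the mean input divided by the dissipation rate k.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, k = 0.1, 0.5                          # time step; output (dissipation) rate
inputs = rng.exponential(0.2, size=500)   # random per-step stressor inputs

# Euler integration of the reservoir equation dC/dt = input(t) - k * C
level = np.zeros(len(inputs) + 1)
for i, u in enumerate(inputs):
    level[i + 1] = level[i] + dt * (u - k * level[i])

# The long-run level fluctuates around mean input / k
print(level[-100:].mean(), inputs.mean() / k)
```

A larger k (better "blowing off steam") lowers the equilibrium level for the same input stream, which is the substantive interpretation of the output parameter.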

  6. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  7. Contam airflow models of three large buildings: Model descriptions and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Douglas R.; Price, Phillip N.

    2009-09-30

    Airflow and pollutant transport models are useful for several reasons, including protection from or response to biological terrorism. In recent years they have been used for deciding how many biological agent samplers are needed in a given building to detect the release of an agent; to figure out where those samplers should be located; to predict the number of people at risk in the event of a release of a given size and location; to devise response strategies in the event of a release; to determine optimal trade-offs between sampler characteristics (such as detection limit and response time); and so on. For some of these purposes it is necessary to model a specific building of interest: if you are trying to determine optimal sampling locations, you must have a model of your building and not some different building. But for many purposes generic or 'prototypical' building models would suffice. For example, for determining trade-offs between sampler characteristics, results from one building will carry over to other, similar buildings. Prototypical building models are also useful for comparing or testing different algorithms or computational approaches: different researchers can use the same models, thus allowing direct comparison of results in a way that is not otherwise possible. This document discusses prototypical building models developed by the Airflow and Pollutant Transport Group at Lawrence Berkeley National Laboratory. The models are implemented in the Contam v2.4c modeling program, available from the National Institute of Standards and Technology. We present Contam airflow models of three virtual buildings: a convention center, an airport terminal, and a multi-story office building. All of the models are based to some extent on specific real buildings. Our goal is to produce models that are realistic in terms of approximate magnitudes, directions, and speeds of airflow and pollutant transport. The three models vary substantially in detail.

  8. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.

  9. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity); this work instead addresses uncertainty in the mathematical simulator itself (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.
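
    The mean-variance analysis mentioned above can be conveyed by a toy Monte Carlo in which the proxy simulator's residual is treated as a random perturbation of the predicted concentration. This is only a hedged illustration of the idea, not the SOMUM formulation; the Gaussian residual and all values are assumptions:

```python
import random
import statistics

def concentration_stats(proxy_prediction, residual_sigma, n=20000, seed=0):
    """Mean and standard deviation of a contaminant concentration when the
    proxy simulator's prediction carries a random residual error (assumed
    Gaussian here purely for illustration)."""
    rng = random.Random(seed)
    draws = [proxy_prediction + rng.gauss(0.0, residual_sigma)
             for _ in range(n)]
    return statistics.mean(draws), statistics.pstdev(draws)
```

    A designer can then report a confidence band around the predicted concentration instead of a single deterministic value.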

  10. Modeling Ability Differentiation in the Second-Order Factor Model

    ERIC Educational Resources Information Center

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  11. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of their high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate these periods using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere. Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative models.

  12. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer, and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
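
    The "lag" idea, i.e. turbulent quantities relaxing toward equilibrium at a finite rate rather than instantaneously, can be illustrated with a first-order relaxation equation. This is a schematic of the concept only, not the lagRST transport equations; the rate constant and time step are arbitrary:

```python
def lag_response(r_eq, a=1.0, dt=0.01):
    """Explicit-Euler integration of dR/dt = a * (R_eq - R): the modeled
    turbulent quantity R chases its equilibrium value R_eq with a lag."""
    r = [r_eq[0]]
    for target in r_eq[1:]:
        r.append(r[-1] + a * (target - r[-1]) * dt)
    return r
```

    For a step change in R_eq, R approaches the new equilibrium exponentially, mimicking the finite-rate response of the Reynolds stresses to a sudden change in the mean flow.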

  13. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    NASA Astrophysics Data System (ADS)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software. By such evaluation, improvements in the software process can be made. Software quality depends significantly on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors, but none covers all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also presents a detailed comparison of the proposed model with existing usability models.

  14. Hierarchical spatial capture-recapture models: Modeling population density from stratified populations

    USGS Publications Warehouse

    Royle, J. Andrew; Converse, Sarah J.

    2014-01-01

    Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.

  15. Validation of community models: 3. Tracing field lines in heliospheric models

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Elliott, Brian; Acebal, Ariel

    2011-10-01

    Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters that can model interplanetary coronal mass ejections and that, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
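
    The "simplest spiral model" used as the baseline in this comparison is the Archimedean (Parker) spiral, for which the Sun-Earth field-line footpoint follows in closed form. A minimal sketch with standard constants; the radial solar wind speed is an assumed input:

```python
import math

def parker_footpoint_longitude(obs_longitude_deg, r_au=1.0, v_sw_kms=400.0):
    """Heliographic longitude of the solar footpoint of the Archimedean
    spiral field line through an observer at radius r_au, offset from the
    observer's longitude by the spiral winding angle Omega_sun * r / v_sw."""
    omega = 2.0 * math.pi / (25.38 * 86400.0)  # sidereal rotation rate, rad/s
    r_m = r_au * 1.496e11                      # AU -> meters
    dphi = omega * r_m / (v_sw_kms * 1e3)      # winding angle, radians
    return (obs_longitude_deg + math.degrees(dphi)) % 360.0
```

    For a 400 km/s wind the winding angle at 1 AU is about 60 degrees, the familiar nominal Parker-spiral connection against which the WSA/ENLIL field lines were judged.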

  16. Bio-Inspired Neural Model for Learning Dynamic Models

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  17. Formalized landscape models for surveying and modelling tasks

    NASA Astrophysics Data System (ADS)

    Löwner, Marc-Oliver

    2010-05-01

    We present a formalization of the main geomorphic landscape models, mainly the concept of slopes, to clarify the needs and potentials of surveying technologies and modelling approaches. Using the Unified Modelling Language (UML), it is implemented as an exchangeable Geography Markup Language (GML3)-based application schema and therefore supports shared measurement campaigns. Today, knowledge in geomorphology is given synoptically in textbooks in a more or less lyrical way. This knowledge is hard to implement for modelling algorithms or for data storage and sharing. On the other hand, physically based numerical modelling and high-resolution surveying technologies enable us to investigate case scenarios at small scales. Bringing such approaches together and organizing our data appropriately will require formalization of the concepts and knowledge archived in the science of geomorphology. The main problem in comparing research results in geomorphology is that the objects under investigation are composed of 3-dimensional geometries that change in time due to processes of material flux, e.g. soil erosion or mass movements. They have internal properties, e.g. soil texture or bulk density, that determine the effectiveness of these processes but are themselves subject to change. The presented application schema is available on the Internet and is therefore a first step toward enabling researchers to share information using an OGC Web Feature Service. In this vein, comparing modelling results of landscape evolution with the observations of other scientists becomes possible. Compared to prevalent data concepts, the presented model makes it possible to store information about landforms, their geometry, and their characteristics in more detail. It allows representation of the 3D geometry, the set of material properties, and the genesis of a landform by associating processes with a geoobject. Thus, time slices of a geomorphic system can be represented as well as

  18. JEDI Geothermal Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Geothermal Model JEDI Geothermal Model The Jobs and Economic Development Impacts (JEDI) Geothermal Model allows users to estimate economic development impacts from geothermal projects and includes

  19. Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter

    2016-01-01

    This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch ROMs at grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with a flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM can be reduced by 10X (from 180 to 19), and hence holds great promise for robust ASE controller synthesis and novel vehicle design.
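
    The flavor of the reduction step can be conveyed by the much simpler modal truncation, which keeps only the slowest modes of a stable linear system. This sketch is not the paper's sequential MOR pipeline, and the example system is invented:

```python
import numpy as np

def modal_truncation(A, B, C, r):
    """Keep the r slowest (least-damped) modes of a stable LTI system
    x' = Ax + Bu, y = Cx, by projecting onto their eigenvectors."""
    eigvals, V = np.linalg.eig(A)
    keep = np.argsort(-eigvals.real)[:r]  # eigenvalues closest to zero first
    V = V[:, keep]
    W = np.linalg.pinv(V)                 # left projection matrix
    return W @ A @ V, W @ B, C @ V

# 3-state example: the fast mode at -50 is discarded, the slow ones kept
A = np.diag([-1.0, -2.0, -50.0])
B = np.ones((3, 1))
C = np.ones((1, 3))
Ar, Br, Cr = modal_truncation(A, B, C, 2)
```

    Balanced truncation, one of the techniques named in the abstract, instead ranks states by Hankel singular values, but the project-and-truncate structure is the same.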

  20. Applications of the k – ω Model in Stellar Evolutionary Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yan, E-mail: ly@ynao.ac.cn

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only to convective envelopes, but also to convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied the model to solar models and 5 M_⊙ stellar models to calibrate the eight model parameters, as well as to investigate the effects of convective overshooting on the Sun and intermediate-mass stellar models.

  1. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error modeled in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has characteristics (mean, variance, and power spectral density) similar to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
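
    A signal generation model of the kind described, matching a target mean, variance, and power spectral density, can be sketched as filtered white noise. A first-order autoregressive process is assumed here purely for illustration; the actual SMAP error model and its parameters are not given in this abstract:

```python
import random

def estimation_error_signal(mean, std, rho, n, seed=0):
    """AR(1) noise whose stationary mean and standard deviation match the
    targets; rho shapes the power spectral density (rho near 1 concentrates
    power at low frequency, like a slowly varying estimation error)."""
    rng = random.Random(seed)
    innov_std = std * (1.0 - rho * rho) ** 0.5  # innovation scale
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, innov_std)
        out.append(mean + x)
    return out
```

    Injecting such a sequence into the lower-fidelity simulation is one simple way to mimic the statistics of an estimator's error without modeling the estimator itself.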

  2. JEDI Biofuels Models | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Biofuels Models JEDI Biofuels Models The Jobs and Economic Development Impacts (JEDI) biofuel models allow users to estimate economic development impacts from biofuel projects and include default

  3. JEDI Petroleum Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Petroleum Model JEDI Petroleum Model The Jobs and Economic Development Impacts (JEDI) Petroleum Model allows users to estimate economic development impacts from petroleum projects and includes default

  4. The Disk Instability Model for SU UMa systems - a Comparison of the Thermal-Tidal Model and Plain Vanilla Model

    NASA Astrophysics Data System (ADS)

    Cannizzo, John K.

    2017-01-01

    We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.

  5. Organic acid modeling and model validation: Workshop summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.'' The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  6. Modeling of near wall turbulence and modeling of bypass transition

    NASA Technical Reports Server (NTRS)

    Yang, Z.

    1992-01-01

    The objectives for this project are as follows: (1) Modeling of near-wall turbulence: We aim to develop a second-order closure for near-wall turbulence. As a first step, we try to develop a kappa-epsilon model for near-wall turbulence. We require the resulting model to handle both near-wall turbulence and turbulent flows away from the wall, to be computationally robust, and to be applicable to complex flow situations (flow with separation, for example). (2) Modeling of bypass transition: We aim to develop a bypass transition model that contains the effect of intermittency, so that the model can be used for both transitional and turbulent boundary layers. We require the resulting model to give a good prediction of momentum and heat transfer within the transitional boundary layer and a good prediction of the effect of freestream turbulence on transitional boundary layers.

  7. Model selection and assessment for multi-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  8. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both historically and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  9. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    An increasing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of these models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model, to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies that constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy might reflect a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive as a measure of improvement in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the performance improvement of a newly developed model.
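
    The concordance statistic cited throughout this review has a simple operational definition: among all case/control pairs, the fraction in which the case received the higher predicted risk. A minimal sketch (the data are invented):

```python
from itertools import combinations

def concordance(scores, events):
    """Concordance (c) statistic for binary outcomes; ties count 1/2.
    0.5 means no discrimination; 1.0 means every case outranks every control."""
    pairs = conc = 0.0
    for i, j in combinations(range(len(scores)), 2):
        if events[i] == events[j]:
            continue  # only case/control pairs are informative
        pairs += 1
        case, ctrl = (i, j) if events[i] else (j, i)
        if scores[case] > scores[ctrl]:
            conc += 1.0
        elif scores[case] == scores[ctrl]:
            conc += 0.5
    return conc / pairs
```

    A model scoring 0.53-0.66, as reported above, ranks a randomly chosen case above a randomly chosen control only slightly better than a coin flip.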

  10. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. Protein solubility modeling

    NASA Technical Reports Server (NTRS)

    Agena, S. M.; Pusey, M. L.; Bogle, I. D.

    1999-01-01

    A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

  12. Interacting damage models mapped onto ising and percolation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Renaud; Pride, Steven R.

    The authors introduce a class of damage models on regular lattices with isotropic interactions between the broken cells of the lattice. Quasistatic fiber bundles are an example. The interactions are assumed to be weak, in the sense that the stress perturbation from a broken cell is much smaller than the mean stress in the system. The system starts intact, with a surface-energy threshold required to break any cell sampled from an uncorrelated quenched-disorder distribution. The evolution of this heterogeneous system is ruled by Griffith's principle, which states that a cell breaks when the release in potential (elastic) energy in the system exceeds the surface-energy barrier necessary to break the cell. By direct integration over all possible realizations of the quenched disorder, they obtain the probability distribution of each damage configuration at any level of the imposed external deformation. They demonstrate an isomorphism between the distributions so obtained and standard generalized Ising models, in which the coupling constants and effective temperature in the Ising model are functions of the nature of the quenched-disorder distribution and the extent of accumulated damage. In particular, they show that damage models with global load sharing are isomorphic to standard percolation theory and that damage models with a local load sharing rule are isomorphic to the standard Ising model, and draw consequences thereof for the universality class and behavior of the autocorrelation length of the breakdown transitions corresponding to these models. They also treat damage models having more general power-law interactions, and classify the breakdown process as a function of the power-law interaction exponent. Lastly, they show that the probability distribution over configurations is a maximum of Shannon's entropy under specific constraints related to the energetic balance of the fracture process, which firmly relates this type of quenched-disorder based
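
    The global-load-sharing fiber bundle named above as an example admits a very short simulation: with uniformly distributed strength thresholds, the bundle's critical load per fiber tends to max x(1 - x) = 1/4. A hedged sketch, not the authors' lattice formulation:

```python
import random

def fiber_bundle_strength(n=10000, seed=1):
    """Quasistatic fiber bundle with global load sharing: n fibers with
    i.i.d. uniform(0, 1) failure thresholds (quenched disorder). After the
    k weakest fibers break, the load carried at threshold t_k is
    (n - k) * t_k; the peak over k is the bundle strength per fiber."""
    rng = random.Random(seed)
    thresholds = sorted(rng.random() for _ in range(n))
    return max((n - k) * t for k, t in enumerate(thresholds)) / n
```

    The smooth peak of the load curve, rather than a single catastrophic first failure, reflects the mean-field character of the global-load-sharing case discussed in the abstract.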

  13. Retrofitted supersymmetric models

    NASA Astrophysics Data System (ADS)

    Bose, Manatosh

    This thesis explores several models of metastable dynamic supersymmetry breaking (MDSB) and a supersymmetric model of hybrid inflation. All of these models possess discrete R-symmetries. We focus especially on retrofitted models of supersymmetry breaking. First we construct retrofitted models of gravity mediation. In these models we explore the genericity of so-called "split supersymmetry." We show that with the simplest models, where the goldstino multiplet is neutral under the discrete R-symmetry, a split spectrum is not generic. If, however, the goldstino superfield is charged under some symmetry other than the R-symmetry, then a split spectrum is achievable but not generic. We also present a gravity-mediated model where the fine-tuning of the Z-boson mass is dictated by a discrete choice rather than a continuous tuning. We then construct retrofitted models of gauge-mediated SUSY breaking. We show that, in these models, if the approximate R-symmetry of the theory is spontaneously broken, the messenger scale is fixed; if it is explicitly broken by retrofitted couplings, a very small dimensionless number is required; and if supergravity corrections are responsible for the symmetry breaking, at least two moderately small couplings are required and there is a large range of possible messenger scales. Finally, we turn our attention to small-field hybrid inflation. We construct a model that yields a spectral index ns = 0.96, and briefly discuss the possibility of relating the scale of inflation to the dynamics responsible for supersymmetry breaking.

  14. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    NASA Astrophysics Data System (ADS)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  15. A stochastic Iwan-type model for joint behavior variability modeling

    NASA Astrophysics Data System (ADS)

    Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.

    2015-08-01

    This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction that differ from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e., stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts, and their corresponding parameters, i.e., the hyperparameters of the model, are identified using a maximum likelihood strategy. A Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which supports the adequacy of the modeling effort.
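    The deterministic backbone of such a joint model can be sketched as a parallel array of Jenkins elements, each an elastic spring in series with a Coulomb slider. This is a minimal generic Iwan-type sketch, not the authors' extended model with distinct static and dynamic friction coefficients; all stiffness and slip-force values are illustrative:

```python
def iwan_force(x_history, stiffnesses, slip_forces):
    """Quasistatic force response of a parallel-series Iwan model.

    Each Jenkins element i is a spring k_i in series with a slider that
    yields at |f| = f_slip_i; the elements act in parallel. Slider
    positions are tracked so hysteresis emerges under load reversal.
    """
    slider_pos = [0.0] * len(stiffnesses)
    forces = []
    for x in x_history:
        total = 0.0
        for i, (k, f_slip) in enumerate(zip(stiffnesses, slip_forces)):
            f = k * (x - slider_pos[i])
            if abs(f) > f_slip:               # slider yields: force saturates
                f = f_slip if f > 0 else -f_slip
                slider_pos[i] = x - f / k     # slider advances to keep |f| = f_slip
            total += f
        forces.append(total)
    return forces
```

    For small displacements no slider yields and the response is purely elastic (total stiffness times displacement); once sliders yield, energy is dissipated per cycle, which is the mechanism behind the measured joint dissipation. Randomizing the element parameters, as the paper proposes, turns this deterministic backbone into a stochastic model.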

  16. EFFICIENT MODEL-FITTING AND MODEL-COMPARISON FOR HIGH-DIMENSIONAL BAYESIAN GEOSTATISTICAL MODELS. (R826887)

    EPA Science Inventory

    Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...

  17. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    NASA Astrophysics Data System (ADS)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform - seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK, for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time, river and groundwater flooding resulting from high rainfall events is increasing in scale and frequency, and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal, as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that no single science discipline is able to answer these questions and their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human systems. Management and planning require scenario modelling, forecasts and ‘predictions’. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulating the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSO) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally, the British Geological Survey have developed standard routines to link geological

  18. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  19. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    NASA Astrophysics Data System (ADS)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  20. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.

  1. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
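    The aggregation step described above can be sketched with the classical series/parallel reliability combination rules. This is an illustrative simplification and an assumption on our part; the patented generator supports richer low-level models and architecture descriptions than a two-operator tree:

```python
def aggregate(arch):
    """Recursively combine component reliabilities per an architecture tree.

    arch is either a float (a low-level component's reliability) or a
    tuple ("series" | "parallel", [children]). Series: every child must
    work, so reliabilities multiply. Parallel (redundant): at least one
    child must work, so failure probabilities multiply.
    """
    if isinstance(arch, float):
        return arch
    kind, children = arch
    parts = [aggregate(child) for child in children]
    if kind == "series":
        r = 1.0
        for p in parts:
            r *= p
        return r
    if kind == "parallel":
        q = 1.0
        for p in parts:
            q *= (1.0 - p)
        return 1.0 - q
    raise ValueError(f"unknown architecture node: {kind}")

# Hypothetical example: a triplex-redundant sensor feeding one processor.
system = ("series", [("parallel", [0.9, 0.9, 0.9]), 0.99])
```

    The system architecture description plays the role of the tree here: the same low-level reliability figures yield different system-level reliabilities depending on how the description wires them together.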

  2. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.

  3. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems

    PubMed Central

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of “ODEs and formalized flow diagrams” as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler’s behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features. PMID:27270918
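    The single unambiguous translation rule from flow-diagram edges to ODEs can be sketched as follows. This is a minimal reading of the schema, not the OFFL reference implementation: each edge moves a parcel flux out of its source species and into its target species, and the net rate of change of a species is the sum over its incident edges:

```python
def make_ode(species, flows):
    """Build dX/dt from flow-diagram edges.

    Each flow is (source, target, rate_fn); rate_fn(state) gives the
    parcel flux moved from source to target, so it is subtracted from
    the source's derivative and added to the target's. A None endpoint
    stands for an external source or sink.
    """
    def deriv(state):
        d = {s: 0.0 for s in species}
        for src, dst, rate in flows:
            r = rate(state)
            if src is not None:
                d[src] -= r
            if dst is not None:
                d[dst] += r
        return d
    return deriv

# The classic SIR epidemic written as a flow diagram (rates illustrative):
beta, gamma = 0.3, 0.1
flows = [
    ("S", "I", lambda st: beta * st["S"] * st["I"]),  # infection flow
    ("I", "R", lambda st: gamma * st["I"]),           # recovery flow
]
dxdt = make_ode(["S", "I", "R"], flows)
```

    Because every flow is conservative between its endpoints, the derivatives sum to zero whenever no edge touches an external source or sink, which is one of the bookkeeping guarantees the formalized grammar provides for free.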

  4. JEDI Coal Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Coal Model JEDI Coal Model The Jobs and Economic Development Impacts (JEDI) Coal Model allows users to estimate economic development impacts from coal projects and includes default information that can

  5. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper mainly studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of general aerospace embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
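    The data-model-to-binary-stream conversion described above can be sketched with a round-trip serializer. The node layout (id, name, child ids) is a hypothetical stand-in for the paper's object model, chosen only to make the idea concrete:

```python
import struct

def to_stream(nodes):
    """Serialize a list of (node_id, name, child_ids) tuples to bytes.

    Layout: node count, then per node: id, name length, UTF-8 name,
    child count, child ids; all integers little-endian unsigned 32-bit.
    """
    out = bytearray()
    out += struct.pack("<I", len(nodes))
    for node_id, name, children in nodes:
        raw = name.encode("utf-8")
        out += struct.pack("<II", node_id, len(raw)) + raw
        out += struct.pack("<I", len(children))
        out += struct.pack(f"<{len(children)}I", *children)
    return bytes(out)

def from_stream(data):
    """Rebuild the in-memory data model from the binary storage model."""
    off = 0
    (count,) = struct.unpack_from("<I", data, off); off += 4
    nodes = []
    for _ in range(count):
        node_id, nlen = struct.unpack_from("<II", data, off); off += 8
        name = data[off:off + nlen].decode("utf-8"); off += nlen
        (ccount,) = struct.unpack_from("<I", data, off); off += 4
        children = list(struct.unpack_from(f"<{ccount}I", data, off)); off += 4 * ccount
        nodes.append((node_id, name, children))
    return nodes
```

    A fixed, versionable byte layout like this is what lets several modelers exchange and synchronize the same model state over a wire or a shared store.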

  6. Modeling Instruction: An Effective Model for Science Education

    ERIC Educational Resources Information Center

    Jackson, Jane; Dukerich, Larry; Hestenes, David

    2008-01-01

    The authors describe a Modeling Instruction program that places an emphasis on the construction and application of conceptual models of physical phenomena as a central aspect of learning and doing science. (Contains 1 table.)

  7. mRM - multiscale Routing Model for Land Surface and Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Cuntz, M.; Thober, S.; Mai, J.; Samaniego, L. E.; Gochis, D. J.; Kumar, R.

    2015-12-01

    Routing streamflow through a river network is a basic step within any distributed hydrologic model. It integrates the generated runoff and allows comparison with observed discharge at the outlet of a catchment. The Muskingum routing is a textbook river routing scheme that has been implemented in Earth System Models (e.g., WRF-HYDRO), stand-alone routing schemes (e.g., RAPID), and hydrologic models (e.g., the mesoscale Hydrologic Model). Most implementations suffer from a high computational demand because the spatial routing resolution is fixed to that of the elevation model irrespective of the hydrologic modeling resolution. This is because the model parameters are scale-dependent and cannot be used at other resolutions without re-estimation. Here, we present the multiscale Routing Model (mRM), which allows a flexible choice of the routing resolution. mRM exploits the Multiscale Parameter Regionalization (MPR) included in the open-source mesoscale Hydrologic Model (mHM, www.ufz.de/mhm), which relates model parameters to physiographic properties and allows estimation of scale-independent model parameters. mRM is currently coupled to mHM and is presented here as stand-alone Free and Open Source Software (FOSS). The mRM source code is highly modular and provides a subroutine for internal re-use in any land surface scheme. mRM is coupled in this work to the state-of-the-art land surface model Noah-MP. Simulation results using mRM are compared with those available in WRF-HYDRO for the Red River during the period 1990-2000. mRM allows the routing resolution to be increased from 100 m to more than 10 km without deteriorating the model performance. Therefore, it speeds up model calculation by reducing the contribution of routing to total runtime from over 80% to less than 5% in the case of WRF-HYDRO. mRM thus makes discharge data available to land surface modeling with little extra computation.
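    The textbook Muskingum scheme mentioned above can be sketched in a few lines; this is the standard formulation from hydrology texts, not mRM's scale-independent parameterization, and K, X, dt values are illustrative:

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach with the Muskingum scheme.

    O2 = C0*I2 + C1*I1 + C2*O1, with coefficients derived from the
    storage constant K, the weighting factor X, and the time step dt
    (consistent time units). The three coefficients sum to 1.
    """
    denom = K - K * X + 0.5 * dt
    c0 = (-K * X + 0.5 * dt) / denom
    c1 = (K * X + 0.5 * dt) / denom
    c2 = (K - K * X - 0.5 * dt) / denom
    outflow = [inflow[0]]  # assume an initial steady state: O1 = I1
    for i1, i2 in zip(inflow, inflow[1:]):
        outflow.append(c0 * i2 + c1 * i1 + c2 * outflow[-1])
    return outflow
```

    Because the coefficients sum to 1, a constant inflow passes through unchanged, while a flood wave is attenuated and delayed. The usual guideline 2KX <= dt <= 2K(1-X) keeps all three coefficients non-negative. The scale-dependence the abstract refers to lives in K and X, which is exactly what MPR regionalizes.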

  8. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
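    The first model family (Box-Cox transform followed by a first-order auto-regressive error model) can be sketched as below. The inflow numbers and the Box-Cox parameter are purely illustrative, not Langvatn data, and the study additionally conditioned the AR parameters on weather classes, which this sketch omits:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a positive value; the log transform when lam == 0."""
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam

def fit_ar1(errors):
    """Least-squares estimate of phi in e_t = phi * e_{t-1} + noise."""
    num = sum(a * b for a, b in zip(errors, errors[1:]))
    den = sum(a * a for a in errors[:-1])
    return num / den

# Hypothetical observed and forecasted inflows:
obs = [12.0, 15.0, 14.0, 18.0, 22.0, 19.0]
fcst = [10.0, 14.0, 15.0, 16.0, 20.0, 21.0]
lam = 0.3  # illustrative Box-Cox parameter, not a fitted value

errors = [boxcox(o, lam) - boxcox(f, lam) for o, f in zip(obs, fcst)]
phi = fit_ar1(errors)
# One-step-ahead error prediction in transformed space, used to
# correct the next raw forecast before back-transforming:
predicted_next_error = phi * errors[-1]
```

    Correcting the median forecast with the predicted auto-correlated error is what lifted the Nash-Sutcliffe efficiency in the study; the difficulty the abstract reports lies in also getting the full error distribution right.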

  9. Models in Science Education: Applications of Models in Learning and Teaching Science

    ERIC Educational Resources Information Center

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  10. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  11. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.

  12. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  13. Standard solar model

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Kim, Y.-C.; Pinsonneault, M. H.

    1992-01-01

    A set of solar models have been constructed, each based on a single modification to the physics of a reference solar model. In addition, a model combining several of the improvements has been calculated to provide a best solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The impact on both the structure and the frequencies of the low-l p-modes of the model to these improvements are discussed. It is found that the combined solar model, which is based on the best physics available (and does not contain any ad hoc assumptions), reproduces the observed oscillation spectrum (for low-l) within the errors associated with the uncertainties in the model physics (primarily opacities).

  14. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
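    The objective function behind the reported 56% reduction is the PEST-style sum of weighted squared residuals. A minimal sketch follows; the numbers are invented placeholders, not DayCent output:

```python
def weighted_ssr(observed, simulated, weights):
    """PEST-style objective: sum of squared, weighted residuals.

    Weights let observation types with different units and magnitudes
    (e.g., soil carbon vs. soil temperature) contribute comparably.
    """
    return sum((w * (o - s)) ** 2
               for o, s, w in zip(observed, simulated, weights))

# Illustrative numbers only: a calibrated run should shrink the
# objective relative to the default-parameter run.
obs = [5.0, 7.0, 6.5]
default_sim = [4.0, 9.0, 6.0]
calibrated_sim = [4.8, 7.4, 6.4]
w = [1.0, 0.5, 2.0]
reduction = 1 - weighted_ssr(obs, calibrated_sim, w) / weighted_ssr(obs, default_sim, w)
```

    The study's "reduced by 56%" statement is exactly this `reduction` quantity computed over all 67-parameter calibration observations.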

  15. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, manually describing models in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing modeling programs have shortcomings; in particular, some are not accurate or are adapted only to specific CAD formats. To convert CAD models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the translation between CAD models, represented by boundary representation (B-REP), and GDML models, represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.
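    The CSG side of this translation (convex primitives recombined by Boolean operations) can be illustrated with a toy 2D point-membership sketch. This is only an illustration of the concept, not MCAM's 3D B-REP decomposition algorithm:

```python
def halfspace(a, b, c):
    """2D half-plane {(x, y) : a*x + b*y <= c} as a point-membership test."""
    return lambda p: a * p[0] + b * p[1] <= c

def convex(planes):
    """A convex cell is the intersection of its bounding half-planes."""
    return lambda p: all(h(p) for h in planes)

def union(*solids):
    """Boolean union, as used to reassemble decomposed convex pieces."""
    return lambda p: any(s(p) for s in solids)

def rect(x0, y0, x1, y1):
    """Axis-aligned rectangle built from its four boundary half-planes."""
    return convex([halfspace(-1, 0, -x0), halfspace(1, 0, x1),
                   halfspace(0, -1, -y0), halfspace(0, 1, y1)])

# A non-convex L-shape assembled from two convex rectangles by union,
# mirroring how convex shells are recombined with Boolean operations:
lshape = union(rect(0, 0, 2, 1), rect(0, 0, 1, 2))
```

    In the actual method the convex pieces are GDML basic solids derived from the B-REP boundary surfaces, and the recombination uses GDML's Boolean solid elements rather than predicates.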

  16. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of
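
    The core question (are enough stochastic realizations acceptable?) can be sketched as follows. The acceptance metric (RMSE), its threshold, and the decision rule below are illustrative assumptions, not the five metrics or decision tree of the actual approach.

```python
import numpy as np

# Score each stochastic realization against validation data, then ask
# whether a sufficient fraction of realizations is acceptable.
rng = np.random.default_rng(0)
validation = np.array([1.0, 1.2, 0.9, 1.1])

# 100 hypothetical model realizations scattered around the validation data
realizations = validation + rng.normal(0.0, 0.15, size=(100, validation.size))

rmse = np.sqrt(((realizations - validation) ** 2).mean(axis=1))
acceptable = rmse < 0.2          # assumed acceptance threshold
fraction = acceptable.mean()

# A one-branch decision rule standing in for the hierarchical decision tree
sufficient = fraction >= 0.8
print(fraction, sufficient)
```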

  17. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
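
    The linear dependency driving the identification problem is easy to see numerically: since cohort = period − age, the fixed-effects design matrix with all three linear terms is rank-deficient. The toy data below are illustrative.

```python
import numpy as np

# Age-period-cohort collinearity in miniature: cohort = period - age,
# so the design matrix [1, age, period, cohort] loses one rank and a
# just-identifying constraint is needed before the model can be fit.
age = np.array([20, 30, 20, 30, 40])
period = np.array([1990, 1990, 2000, 2000, 2000])
cohort = period - age

X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X))  # 3, not 4
```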

  18. Space Weather Modeling at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Hesse, M.; Falasca, A.; Johnson, J.; Keller, K.; Kuznetsova, M.; Rastaetter, L.

    2003-04-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star (LWS) initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. We will demonstrate the capabilities of models resident at CCMC via the analysis of a geomagnetic storm, driven by a shock in the solar wind.

  19. Space Weather Modeling at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse M.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership, which aims at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of the National Space Weather Program Implementation Plan, of NASA's Living With a Star (LWS) initiative, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. Special emphasis will be on solar and heliospheric models currently residing at CCMC, and on plans for validation and verification.

  20. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.

  1. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.
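
    One of the modeling choices the abstract names, the time advance, can be demonstrated with a minimal SIR sketch: the same parameters stepped with a coarse versus a fine time step give measurably different epidemic trajectories. Parameter values and step sizes are illustrative, not taken from the paper.

```python
# Explicit-Euler SIR model: the choice of time step (a discrete stand-in
# for continuous vs. discrete time advance) changes the simulated outcome.
def sir_final_size(beta, gamma, dt, days=160):
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

coarse = sir_final_size(beta=0.5, gamma=0.25, dt=1.0)   # one-day steps
fine = sir_final_size(beta=0.5, gamma=0.25, dt=0.01)    # near-continuous
print(round(coarse, 3), round(fine, 3))
```

    With R0 = 2 the classical final-size relation predicts roughly 80% of the population infected; the coarse and fine steppings both land near that value but do not agree exactly, which is the paper's point in miniature.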

  2. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
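
    The weighting step the abstract describes can be sketched directly: convert an information criterion (AIC here) into normalized model weights, then average a prediction across candidate structures. The model names, AIC values, and predicted quantities below are illustrative, not taken from the aneurysm case study.

```python
import math

# name: (AIC, predicted outcome of interest under that model structure)
models = {"covariates_A": (210.3, 0.12),
          "covariates_B": (208.1, 0.15),
          "covariates_C": (212.9, 0.10)}

# Akaike weights: w_m proportional to exp(-0.5 * (AIC_m - AIC_min))
min_aic = min(aic for aic, _ in models.values())
raw = {m: math.exp(-0.5 * (aic - min_aic)) for m, (aic, _) in models.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# Model-averaged prediction instead of picking a single "best" structure
averaged = sum(weights[m] * pred for m, (_, pred) in models.items())
print({m: round(w, 3) for m, w in weights.items()}, round(averaged, 3))
```

    Swapping BIC for AIC in the same formula gives the asymptotically consistent weighting the abstract contrasts with the predictive approach.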

  3. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329

  4. Building Thermal Models

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.

    2017-01-01

    This presentation is meant to be an overview of the model building process. It is based on typical techniques (Monte Carlo Ray Tracing for radiation exchange; Lumped Parameter, Finite Difference for the thermal solution) used by the aerospace industry. This is not intended to be a "How to Use ThermalDesktop" course. It is intended to be a "How to Build Thermal Models" course, and the techniques will be demonstrated using the capabilities of ThermalDesktop (TD). Other codes may or may not have similar capabilities. The general model building process can be broken into four top-level steps: 1. Build Model; 2. Check Model; 3. Execute Model; 4. Verify Results.
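
    The lumped parameter, finite difference thermal solution mentioned above can be sketched with a two-node model. Node values, conductances, and the explicit-Euler stepping are illustrative assumptions, not ThermalDesktop output.

```python
# Lumped-parameter, finite-difference thermal solution in miniature:
# each node update is dT_i = (Q_i + sum_j G_ij * (T_j - T_i)) * dt / C_i.
def step(temps, conductors, capacitances, q_in, dt):
    new = []
    for i, t_i in enumerate(temps):
        flow = sum(g * (temps[j] - t_i) for (a, j), g in conductors.items() if a == i)
        flow += sum(g * (temps[a] - t_i) for (a, j), g in conductors.items() if j == i)
        new.append(t_i + (q_in[i] + flow) * dt / capacitances[i])
    return new

temps = [300.0, 250.0]       # node temperatures, K
conductors = {(0, 1): 0.5}   # conductor between nodes 0 and 1, W/K
caps = [100.0, 100.0]        # thermal capacitances, J/K
heat = [1.0, 0.0]            # heat load into node 0, W

for _ in range(1000):
    temps = step(temps, conductors, caps, heat, dt=1.0)
print([round(t, 1) for t in temps])  # → [280.5, 279.5]
```

    With no heat sink the pair warms together (1 W into 200 J/K raises the mean 5 K over 1000 s) while the temperature difference relaxes toward Q/G = 2 K split across the conductor, i.e. a 1 K steady offset.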

  5. The Instrumental Model

    ERIC Educational Resources Information Center

    Yeates, Devin Rodney

    2011-01-01

    The goal of this dissertation is to enable better predictive models by engaging raw experimental data through the Instrumental Model. The Instrumental Model captures the protocols and procedures of experimental data analysis. The approach is formalized by encoding the Instrumental Model in an XML record. Decoupling the raw experimental data from…

  6. Qualitative Student Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model is used as the focus of this review of qualitative student models in order to compare alternative computational models and to contrast domain requirements. The report is divided into eight sections: (1) Origins and Goals (adaptive instruction, qualitative models of processes, components of an artificial…

  7. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
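
    The shape-mode machinery the abstract relies on can be sketched with PCA on toy landmark vectors: new instances are generated as the mean shape plus a multiple of the mode scaled by its standard deviation, exactly the ±3 standard deviation extremes evaluated in the study. The toy 2-D data below stand in for the lumbar spine landmarks.

```python
import numpy as np

# Statistical shape model in miniature: PCA of aligned landmark vectors,
# then instance generation as mean + b * sqrt(lambda_k) * mode_k.
rng = np.random.default_rng(1)
n_shapes, n_coords = 20, 6
shapes = rng.normal(size=(n_shapes, n_coords))   # toy landmark vectors

mean = shapes.mean(axis=0)
centered = shapes - mean

# SVD of the centered data yields shape modes (rows of vt) and variances
_, s, vt = np.linalg.svd(centered, full_matrices=False)
variances = s ** 2 / (n_shapes - 1)

# Instances at +/-3 standard deviations along the first mode
plus3 = mean + 3.0 * np.sqrt(variances[0]) * vt[0]
minus3 = mean - 3.0 * np.sqrt(variances[0]) * vt[0]
print(plus3.shape)
```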

  8. Modeling the QBO—Improvements resulting from higher‐model vertical resolution

    PubMed Central

    Zhou, Tiehan; Shindell, D.; Ruedy, R.; Aleinov, I.; Nazarenko, L.; Tausnev, N. L.; Kelley, M.; Sun, S.; Cheng, Y.; Field, R. D.; Faluvegi, G.

    2016-01-01

    Abstract Using the NASA Goddard Institute for Space Studies (GISS) climate model, it is shown that with proper choice of the gravity wave momentum flux entering the stratosphere and relatively fine vertical layering of at least 500 m in the upper troposphere‐lower stratosphere (UTLS), a realistic stratospheric quasi‐biennial oscillation (QBO) is modeled with the proper period, amplitude, and structure down to tropopause levels. It is furthermore shown that the specified gravity wave momentum flux controls the QBO period whereas the width of the gravity wave momentum flux phase speed spectrum controls the QBO amplitude. Fine vertical layering is required for the proper downward extension to tropopause levels as this permits wave‐mean flow interactions in the UTLS region to be resolved in the model. When vertical resolution is increased from 1000 to 500 m, the modeled QBO modulation of the tropical tropopause temperatures increasingly approaches that from observations, and the “tape recorder” of stratospheric water vapor also approaches the observed. The transport characteristics of our GISS models are assessed using age‐of‐air and N2O diagnostics, and it is shown that some of the deficiencies in model transport that have been noted in previous GISS models are greatly improved for all of our tested model vertical resolutions. More realistic tropical‐extratropical transport isolation, commonly referred to as the “tropical pipe,” results from the finer vertical model layering required to generate a realistic QBO. PMID:27917258

  9. Modeling the QBO-Improvements resulting from higher-model vertical resolution.

    PubMed

    Geller, Marvin A; Zhou, Tiehan; Shindell, D; Ruedy, R; Aleinov, I; Nazarenko, L; Tausnev, N L; Kelley, M; Sun, S; Cheng, Y; Field, R D; Faluvegi, G

    2016-09-01

    Using the NASA Goddard Institute for Space Studies (GISS) climate model, it is shown that with proper choice of the gravity wave momentum flux entering the stratosphere and relatively fine vertical layering of at least 500 m in the upper troposphere-lower stratosphere (UTLS), a realistic stratospheric quasi-biennial oscillation (QBO) is modeled with the proper period, amplitude, and structure down to tropopause levels. It is furthermore shown that the specified gravity wave momentum flux controls the QBO period whereas the width of the gravity wave momentum flux phase speed spectrum controls the QBO amplitude. Fine vertical layering is required for the proper downward extension to tropopause levels as this permits wave-mean flow interactions in the UTLS region to be resolved in the model. When vertical resolution is increased from 1000 to 500 m, the modeled QBO modulation of the tropical tropopause temperatures increasingly approaches that from observations, and the "tape recorder" of stratospheric water vapor also approaches the observed. The transport characteristics of our GISS models are assessed using age-of-air and N2O diagnostics, and it is shown that some of the deficiencies in model transport that have been noted in previous GISS models are greatly improved for all of our tested model vertical resolutions. More realistic tropical-extratropical transport isolation, commonly referred to as the "tropical pipe," results from the finer vertical model layering required to generate a realistic QBO.

  10. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem a step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data, and the output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  11. Translation from UML to Markov Model: A Performance Modeling Framework

    NASA Astrophysics Data System (ADS)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication-system applications that proposes a translation process from high-level UML notation to a Continuous Time Markov Chain model (CTMC) and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams and deployment diagrams to generate the performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks in the form of collaborations can compose the service components through input and output pins by highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is delineated in the context of modeling a communication system.
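
    The final solving step of such a framework can be sketched: once a CTMC generator matrix Q has been derived from the annotated UML model, steady-state probabilities π satisfy πQ = 0 with π summing to 1. The three-state generator below is illustrative, not derived from any particular UML model.

```python
import numpy as np

# CTMC generator matrix (rows sum to zero); states might correspond to
# system components being idle / busy / blocked in a derived model.
Q = np.array([[-0.4, 0.3, 0.1],
              [0.5, -0.7, 0.2],
              [0.0, 0.6, -0.6]])

# Solve pi @ Q = 0 with sum(pi) = 1: replace one (redundant) balance
# equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(np.round(pi, 3))
```

    Performance metrics such as utilization or throughput then follow as weighted sums over π.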

  12. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.

    PubMed

    Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng

    2014-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
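
    The Kalman filtering step underpinning the estimation can be sketched with a one-dimensional local-level state space model, far simpler than the paper's bivariate hierarchical model; the noise variances and toy data are assumptions.

```python
import numpy as np

# Minimal Kalman filter for a local-level (random-walk state) model:
# predict the state variance, compute the gain, update with each datum.
def kalman_filter(y, q, r, x0=0.0, p0=1.0):
    x, p, filtered = x0, p0, []
    for obs in y:
        p = p + q                 # predict step (state follows a random walk)
        k = p / (p + r)           # Kalman gain
        x = x + k * (obs - x)     # measurement update
        p = (1 - k) * p
        filtered.append(x)
    return np.array(filtered)

y = np.array([1.0, 1.2, 0.9, 1.1, 1.0])   # toy hormone-like series
est = kalman_filter(y, q=0.01, r=0.1)
print(np.round(est, 3))
```

    In the full framework the same recursion runs over a stacked bivariate state vector, and the smoother pass runs backwards over the filtered quantities.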

  13. Rate of Learning Models, Mental Models, and Item Response Theory

    NASA Astrophysics Data System (ADS)

    Pritchard, David E.; Lee, Y.; Bao, L.

    2006-12-01

    We present three learning models that make different assumptions about how the rate of a student's learning depends on the amount that they know already. These are motivated by the mental models of Tabula Rasa, Constructivist, and Tutoring theories. These models predict the postscore for a given prescore after a given period of instruction. Constructivist models show a close connection with Item Response Theory. Comparison with data from both Hake and MIT shows that the Tabula Rasa models not only fit incomparably better, but fit the MIT data within error across a wide range of pretest scores. We discuss the implications of this finding.
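
    The three assumptions can be given illustrative functional forms (assumed here for exposition; the paper's exact models may differ): tabula rasa learning adds a fixed amount regardless of the prescore, constructivist learning scales with what is already known, and tutoring closes a fixed fraction of the remaining gap.

```python
# Postscore as a function of prescore under three assumed learning models.
def post_score(pre, model, amount=0.2):
    if model == "tabula_rasa":
        return min(1.0, pre + amount)            # gain independent of pre
    if model == "constructivist":
        return min(1.0, pre * (1 + amount))      # gain proportional to pre
    if model == "tutoring":
        return pre + amount * (1.0 - pre)        # gain proportional to gap
    raise ValueError(model)

for pre in (0.2, 0.5, 0.8):
    print(pre, [round(post_score(pre, m), 2)
                for m in ("tabula_rasa", "constructivist", "tutoring")])
```

    Plotting postscore against prescore for each form shows why fits over a wide range of pretest scores can discriminate between the theories.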

  14. Comparison of LiST measles mortality model and WHO/IVB measles model.

    PubMed

    Chen, Wei-Ju

    2011-04-13

    The Lives Saved Tool (LiST) has been developed to estimate the impact of health interventions and can consider multiple interventions simultaneously. Given its increasing usage by donor organizations and national program planners, we compare the LiST measles model to the widely used World Health Organization's Department of Immunization, Vaccines and Biologicals (WHO/IVB) measles model, which is used to produce estimates serving as a major indicator for monitoring country measles epidemics and the progress of measles control. We analyzed the WHO/IVB models and the LiST measles model and identified components and assumptions held in each model. We contrasted the important components, and compared results from the two models by applying historical measles containing vaccine (MCV) coverages and the default values of all parameters set in the models. We also conducted analyses following a hypothetical scenario to understand how both models performed when the proportion of the population protected by MCV declined to zero percent in a short time period. The WHO/IVB measles model and the LiST measles model structures differ: the former is a mixed model which applies surveillance data adjusted for reporting completeness for countries with good disease surveillance systems and applies a natural history model for countries with poorer disease control programs and surveillance systems, and the latter is a cohort model incorporating country-specific cause-of-death (CoD) profiles among children under five. The trends of estimates of the two models are similar, but the estimates of the first year are different in most of the countries included in the analysis. The two models are comparable if we adjust the measles CoD in the LiST to produce the same baseline estimates. In addition, we used the models to estimate the potential impact of stopping measles vaccination over a 7-year period. The WHO/IVB model produced similar estimates to the LiST model with adjusted CoD. But the LiST model

  15. Comparison of LiST measles mortality model and WHO/IVB measles model

    PubMed Central

    2011-01-01

    Background The Lives Saved Tool (LiST) has been developed to estimate the impact of health interventions and can consider multiple interventions simultaneously. Given its increasing usage by donor organizations and national program planners, we compare the LiST measles model to the widely used World Health Organization's Department of Immunization, Vaccines and Biologicals (WHO/IVB) measles model, which is used to produce estimates serving as a major indicator for monitoring country measles epidemics and the progress of measles control. Methods We analyzed the WHO/IVB models and the LiST measles model and identified components and assumptions held in each model. We contrasted the important components, and compared results from the two models by applying historical measles containing vaccine (MCV) coverages and the default values of all parameters set in the models. We also conducted analyses following a hypothetical scenario to understand how both models performed when the proportion of the population protected by MCV declined to zero percent in a short time period. Results The WHO/IVB measles model and the LiST measles model structures differ: the former is a mixed model which applies surveillance data adjusted for reporting completeness for countries with good disease surveillance systems and applies a natural history model for countries with poorer disease control programs and surveillance systems, and the latter is a cohort model incorporating country-specific cause-of-death (CoD) profiles among children under five. The trends of estimates of the two models are similar, but the estimates of the first year are different in most of the countries included in the analysis. The two models are comparable if we adjust the measles CoD in the LiST to produce the same baseline estimates. In addition, we used the models to estimate the potential impact of stopping measles vaccination over a 7-year period. The WHO/IVB model produced similar estimates to the LiST model with adjusted

  16. Functionalized anatomical models for EM-neuron Interaction modeling

    NASA Astrophysics Data System (ADS)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantoms will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
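
    The strength-duration relationship at issue can be written in its classic Weiss form, threshold current I(t) = I_rh · (1 + t_ch / t), where I_rh is the rheobase and t_ch the chronaxie; the parameter values below are illustrative, not values from the study.

```python
# Weiss strength-duration curve: threshold current versus pulse width.
def threshold_current(pulse_width_ms, rheobase_ma=1.0, chronaxie_ms=0.3):
    return rheobase_ma * (1.0 + chronaxie_ms / pulse_width_ms)

# At a pulse width equal to the chronaxie, the threshold is exactly
# twice the rheobase.
print(threshold_current(0.3))   # → 2.0
for t in (0.05, 0.1, 0.5, 1.0):
    print(t, round(threshold_current(t), 2))
```

    The abstract's caution about "suitable SD extrapolation models" amounts to the observation that fitted rheobase and chronaxie depend strongly on which functional form (Weiss, Lapicque, or others) is extrapolated from a limited set of pulse widths.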

  17. Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.

    2016-01-01

    Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…

  18. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  19. How Can Model Comparison Help Improving Species Distribution Models?

    PubMed Central

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779

  20. Estuarine modeling: Does a higher grid resolution improve model performance?

    EPA Science Inventory

    Ecological models are useful tools to explore cause effect relationships, test hypothesis and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  1. An integrated mathematical model of the human cardiopulmonary system: model development.

    PubMed

    Albanese, Antonio; Cheng, Limei; Ursino, Mauro; Chbat, Nicolas W

    2016-04-01

    Several cardiovascular and pulmonary models have been proposed in the last few decades. However, very few have addressed the interactions between these two systems. Our group has developed an integrated cardiopulmonary model (CP Model) that mathematically describes the interactions between the cardiovascular and respiratory systems, along with their main short-term control mechanisms. The model has been compared with human and animal data taken from published literature. Due to the volume of the work, the paper is divided into two parts. The present paper is on model development and normophysiology, whereas the second is on the model's validation under hypoxic and hypercapnic conditions. The CP Model incorporates cardiovascular circulation, respiratory mechanics, tissue and alveolar gas exchange, as well as short-term neural control mechanisms acting on both the cardiovascular and the respiratory functions. The model is able to simulate physiological variables typically observed in adult humans under normal and pathological conditions and to explain the underlying mechanisms and dynamics. Copyright © 2016 the American Physiological Society.

  2. Good modeling practice guidelines for applying multimedia models in chemical assessments.

    PubMed

    Buser, Andreas M; MacLeod, Matthew; Scheringer, Martin; Mackay, Don; Bonnell, Mark; Russell, Mark H; DePinto, Joseph V; Hungerbühler, Konrad

    2012-10-01

    Multimedia mass balance models of chemical fate in the environment have been used for over 3 decades in a regulatory context to assist decision making. As these models become more comprehensive, reliable, and accepted, there is a need to recognize and adopt principles of Good Modeling Practice (GMP) to ensure that multimedia models are applied with transparency and adherence to accepted scientific principles. We propose and discuss 6 principles of GMP for applying existing multimedia models in a decision-making context, namely 1) specification of the goals of the model assessment, 2) specification of the model used, 3) specification of the input data, 4) specification of the output data, 5) conduct of a sensitivity and possibly also uncertainty analysis, and finally 6) specification of the limitations and limits of applicability of the analysis. These principles are justified and discussed with a view to enhancing the transparency and quality of model-based assessments. Copyright © 2012 SETAC.

  3. Modeling Renewable Penetration Using a Network Economic Model

    NASA Astrophysics Data System (ADS)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment in designing a system with photovoltaic, base-load fossil, and peaking fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
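    The price-and-quantity exchange among nodes can be sketched as a simple market-clearing loop in which a market node raises its price while demand exceeds the quantity offered by generator nodes. This is a hedged toy illustration with made-up costs and capacities, not the network model used in the paper.

```python
# Toy market clearing by price/quantity exchange between nodes.
# Costs and capacities below are illustrative assumptions.

def clearing_price(demand, generators, p0=0.0, step=0.01, iters=100000):
    """Raise price until offered supply covers a fixed demand."""
    price = p0
    supply = 0.0
    for _ in range(iters):
        # Each generator offers its full capacity once price covers its cost.
        supply = sum(cap for cost, cap in generators if price >= cost)
        excess = demand - supply
        if excess <= 0:
            break
        price += step * excess
    return price, supply

# (marginal cost $/MWh, capacity MW): base fossil, peak fossil, photovoltaic
gens = [(20.0, 50.0), (80.0, 30.0), (0.0, 20.0)]
price, supply = clearing_price(demand=75.0, generators=gens)
```

    In this toy system a demand of 75 MW forces the peak generator online, so the clearing price settles near its marginal cost.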

  4. Modeling and Identification for Vector Propulsion of an Unmanned Surface Vehicle: Three Degrees of Freedom Model and Response Model.

    PubMed

    Mu, Dongdong; Wang, Guofeng; Fan, Yunsheng; Sun, Xiaojie; Qiu, Bingbing

    2018-06-08

    This paper presents a complete scheme for research on the three degrees of freedom model and response model of the vector propulsion of an unmanned surface vehicle. The object of this paper is “Lanxin”, an unmanned surface vehicle (7.02 m × 2.6 m), which is equipped with a single vector propulsion device. First, the “Lanxin” unmanned surface vehicle and the related field experiments (turning test and zig-zag test) are introduced and experimental data are collected through various sensors. Second, the thrust of the vector thruster is estimated by the empirical formula method. Third, using hypotheses and simplifications, the three degrees of freedom model and the response model of the USV are deduced and established, respectively. Fourth, the parameters of the models (three degrees of freedom model, response model and thruster servo model) are obtained by system identification, and we compare the simulated turning test and zig-zag test with the actual data to verify the accuracy of the identification results. Finally, the biggest advantage of this paper is that it combines theory with practice. Based on the identified response model, simulation and practical course-keeping experiments are carried out to further verify the feasibility and correctness of the modeling and identification.
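    The response-model identification step can be sketched with a first-order Nomoto model, T·(dr/dt) + r = K·δ, fitted by linear least squares to logged rudder angle δ and yaw rate r. The data below are synthetic (assumed K, T, and a square-wave rudder signal), not “Lanxin” trial records.

```python
import numpy as np

def simulate(K, T, delta, dt):
    """Forward-Euler simulation of the Nomoto model T*rdot + r = K*delta."""
    r = np.zeros(len(delta))
    for k in range(len(delta) - 1):
        r[k + 1] = r[k] + dt * (K * delta[k] - r[k]) / T
    return r

def identify(r, delta, dt):
    """Least-squares fit of r[k+1]-r[k] = a*r[k] + b*delta[k],
    where a = -dt/T and b = K*dt/T."""
    A = np.column_stack([r[:-1], delta[:-1]])
    y = np.diff(r)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    T_hat = -dt / a
    K_hat = b * T_hat / dt
    return K_hat, T_hat

dt = 0.1
t = np.arange(2000) * dt
delta = np.radians(20.0) * np.sign(np.sin(0.05 * t))  # square-wave rudder input
r = simulate(K=0.3, T=10.0, delta=delta, dt=dt)
K_hat, T_hat = identify(r, delta, dt)
```

    With noise-free synthetic data the regression recovers K and T exactly; real trial data would require filtering and a richer model, as the paper does.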

  5. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    DOE PAGES

    Li, Hong-Yi; Leung, L. Ruby; Tesfa, Teklu; ...

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have an important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August–October) by as much as 1–2 °C in many places, as water management generally mitigates low flow, which has important implications for aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  6. Finite element modeling of a 3D coupled foot-boot model.

    PubMed

    Qiu, Tian-Xia; Teo, Ee-Chon; Yan, Ya-Bo; Lei, Wei

    2011-12-01

    Increasingly, musculoskeletal models of the human body are used as powerful tools to study biological structures. The lower limb, and in particular the foot, is of interest because it is the primary physical interaction between the body and the environment during locomotion. The goal of this paper is to adopt the finite element (FE) modeling and analysis approaches to create a state-of-the-art 3D coupled foot-boot model for future studies on biomechanical investigation of stress injury mechanism, footwear design and parachute landing fall simulation. In the modeling process, the foot-ankle model with lower leg was developed based on Computed Tomography (CT) images using ScanIP, Surfacer and ANSYS. Then, the boot was represented by assembling the FE models of upper, insole, midsole and outsole built based on the FE model of the foot-ankle, and finally the coupled foot-boot model was generated by putting together the models of the lower limb and boot. In this study, the FE model of foot and ankle was validated during balance standing. There was a good agreement in the overall patterns of predicted and measured plantar pressure distribution published in the literature. The coupled foot-boot model will be fully validated in subsequent work under both static and dynamic loading conditions for further studies on injury investigation in military and sports, footwear design and characteristics of parachute landing impact in military. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    PubMed

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.
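    The kind of network epidemic EpiModel simulates can be conveyed by a minimal stochastic SIR process on a static contact graph. This Python toy (ring network, illustrative rates) is only a sketch; EpiModel itself is an R package whose networks are dynamic and fitted with temporal exponential random graph models.

```python
import random

def sir_on_network(adj, beta, gamma, seed_node, steps, rng):
    """Discrete-time stochastic SIR on a static contact network."""
    state = {n: "S" for n in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        new_state = dict(state)
        for n, s in state.items():
            if s == "I":
                for m in adj[n]:                  # expose susceptible neighbors
                    if state[m] == "S" and rng.random() < beta:
                        new_state[m] = "I"
                if rng.random() < gamma:          # recovery
                    new_state[n] = "R"
        state = new_state
    return state

adj = {i: [(i - 1) % 50, (i + 1) % 50] for i in range(50)}  # ring of 50 nodes
rng = random.Random(1)
final = sir_on_network(adj, beta=0.8, gamma=0.1, seed_node=0, steps=100, rng=rng)
n_ever_infected = sum(1 for s in final.values() if s != "S")
```

    Each time step, every infected node independently exposes its susceptible neighbors along network edges and then recovers with probability gamma, so transmission is constrained by contact structure rather than mass action.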

  8. Mediterranean maquis fuel model development and mapping to support fire modeling

    NASA Astrophysics Data System (ADS)

    Bacciu, V.; Arca, B.; Pellizzaro, G.; Salis, M.; Ventura, A.; Spano, D.; Duce, P.

    2009-04-01

    Fuel load data and fuel model maps represent a critical issue for fire spread and behaviour modeling. The availability of accurate input data at different spatial and temporal scales can allow detailed analysis and predictions of fire hazard and fire effects across a landscape. Fuel model data are used in spatially explicit fire growth models to attain fire behaviour information for fuel management in prescribed fires, fire management applications, firefighter training, smoke emissions, etc. However, fuel type characteristics are difficult to parameterize due to their complexity and variability: live and dead materials with different sizes contribute in different ways to the fire spread and behaviour. In recent decades, considerable help has been provided by the use of remote sensing imagery at high spatial and spectral resolution. Such techniques are able to capture fine scale fuel distributions for accurate fire growth projections. Several attempts carried out in Europe were devoted to fuel classification and map characterization. In Italy, fuel load estimation and fuel model definition are still critical issues to be addressed due to the lack of detailed information. In this perspective, the aim of the present work was to propose an integrated approach based on field data collection, fuel model development and fuel model mapping to provide fuel models for the Mediterranean maquis associations. Field data needed for the development of fuel models were collected using destructive and non-destructive measurements in experimental plots located in Northern Sardinia (Italy). Statistical tests were used to identify the main fuel types that were classified into four custom fuel models. Subsequently, a supervised classification by the Maximum Likelihood algorithm was applied to IKONOS images to identify and map the different types of maquis vegetation. The corresponding fuel model was then associated with each vegetation type to obtain the fuel model map.
The results show the
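    The supervised Maximum Likelihood step can be sketched as a per-class Gaussian classifier that assigns each pixel to the class with the highest log-likelihood. The band values and class names below are invented stand-ins for IKONOS training samples.

```python
import numpy as np

def train(samples):
    """samples: {class: (n_pixels, n_bands) array} -> per-class (mean, cov)."""
    return {c: (X.mean(axis=0), np.cov(X.T)) for c, X in samples.items()}

def classify(pixel, stats):
    """Assign pixel to the class maximizing the Gaussian log-likelihood."""
    def log_lik(mu, cov):
        d = pixel - mu
        return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
    return max(stats, key=lambda c: log_lik(*stats[c]))

# Hypothetical 4-band training samples for two vegetation classes
rng = np.random.default_rng(2)
samples = {
    "maquis_tall": rng.normal([60, 90, 40, 120], 5, size=(50, 4)),
    "maquis_low": rng.normal([80, 70, 55, 90], 5, size=(50, 4)),
}
stats = train(samples)
label = classify(np.array([61.0, 88.0, 42.0, 118.0]), stats)
```

    Applying this rule to every pixel yields the vegetation map to which a custom fuel model can then be attached.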

  9. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
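    The Gaussian process regression at the heart of such active learning can be sketched in a few lines: fit a GP to the experiments run so far and propose the next experiment where predictive variance is largest. The RBF kernel, its length scale, and the sine stand-in for a measured performance metric are all assumptions for illustration.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel between two 1-D sets of points."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and pointwise variance at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

f = lambda x: np.sin(3.0 * x)        # stand-in for a measured performance metric
X = np.array([0.0, 1.0, 2.0])        # configurations already benchmarked
Xs = np.linspace(0.0, 2.0, 101)      # candidate configurations
mean, var = gp_posterior(X, f(X), Xs)
x_next = Xs[np.argmax(var)]          # run the most uncertain experiment next
```

    Maximizing posterior variance is the simplest acquisition rule; cost-aware variants divide the expected information gain by the computational cost of each candidate run.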

  10. Reference Manual for the System Advisor Model's Wind Power Performance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Jorgenson, J.; Gilman, P.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
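    The core of an hourly output calculation can be sketched as interpolation of a turbine power curve over hub-height wind speeds. The curve points below are invented; SAM's actual model additionally accounts for wind farm layout, wake effects, air density, and losses.

```python
import bisect

# Hypothetical power curve as (wind speed m/s, power kW) pairs:
# cut-in near 3 m/s, rated 2000 kW from 12 m/s, cut-out at 25 m/s.
curve = [(0, 0), (3, 0), (6, 400), (9, 1200), (12, 2000), (25, 2000)]

def turbine_power(v):
    """Piecewise-linear interpolation of the power curve; zero past cut-out."""
    if v >= 25:
        return 0.0
    speeds = [s for s, _ in curve]
    i = bisect.bisect_right(speeds, v) - 1
    (s0, p0), (s1, p1) = curve[i], curve[i + 1]
    return p0 + (p1 - p0) * (v - s0) / (s1 - s0)

hourly_wind = [2.0, 5.0, 8.0, 13.0, 26.0]   # sample hub-height speeds
hourly_kw = [turbine_power(v) for v in hourly_wind]
```

    Summing the hourly kW values over a year (and over all turbines, after wake adjustments) gives the annual energy that SAM's financial models then monetize.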

  11. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication, in NetLogo and by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  12. The Carbon-Land Model Intercomparison Project (C-LAMP): A Model-Data Comparison System for Evaluation of Coupled Biosphere-Atmosphere Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M; Randerson, Jim; Thornton, Peter E

    2009-01-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in new efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, now often referred to as Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results, suggesting that a more rigorous set of offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are warranted. The Carbon-Land Model Intercomparison Project (C-LAMP) provides a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). C-LAMP provides feedback to the modeling community regarding model improvements and to the measurement community by suggesting new observational campaigns. C-LAMP Experiment 1 consists of a set of uncoupled simulations of terrestrial carbon models specifically designed to examine the ability of the models to reproduce surface carbon and energy fluxes at multiple sites and to exhibit the influence of climate variability, prescribed atmospheric carbon dioxide (CO2), nitrogen (N) deposition, and land cover change on projections of terrestrial carbon fluxes during the 20th century. Experiment 2 consists of partially coupled simulations of the terrestrial carbon model with an active atmosphere model exchanging energy and moisture fluxes. In all experiments, atmospheric CO2 follows the prescribed historical trajectory from C4MIP. In Experiment 2, the atmosphere model is forced with prescribed sea surface temperatures (SSTs) and corresponding sea ice concentrations from the Hadley Centre; prescribed CO2 is radiatively active; and land, fossil fuel, and ocean CO2 fluxes are advected by the model. Both sets of

  13. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke, based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  14. Agent-Based vs. Equation-Based Epidemiological Models: A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
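    The two paradigms can be contrasted with a toy example: a deterministic equation-based SIR integrated by forward Euler next to a stochastic, well-mixed agent-based SIR with the same rates. Parameter values are illustrative assumptions, not a 1918 influenza calibration.

```python
import random

def sir_ode(beta, gamma, N, I0, dt, steps):
    """Equation-based SIR, forward-Euler integration; returns final attack fraction."""
    S, I, R = N - I0, I0, 0.0
    for _ in range(steps):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return R / N

def sir_abm(beta, gamma, N, I0, dt, steps, rng):
    """Agent-based SIR with well-mixed contacts; returns final attack fraction."""
    state = ["I"] * I0 + ["S"] * (N - I0)
    for _ in range(steps):
        p_inf = beta * state.count("I") / N * dt   # per-susceptible infection prob.
        nxt = []
        for s in state:
            if s == "S" and rng.random() < p_inf:
                nxt.append("I")
            elif s == "I" and rng.random() < gamma * dt:
                nxt.append("R")
            else:
                nxt.append(s)
        state = nxt
    return state.count("R") / N

rng = random.Random(0)
ode = sir_ode(beta=0.5, gamma=0.2, N=1000, I0=10, dt=0.1, steps=2000)
abm = sir_abm(beta=0.5, gamma=0.2, N=1000, I0=10, dt=0.1, steps=2000, rng=rng)
```

    The agent-based run fluctuates around the deterministic trajectory; comparing many stochastic replicates to the single ODE solution is exactly where the validation and model-selection questions discussed in the paper arise.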

  15. Modelling the optical properties of aerosols in a chemical transport model

    NASA Astrophysics Data System (ADS)

    Andersson, E.; Kahnert, M.

    2015-12-01

    According to the IPCC fifth assessment report (2013), clouds and aerosols still contribute to the largest uncertainty when estimating and interpreting changes to the Earth's energy budget. Therefore, understanding the interaction between radiation and aerosols is both crucial for remote sensing observations and modelling the climate forcing arising from aerosols. Carbon particles are the largest contributor to the aerosol absorption of solar radiation, thereby enhancing the warming of the planet. Modelling the radiative properties of carbon particles is a hard task and involves many uncertainties arising from the difficulties of accounting for the morphologies and heterogeneous chemical composition of the particles. This study aims to compare two ways of modelling the optical properties of aerosols simulated by a chemical transport model. The first method models particle optical properties as homogeneous spheres and are externally mixed. This is a simple model that is particularly easy to use in data assimilation methods, since the optics model is linear. The second method involves a core-shell internal mixture of soot, where sulphate, nitrate, ammonia, organic carbon, sea salt, and water are contained in the shell. However, by contrast to previously used core-shell models, only part of the carbon is concentrated in the core, while the remaining part is homogeneously mixed with the shell. The chemical transport model (CTM) simulations are done regionally over Europe with the Multiple-scale Atmospheric Transport and CHemistry (MATCH) model, developed by the Swedish Meteorological and Hydrological Institute (SMHI). The MATCH model was run with both an aerosol dynamics module, called SALSA, and with a regular "bulk" approach, i.e., a mass transport model without aerosol dynamics. Two events from 2007 are used in the analysis, one with high (22/12-2007) and one with low (22/6-2007) levels of elemental carbon (EC) over Europe. 
The results of the study help to assess the

  16. Theoretical kinetic studies of models for binding myosin subfragment-1 to regulated actin: Hill model versus Geeves model.

    PubMed Central

    Chen, Y; Yan, B; Chalovich, J M; Brenner, B

    2001-01-01

    It was previously shown that a one-dimensional Ising model could successfully simulate the equilibrium binding of myosin S1 to regulated actin filaments (T. L. Hill, E. Eisenberg and L. Greene, Proc. Natl. Acad. Sci. U.S.A. 77:3186-3190, 1980). However, the time course of myosin S1 binding to regulated actin was thought to be incompatible with this model, and a three-state model was subsequently developed (D. F. McKillop and M. A. Geeves, Biophys. J. 65:693-701, 1993). A quantitative analysis of the predicted time course of myosin S1 binding to regulated actin, however, was never done for either model. Here we present the procedure for the theoretical evaluation of the time course of myosin S1 binding for both models and then show that 1) the Hill model can predict the "lag" in the binding of myosin S1 to regulated actin that is observed in the absence of Ca++ when S1 is in excess of actin, and 2) both models generate very similar families of binding curves when [S1]/[actin] is varied. This result shows that, just based on the equilibrium and pre-steady-state kinetic binding data alone, it is not possible to differentiate between the two models. Thus, the model of Hill et al. cannot be ruled out on the basis of existing pre-steady-state and equilibrium binding data. Physical mechanisms underlying the generation of the lag in the Hill model are discussed. PMID:11325734

  17. Modelling land use change with generalized linear models--a multi-model analysis of change between 1860 and 2000 in Gallatin Valley, Montana.

    PubMed

    Aspinall, Richard

    2004-08-01

    This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. 
The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed
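    The model-selection machinery described above can be sketched with logistic regressions ranked by AIC on synthetic data. The IRLS fitter and generic covariates below are illustrations, not the Gallatin Valley variables.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via IRLS (Newton) updates."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return beta, loglik

# Synthetic presence/absence data: only x1 truly drives the outcome.
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x1)))).astype(float)

ones = np.ones(n)
candidates = {
    "intercept": np.column_stack([ones]),
    "x1": np.column_stack([ones, x1]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
# AIC = 2k - 2*loglik; smaller is better.
aic = {name: 2 * X.shape[1] - 2 * fit_logistic(X, y)[1] for name, X in candidates.items()}
best = min(aic, key=aic.get)
```

    Ranking period-specific models this way, or averaging predictions with Akaike weights, supports inference from the whole model set rather than from a single "best" model.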

  18. The EMEP MSC-W chemical transport model - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.

    2012-02-01

    The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 yr. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the EMEP model has covered all of Europe with a resolution of about 50 × 50 km2, and extending vertically from ground level to the tropopause (100 hPa). The model has undergone substantial development in recent years, and is now applied on scales ranging from local (ca. 5 km grid size) to global (with 1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all required input data for model runs for one year. Since then, many changes have been made to the model physics, and input data. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for early 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of input data-sets which are used, and brief background on some of the choices made in the formulation are presented. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.

  19. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between HUST08a and HUST11a models, HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for current mission with range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in the future mission, geophysical model error will be the main source for stripe error, which will limit the accuracy and spatial resolution of temporal gravity model. Therefore, observation error should be the primary error source taken into account at current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  20. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
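    The idea of a benchmarking score can be illustrated by collapsing a model-versus-observation bias into a 0-1 skill value. The exp(-|relative bias|) form and the numbers below are simplified assumptions for illustration, not ILAMB's actual scoring functions.

```python
import math

def bias_score(model_mean, obs_mean):
    """Hypothetical score: 1 for a perfect match with the observational
    benchmark, decaying toward 0 as relative bias grows."""
    rel_bias = abs(model_mean - obs_mean) / abs(obs_mean)
    return math.exp(-rel_bias)

obs_benchmark = 105.0  # illustrative observed global mean for some variable
models = {"model_a": 120.0, "model_b": 95.0, "multi_model_mean": 104.0}
scores = {name: bias_score(m, obs_benchmark) for name, m in models.items()}
```

    Consistent with the abstract's finding, a multi-model mean that sits near the observations outscores individual models with larger biases; a full system combines such scores across bias, seasonal cycle, interannual variability, and trend.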

  1. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
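    The "explode" step the abstract describes can be sketched directly: each survival record is split at the piece boundaries, yielding one Poisson pseudo-observation per piece with a log-exposure offset. This is a language-neutral illustration of the data step (the paper's own implementation is the SAS macro %PCFrailty; the field names here are invented):

```python
import math

def explode(time, event, cuts):
    """Split one survival record at the piece boundaries `cuts`, returning
    per-piece rows (piece index, event indicator, log-exposure offset)
    suitable as input to a Poisson GLM(M) with the offset included."""
    rows = []
    lower = 0.0
    for j, upper in enumerate(cuts + [float("inf")]):
        if time <= lower:
            break  # the subject exited before this piece started
        exposure = min(time, upper) - lower
        died = 1 if (event and time <= upper) else 0
        rows.append({"piece": j, "event": died, "offset": math.log(exposure)})
        lower = upper
    return rows

# A subject who dies at t = 2.5 with pieces [0,1), [1,2), [2,inf):
rows = explode(time=2.5, event=1, cuts=[1.0, 2.0])
print(rows)  # event indicator is 0, 0, 1 across the three pieces
```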

  2. MODEL VERSION CONTROL FOR GREAT LAKES MODELS ON UNIX SYSTEMS

    EPA Science Inventory

Scientific results of the Lake Michigan Mass Balance Project were presented, in which atrazine was measured and modeled. The presentation also described the model version control system, which has been used for models at Grosse Ile for approximately a decade and contains various version...

  3. Geometrical model for DBMS: an experimental DBMS using IBM solid modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, D.E.D.L.

    1985-01-01

This research presents a new model for data base management systems (DBMS). The new model, Geometrical DBMS, is based on using solid modelling technology in designing and implementing DBMS. The Geometrical DBMS is implemented using the IBM solid modelling Geometric Design Processor (GDP). Built on computer-graphics concepts, Geometrical DBMS is a unique model. Traditionally, researchers start with one of the existing DBMS models and then put a graphical front end on it. In Geometrical DBMS, the graphical aspect of the model is not an alien concept tailored to the model but is, as a matter of fact, the atom around which the model is designed. The main idea in Geometrical DBMS is to allow the user and the system to refer to and manipulate data items as solid objects in 3D space, and to represent a record as a group of logically related solid objects. In Geometrical DBMS, a hierarchical structure is used to represent the data relations and the user sees the data as a group of arrays; yet, for the user and the system together, the data structure is a multidimensional tree.

  4. AIDS Epidemiological models

    NASA Astrophysics Data System (ADS)

    Rahmani, Fouad Lazhar

    2010-11-01

The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].

  5. Equivalent model and power flow model for electric railway traction network

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-05-01

An equivalent model of the Cable Traction Network (CTN), considering the distributed capacitance effect of the cable system, is proposed. The model takes two forms: a 110 kV-side model and a 27.5 kV-side model. The 110 kV-side equivalent model can be used to calculate the power supply capacity of the CTN; the 27.5 kV-side equivalent model can be used to solve for the catenary voltage. Based on the equivalent simplified model of the CTN, a power flow model of the CTN, which involves the reactive power compensation coefficient and the interaction of voltage and current, is derived.

  6. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks

    PubMed Central

    Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina

    2018-01-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
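    EpiModel itself is an R package grounded in temporal exponential random graph models; the core idea of simulating epidemics over a contact network can nonetheless be sketched with a minimal discrete-time stochastic SIR process on a fixed graph (illustrative only; network dynamics and estimation are omitted):

```python
import random

def sir_on_network(adj, beta, gamma, seed_node, steps, rng):
    """Discrete-time stochastic SIR on a contact network: each infected node
    transmits to each susceptible neighbour with probability beta per step
    and recovers with probability gamma per step."""
    state = {n: "S" for n in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        nxt = dict(state)
        for n, s in state.items():
            if s == "I":
                for m in adj[n]:
                    if state[m] == "S" and rng.random() < beta:
                        nxt[m] = "I"
                if rng.random() < gamma:
                    nxt[n] = "R"
        state = nxt
    return state

# A 4-node line graph; with beta = 1 and gamma = 0 infection reaches every node.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = sir_on_network(adj, beta=1.0, gamma=0.0, seed_node=0, steps=3,
                       rng=random.Random(1))
print(final)  # all four nodes end up infected
```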

  7. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  8. Animation Augmented Reality Book Model (AAR Book Model) to Enhance Teamwork

    ERIC Educational Resources Information Center

    Chujitarom, Wannaporn; Piriyasurawong, Pallop

    2017-01-01

    This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess the AAR Book Model to enhance teamwork. Samples are five specialists that consist of one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…

  9. ENSO Simulation in Coupled Ocean-Atmosphere Models: Are the Current Models Better?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    AchutaRao, K; Sperber, K R

Maintaining a multi-model database over a generation or more of model development provides an important framework for assessing model improvement. Using control integrations, we compare the simulation of the El Nino/Southern Oscillation (ENSO), and its extratropical impact, in models developed for the 2007 Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report with models developed in the late 1990's (the so-called Coupled Model Intercomparison Project-2 [CMIP2] models). The IPCC models tend to be more realistic in representing the frequency with which ENSO occurs, and they are better at locating enhanced temperature variability over the eastern Pacific Ocean. When compared with reanalyses, the IPCC models have larger pattern correlations of tropical surface air temperature than do the CMIP2 models during the boreal winter peak phase of El Nino. However, for sea-level pressure and precipitation rate anomalies, a clear separation in performance between the two vintages of models is not as apparent. The strongest improvement occurs for the modeling groups whose CMIP2 model tended to have the lowest pattern correlations with observations. This has been checked by subsampling the multi-century IPCC simulations in a manner to be consistent with the single 80-year time segment available from CMIP2. Our results suggest that multi-century integrations may be required to statistically assess model improvement of ENSO. The quality of the El Nino precipitation composite is directly related to the fidelity of the boreal winter precipitation climatology, highlighting the importance of reducing systematic model error. Over North America, distinct improvement of El Nino forced boreal winter surface air temperature, sea-level pressure, and precipitation rate anomalies in the IPCC models occurs. This improvement is directly proportional to the skill of the tropical El Nino forced precipitation anomalies.
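    The pattern correlation used above is the centered spatial correlation between two anomaly fields. A minimal sketch (area weighting, which a real comparison on a latitude-longitude grid would include, is omitted for brevity):

```python
import numpy as np

def pattern_correlation(model, obs):
    """Centered spatial (pattern) correlation between two 2D fields:
    flatten, remove each field's mean, then correlate."""
    m = model.ravel() - model.mean()
    o = obs.ravel() - obs.mean()
    return float(m @ o / np.sqrt((m @ m) * (o @ o)))

obs = np.array([[1.0, -1.0], [2.0, -2.0]])
print(pattern_correlation(obs, obs))   # identical patterns -> 1.0
print(pattern_correlation(-obs, obs))  # opposite patterns  -> -1.0
```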

  10. Predictive Capability Maturity Model for computational modeling and simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  11. Identification of walking human model using agent-based modelling

    NASA Astrophysics Data System (ADS)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied-structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with an average of μ = 2.85 Hz and standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
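    Given the reported distributions (f ~ N(2.85, 0.34) Hz for natural frequency, ζ ~ N(0.295, 0.047) for damping ratio), sampling a crowd of SDOF pedestrian models is straightforward. The body mass below is an assumed placeholder, not a value from the paper:

```python
import math
import random

def sample_walker(rng):
    """Draw SDOF mass-spring-damper parameters for one walking pedestrian
    from the distributions reported in the abstract; mass is assumed."""
    f = rng.gauss(2.85, 0.34)        # natural frequency [Hz]
    zeta = rng.gauss(0.295, 0.047)   # damping ratio [-]
    m = 75.0                         # assumed body mass [kg] (placeholder)
    k = m * (2 * math.pi * f) ** 2   # stiffness [N/m]
    c = 2 * zeta * math.sqrt(k * m)  # damping coefficient [N s/m]
    return {"f": f, "zeta": zeta, "m": m, "k": k, "c": c}

rng = random.Random(0)
crowd = [sample_walker(rng) for _ in range(1000)]
mean_f = sum(w["f"] for w in crowd) / len(crowd)
print(round(mean_f, 2))  # sample mean frequency is close to 2.85 Hz
```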

  12. Experimental & Numerical Modeling of Non-combusting Model Firebrands' Transport

    NASA Astrophysics Data System (ADS)

    Tohidi, Ali; Kaye, Nigel

    2016-11-01

Fire spotting is one of the major mechanisms of wildfire spread. The three phases of this phenomenon are firebrand formation and break-off from burning vegetation, lofting and downwind transport of firebrands through the velocity field of the wildfire, and spot fire ignition upon landing. The lofting and downwind transport phase is modeled by conducting large-scale wind tunnel experiments. Non-combusting rod-like model firebrands with different aspect ratios are released within the velocity field of a jet in a boundary layer cross-flow that approximates the wildfire velocity field. Characteristics of the firebrand dispersion are quantified by capturing the full trajectory of the model firebrands using the developed image processing algorithm. The results show that the lofting height has a direct impact on the maximum travel distance of the model firebrands. Also, the experimental results are utilized for validation of a highly scalable coupled stochastic & parametric firebrand flight model that couples the LES-resolved velocity field of a jet-in-nonuniform-cross-flow (JINCF) with a 3D fully deterministic 6-degrees-of-freedom debris transport model. The validation results show that the developed numerical model is capable of estimating average statistics of the firebrands' flight. The authors would like to thank the National Science Foundation for support under Grant No. 1200560. The presenter (Ali Tohidi) would also like to thank Dr. Michael Gollner from the University of Maryland College Park for conference participation support.

  13. Modelling and model predictive control for a bicycle-rider system

    NASA Astrophysics Data System (ADS)

    Chu, T. D.; Chen, C. K.

    2018-01-01

    This study proposes a bicycle-rider control model based on model predictive control (MPC). First, a bicycle-rider model with leaning motion of the rider's upper body is developed. The initial simulation data of the bicycle rider are then used to identify the linear model of the system in state-space form for MPC design. Control characteristics of the proposed controller are assessed by simulating the roll-angle tracking control. In this riding task, the MPC uses steering and leaning torques as the control inputs to control the bicycle along a reference roll angle. The simulation results in different cases have demonstrated the applicability and performance of the MPC for bicycle-rider modelling.
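    The identified bicycle-rider state-space model itself is not given in the abstract, so the receding-horizon logic of MPC is sketched here on a stand-in scalar plant: predict the state over the horizon as an affine function of future inputs, solve the unconstrained least-squares tracking problem, and apply only the first input each step:

```python
import numpy as np

# Stand-in stable scalar plant x_{k+1} = a x_k + b u_k (purely illustrative;
# the paper identifies a multi-state bicycle-rider model instead).
a, b = 0.9, 0.1

def mpc_step(x, ref, horizon=10, r=0.001):
    """One receding-horizon MPC step for the scalar plant above."""
    # x_{i+1} = a^{i+1} x + sum_{j<=i} a^{i-j} b u_j  ->  X = F x + G u
    F = np.array([a ** (k + 1) for k in range(horizon)])
    G = np.array([[b * a ** (i - j) if j <= i else 0.0
                   for j in range(horizon)] for i in range(horizon)])
    # Minimize ||F x + G u - ref||^2 + r ||u||^2 over the input sequence u.
    H = G.T @ G + r * np.eye(horizon)
    u = np.linalg.solve(H, G.T @ (np.full(horizon, ref) - F * x))
    return float(u[0])  # receding horizon: apply the first input only

x = 0.0
for _ in range(50):
    x = a * x + b * mpc_step(x, ref=1.0)
print(round(x, 2))  # the closed loop settles close to the reference 1.0
```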

  14. One-month validation of the Space Weather Modeling Framework geospace model

    NASA Astrophysics Data System (ADS)

    Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.

    2017-12-01

The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetospheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January 2005 using three different model configurations. This provides a relatively large dataset for assessment of the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index, and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and under-predicts the magnitude of AL during disturbances. The model under-predicts the magnitude of night-side geosynchronous Bz, and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts stretching of the magnetotail and the overall size of the magnetotail. With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.
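    The abstract does not list the exact skill scores used; a typical summary metric for index predictions such as Sym-H is the prediction efficiency, the skill relative to always predicting the observed mean (1 = perfect, ≤ 0 = no better than climatology). A minimal sketch:

```python
import numpy as np

def prediction_efficiency(pred, obs):
    """Prediction efficiency: 1 - SSE / variance-about-the-mean.
    A common validation metric; whether this study used exactly this
    formulation is an assumption."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    return float(1.0 - np.sum((pred - obs) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

obs = [0.0, -20.0, -50.0, -30.0, -10.0]          # a toy Sym-H-like series [nT]
print(prediction_efficiency(obs, obs))            # perfect prediction -> 1.0
print(prediction_efficiency([obs[0]] * 5, obs))   # a flat guess scores <= 0
```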

  15. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  16. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  17. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in the developed model, and as a result of such approximations cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, and is computationally efficient.

  18. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural

  19. JEDI Natural Gas Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Natural Gas Model JEDI Natural Gas Model The Jobs and Economic Development Impacts (JEDI) Natural Gas model allows users to estimate economic development impacts from natural gas power generation -specific data should be used to obtain the best estimate of economic development impacts. This model has

  20. Model identification using stochastic differential equation grey-box models in diabetes.

    PubMed

    Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik

    2013-03-01

The acceptance of virtual preclinical testing of control algorithms is growing, and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
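    The split into two noise terms can be illustrated on a toy SDE: a diffusion (system noise) term enters the state dynamics, while a separate measurement noise term corrupts the observations. The Ornstein-Uhlenbeck dynamics and all parameter values below are invented for illustration; the paper's glucoregulatory model is far richer:

```python
import math
import random

def simulate_sde(x0, a, sigma_sys, sigma_obs, dt, n, rng):
    """Euler-Maruyama simulation of dX = -a*X dt + sigma_sys dW, observed
    with additive measurement noise of std sigma_obs. Returns the latent
    path xs and the noisy observations ys."""
    xs, ys = [x0], []
    x = x0
    for _ in range(n):
        # System (diffusion) noise scales with sqrt(dt).
        x = x - a * x * dt + sigma_sys * math.sqrt(dt) * rng.gauss(0, 1)
        xs.append(x)
        # Measurement noise is independent of the dynamics.
        ys.append(x + sigma_obs * rng.gauss(0, 1))
    return xs, ys

rng = random.Random(42)
xs, ys = simulate_sde(x0=5.0, a=1.0, sigma_sys=0.1, sigma_obs=0.2,
                      dt=0.01, n=1000, rng=rng)
print(round(xs[-1], 1))  # the state mean-reverts toward 0 from x0 = 5
```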

  1. The Space Weather Modeling Framework (SWMF): Models and Validation

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Toth, Gabor; Sokolov, Igor; de Zeeuw, Darren; van der Holst, Bart; Ridley, Aaron; Manchester, Ward, IV

In the last decade our group at the Center for Space Environment Modeling (CSEM) has developed the Space Weather Modeling Framework (SWMF) that efficiently couples together different models describing the interacting regions of the space environment. Many of these domain models (such as the global solar corona, the inner heliosphere or the global magnetosphere) are based on MHD and are represented by our multiphysics code, BATS-R-US. SWMF is a powerful tool for coupling regional models describing the space environment from the solar photosphere to the bottom of the ionosphere. Presently, SWMF contains over a dozen components: the solar corona (SC), eruptive event generator (EE), inner heliosphere (IH), outer heliosphere (OH), solar energetic particles (SE), global magnetosphere (GM), inner magnetosphere (IM), radiation belts (RB), plasmasphere (PS), ionospheric electrodynamics (IE), polar wind (PW), upper atmosphere (UA) and lower atmosphere (LA). This talk will present an overview of SWMF, new results obtained with improved physics as well as some validation studies.

  2. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

Models of geographical choice behavior have predominantly been based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
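    The three classical heuristic rules the paper extends can be sketched plainly (the attribute names, values, and thresholds below are invented; the paper additionally makes the thresholds heterogeneous across decision makers):

```python
def conjunctive(attrs, thresholds):
    """Accept an alternative only if every attribute clears its threshold."""
    return all(attrs[k] >= t for k, t in thresholds.items())

def disjunctive(attrs, thresholds):
    """Accept if at least one attribute clears its threshold."""
    return any(attrs[k] >= t for k, t in thresholds.items())

def lexicographic(alternatives, order):
    """Choose by the most important attribute first, breaking ties
    with the next attribute in `order`."""
    return max(alternatives, key=lambda alt: tuple(alt[k] for k in order))

# Hypothetical shop attributes on a 0-1 scale:
shop = {"attractiveness": 0.7, "proximity": 0.4}
print(conjunctive(shop, {"attractiveness": 0.5, "proximity": 0.5}))  # False
print(disjunctive(shop, {"attractiveness": 0.5, "proximity": 0.5}))  # True
shops = [{"attractiveness": 0.7, "proximity": 0.4},
         {"attractiveness": 0.9, "proximity": 0.1}]
print(lexicographic(shops, ["attractiveness", "proximity"]))  # the 0.9 shop
```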

  3. Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models.

    PubMed

    Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B

    2015-01-01

    The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.

  4. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models

    PubMed Central

    Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng

    2013-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
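    Estimation in state space form rests on the Kalman filter recursion mentioned above. A deliberately minimal scalar version (the paper's models are bivariate and hierarchical, with population-average and subject-specific components; none of that structure is shown here):

```python
def kalman_1d(ys, a, q, r, x0, p0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w (var q),
    y_t = x_t + v (var r). Returns the filtered state estimates."""
    x, p, out = x0, p0, []
    for y in ys:
        # Predict one step ahead.
        x, p = a * x, a * a * p + q
        # Update with the new observation.
        k = p / (p + r)                     # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p
        out.append(x)
    return out

ys = [1.0, 1.0, 1.0, 1.0]                   # a constant observed signal
est = kalman_1d(ys, a=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0)
print(est)  # estimates rise monotonically from 0 toward the signal level 1
```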

  5. The VSGB 2.0 Model: A Next Generation Energy Model for High Resolution Protein Structure Modeling

    PubMed Central

    Li, Jianing; Abel, Robert; Zhu, Kai; Cao, Yixiang; Zhao, Suwen; Friesner, Richard A.

    2011-01-01

    A novel energy model (VSGB 2.0) for high resolution protein structure modeling is described, which features an optimized implicit solvent model as well as physics-based corrections for hydrogen bonding, π-π interactions, self-contact interactions and hydrophobic interactions. Parameters of the VSGB 2.0 model were fit to a crystallographic database of 2239 single side chain and 100 11–13 residue loop predictions. Combined with an advanced method of sampling and a robust algorithm for protonation state assignment, the VSGB 2.0 model was validated by predicting 115 super long loops up to 20 residues. Despite the dramatically increasing difficulty in reconstructing longer loops, a high accuracy was achieved: all of the lowest energy conformations have global backbone RMSDs better than 2.0 Å from the native conformations. Average global backbone RMSDs of the predictions are 0.51, 0.63, 0.70, 0.62, 0.80, 1.41, and 1.59 Å for 14, 15, 16, 17, 18, 19, and 20 residue loop predictions, respectively. When these results are corrected for possible statistical bias as explained in the text, the average global backbone RMSDs are 0.61, 0.71, 0.86, 0.62, 1.06, 1.67, and 1.59 Å. Given the precision and robustness of the calculations, we believe that the VSGB 2.0 model is suitable to tackle “real” problems, such as biological function modeling and structure-based drug discovery. PMID:21905107
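    The global backbone RMSD metric used above to score loop predictions is simple to compute once two coordinate sets are superimposed. The sketch below uses made-up coordinates and assumes the superposition has already been done (no Kabsch alignment is performed).

```python
import numpy as np

def backbone_rmsd(a, b):
    """Root-mean-square deviation between two (N, 3) coordinate sets,
    assuming the structures are already superimposed."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Toy 4-atom "backbone" shifted rigidly by 1 Angstrom along x.
native = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
model = native + np.array([1.0, 0.0, 0.0])
print(backbone_rmsd(native, model))  # → 1.0
```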

  6. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    PubMed Central

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEMs, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEMs for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definitions of the metrics used. PMID:27884052

  7. Comparison between fully distributed model and semi-distributed model in urban hydrology modeling

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe

    2013-04-01

    Water management in urban areas is becoming more and more complex, especially because of a rapid increase of impervious areas. There will also possibly be an increase of extreme precipitation due to climate change. The aims of the devices implemented to handle the large amount of water generated by urban areas, such as storm water retention basins, are usually twofold: ensure pluvial flood protection and water depollution. These two aims imply opposite management strategies. To optimize the use of these devices there is a need to implement urban hydrological models and improve fine-scale rainfall estimation, which is the most significant input. In this paper we compare two models and their sensitivity to small-scale rainfall variability on a 2.15 km2 urban area located in the County of Val-de-Marne (South-East of Paris, France). The average impervious coefficient is approximately 34%. In this work two types of models are used. The first one is CANOE, which is semi-distributed. Such models are widely used by practitioners for urban hydrology modeling and urban water management. Indeed, they are easily configurable and the computation time is reduced, but these models do not take fully into account either the variability of the physical properties or the variability of the precipitation. An alternative is to use distributed models that are harder to configure and require a greater computation time, but they enable a deeper analysis (especially at small scales and upstream) of the processes at stake. We used the Multi-Hydro fully distributed model developed at the Ecole des Ponts ParisTech. It is an interacting core between open source software packages, each of them representing a portion of the water cycle in urban environment. Four heavy rainfall events that occurred between 2009 and 2011 are analyzed. The data comes from the Météo-France radar mosaic and the resolution is 1 km in space and 5 min in time. The closest radar of the Météo-France network is

  8. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601

  9. Distinguishing Antimicrobial Models with Different Resistance Mechanisms via Population Pharmacodynamic Modeling

    PubMed Central

    Jacobs, Matthieu; Grégoire, Nicolas; Couet, William; Bulitta, Jurgen B.

    2016-01-01

    Semi-mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling is increasingly used for antimicrobial drug development and optimization of dosage regimens, but systematic simulation-estimation studies to distinguish between competing PD models are lacking. This study compared the ability of static and dynamic in vitro infection models to distinguish between models with different resistance mechanisms and support accurate and precise parameter estimation. Monte Carlo simulations (MCS) were performed for models with one susceptible bacterial population without (M1) or with a resting stage (M2), a one-population model with adaptive resistance (M5), models with pre-existing susceptible and resistant populations without (M3) or with (M4) inter-conversion, and a model with two pre-existing populations with adaptive resistance (M6). For each model, 200 datasets of the total bacterial population were simulated over 24 h using static antibiotic concentrations (256-fold concentration range) or over 48 h under dynamic conditions (dosing every 12 h; elimination half-life: 1 h). Twelve-hundred random datasets (each containing 20 curves for static or four curves for dynamic conditions) were generated by bootstrapping. Each dataset was estimated by all six models via population PD modeling to compare bias and precision. For M1 and M3, most parameter estimates were unbiased (<10%) and had low imprecision (<30%). However, parameters for adaptive resistance and inter-conversion for M2, M4, M5 and M6 had poor bias and large imprecision under static and dynamic conditions. For datasets that only contained viable counts of the total population, common statistical criteria and diagnostic plots did not support sound identification of the true resistance mechanism. Therefore, it seems advisable to quantify resistant bacteria and characterize their MICs and resistance mechanisms to support extended simulations and translate from in vitro experiments to animal infection models and

  10. Agricultural model intercomparison and improvement project: Overview of model intercomparisons

    USDA-ARS?s Scientific Manuscript database

    Improvement of crop simulation models to better estimate growth and yield is one of the objectives of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The overall goal of AgMIP is to provide an assessment of crop model through rigorous intercomparisons and evaluate future clim...

  11. Modeling crop water productivity using a coupled SWAT-MODSIM model

    USDA-ARS?s Scientific Manuscript database

    This study examines the water productivity of irrigated wheat and maize yields in Karkheh River Basin (KRB) in the semi-arid region of Iran using a coupled modeling approach consisting of the hydrological model (SWAT) and the river basin water allocation model (MODSIM). Dynamic irrigation requireme...

  12. Food addiction prevalence and concurrent validity in African American adolescents with obesity.

    PubMed

    Schulte, Erica M; Jacques-Tiura, Angela J; Gearhardt, Ashley N; Naar, Sylvie

    2018-03-01

    Food addiction, measured by the Yale Food Addiction Scale (YFAS), has been associated with obesity, eating-related problems (e.g., bingeing), and problematic consumption of highly processed foods. Studies on this topic have primarily examined adult samples with an overrepresentation of White individuals, and little is known about addictive-like eating in adolescents, particularly African American adolescents, who exhibit high rates of obesity and eating pathology. The current study examined the prevalence of food addiction and its convergent validity with percent overweight, eating-related problems, and self-reported dietary intake in a sample of 181 African American adolescents with obesity. Approximately 10% of participants met criteria for food addiction, measured by the YFAS for children (YFAS-C). YFAS-C scores were most strongly associated with objective binge episodes (OBE), though significant relationships were also observed with objective overeating episodes (OOE), percent overweight relative to age- and sex-adjusted body mass index (BMI), and, more modestly, subjective binge episodes (SBE). YFAS-C scores were also related to greater consumption of all nutrient characteristics of interest (calories, fat, saturated fat, trans fat, carbohydrates, sugar, added sugar), though most strongly with trans fat, a type of fat found most frequently in highly processed foods. These findings suggest that exhibiting a loss of control while consuming an objectively large amount of food seems to be most implicated in food addiction for African American adolescents with obesity. The present work also provides evidence that individuals with food addiction may consume elevated quantities of highly processed foods, relative to those without addictive-like eating.

  13. Investigation of prospective teachers' knowledge and understanding of models and modeling and their attitudes towards the use of models in science education

    NASA Astrophysics Data System (ADS)

    Aktan, Mustafa B.

    The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching, through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and with grounded theory methods. Earlier studies on in-service science teachers' understanding of the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited, and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of

  14. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
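    The core idea of reconstructing a full shape from a fixed partial observation under a linear shape model can be sketched with plain least squares. This is a simplified stand-in for the paper's Probabilistic PCA treatment (no noise model and no posterior covariance over the coefficients), and every dimension and value below is illustrative.

```python
import numpy as np

def reconstruct_from_partial(mu, W, x_obs, obs_idx):
    """Least-squares reconstruction of a full shape vector from a fixed
    subset of its entries under a linear shape model x = mu + W @ c.
    (Simplified: no noise model, no uncertainty over c.)"""
    c, *_ = np.linalg.lstsq(W[obs_idx], x_obs - mu[obs_idx], rcond=None)
    return mu + W @ c

# Toy 6-dimensional "shape" with 2 modes of variation (all values illustrative).
rng = np.random.default_rng(2)
mu = rng.normal(size=6)
W = rng.normal(size=(6, 2))
x_full = mu + W @ np.array([1.5, -0.5])   # ground-truth shape
obs = np.array([0, 2, 4, 5])              # four coordinates held fixed
x_rec = reconstruct_from_partial(mu, W, x_full[obs], obs)
print(np.allclose(x_rec, x_full))         # → True in this noise-free case
```

    The probabilistic version replaces the point estimate of c with a Gaussian posterior, whose covariance is what the paper visualizes as the remaining flexibility.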

  15. Modeling rainfall-runoff relationship using multivariate GARCH model

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2013-08-01

    The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and ground water fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
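    The GARCH(1,1) conditional variance recursion at the core of both MGARCH specifications can be written down directly. The univariate sketch below uses illustrative parameter values (alpha + beta close to 1, i.e. strong persistence as described for the rainfall variance), not estimates from the rainfall-runoff data.

```python
import numpy as np

def garch11_variance(eps, omega, alpha, beta):
    """Conditional variance recursion of a univariate GARCH(1,1):
        sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    """
    eps = np.asarray(eps, float)
    sigma2 = np.empty_like(eps)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulate a persistent shock series and filter its variance back out.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.1, 0.1, 0.85   # illustrative, persistent parameters
n = 500
eps, path = np.empty(n), np.empty(n)
s2 = omega / (1.0 - alpha - beta)
for t in range(n):
    path[t] = s2
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * eps[t] ** 2 + beta * s2
sigma2 = garch11_variance(eps, omega, alpha, beta)
print(np.allclose(sigma2, path))  # → True: the filter recovers the simulated path
```

    The diagonal VECH and CCC models apply the same recursion element-wise to the variance-covariance matrix of the bivariate rainfall-runoff series.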

  16. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  17. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in e.g. the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
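    The sum-of-two-basic-models structure described above can be illustrated with a toy in-plane BRDF: a Lambertian (constant) diffuse term plus a Gaussian specular lobe peaked at the mirror angle. The parameterization below (rho_d, rho_s, m) is an illustrative sketch, not the paper's four-parameter model, and it ignores Fresnel factors and normalization.

```python
import numpy as np

def two_component_brdf(theta_i, theta_r, rho_d, rho_s, m):
    """Toy sum of a Lambertian diffuse term (rho_d / pi) and a Gaussian
    specular lobe of roughness m peaked at theta_r = theta_i.
    Illustrative only: no Fresnel term, no hemispherical normalization."""
    diffuse = rho_d / np.pi
    specular = rho_s * np.exp(-(theta_r - theta_i) ** 2 / (2.0 * m ** 2))
    return diffuse + specular

# Lambertian limit: with rho_s = 0 the BRDF is flat at rho_d / pi.
print(two_component_brdf(0.3, 0.7, rho_d=1.0, rho_s=0.0, m=0.1))
```

    In the same spirit, sweeping theta_r at fixed theta_i shows the specular lobe dominating near the mirror direction and the diffuse floor elsewhere.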

  18. MULTIVARIATE RECEPTOR MODELS AND MODEL UNCERTAINTY. (R825173)

    EPA Science Inventory

    Abstract

    Estimation of the number of major pollution sources, the source composition profiles, and the source contributions are the main interests in multivariate receptor modeling. Due to lack of identifiability of the receptor model, however, the estimation cannot be...

  19. Global and regional ecosystem modeling: comparison of model outputs and field measurements

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Hibbard, K.

    2003-04-01

    The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements, but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of

  20. Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Ryu, Ehri; West, Stephen G.

    2009-01-01

    In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…