Abraham, Tobin M.; Massaro, Joseph M.; Hoffmann, D. Udo; Yanovski, Jack A.; Fox, Caroline S.
2014-01-01
OBJECTIVE To describe the metabolic profile of individuals with objective binge eating (OBE) and to evaluate whether associations between OBE and metabolic risk factors are mediated by body mass index (BMI). DESIGN AND METHODS Participants from the Framingham Heart Study, Third Generation and Omni 2 cohorts (n = 3551, 53.1% women, mean age 46.4 years) were screened for binge eating. We used multivariable-adjusted regression models to examine the associations of OBE with metabolic risk factors. RESULTS The prevalence of OBE was 4.8% in women and 4.9% in men. Compared to non-binge eating, OBE was associated with higher odds of hypertension (OR 1.85, 95% CI 1.32–2.60), hypertriglyceridemia (OR 1.42, 95% CI 1.01–2.01), low HDL (OR 1.70, 95% CI 1.18–2.44), insulin resistance (OR 3.18, 95% CI 2.25–4.50) and metabolic syndrome (OR 2.75, 95% CI 1.94–3.90). Fasting glucose was 7.2 mg/dl higher in those with OBE (p=0.0001). Individuals with OBE had more visceral, subcutaneous and liver fat. Most of these associations were attenuated with adjustment for BMI, with the exception of fasting glucose. CONCLUSIONS Binge eating is associated with a high burden of metabolic risk factors. Much of the associated risk appears to be mediated by BMI, with the exception of fasting glucose. PMID:25136837
A Narrative Synthesis of Women's Out-of-Body Experiences During Childbirth.
Bateman, Lynda; Jones, Catriona; Jomeen, Julie
2017-07-01
Some women have a dissociated, out-of-body experience (OBE) during childbirth, which may be described as seeing the body from above or floating above the body. This review examines this phenomenon using narratives from women who have experienced intrapartum OBEs. A narrative synthesis of qualitative research was employed to systematically synthesize OBE narratives from existing studies. Strict inclusion and exclusion criteria were applied. The included papers were critiqued by 2 of the authors to determine the appropriateness of the narrative synthesis method, procedural transparency, and soundness of the interpretive approach. Women experiencing OBEs during labor and birth report a disembodied state in the presence of stress or trauma. Three forms of OBEs are described: floating above the scene, remaining close to the scene, or full separation of a body part from the main body. Women had clear recall of OBEs, describing the experience and point of occurrence. Women who reported OBEs had experienced current or previous traumatic childbirth, or trauma in a non-birth situation. OBEs as prosaic experiences were not identified. OBEs are part of the lived experience of some women giving birth. The OBEs in this review were trauma related with some women disclosing previous posttraumatic stress disorder (PTSD). It is not evident whether there is a connection between PTSD and OBEs at present, and OBEs may serve as a potential coping mechanism in the presence of trauma. Clinicians should legitimize women's disclosure of OBEs and explore and ascertain their impact, either as a normal coping mechanism or a precursor to perinatal mental illness. Research into the function of OBEs and any relationship to PTSD may assist in early interventions for childbearing women. © 2017 by the American College of Nurse-Midwives.
Mohieldein, Abdelmarouf H
2017-03-01
The rapid change worldwide, as a consequence of advances in science and technology, necessitates the graduation of well-qualified graduates who have the appropriate knowledge and skills to fulfill specific work requirements. Hence, redesigning academic models by focusing on educational outcomes became the target and priority for universities around the world. In this systematic review we collected and retrieved literature using a selection of electronic databases. The objectives of this report is to: 1) provide an overview of the evolution of outcome-based education (OBE), (2) illustrate the philosophy and principle of OBE, (3) list the OBE advantages and benefits, (4) describe the assessment strategies used in OBE, and (5) discuss the role of teachers and students as key elements. In conclusion, there is growing interest by the Saudi government to provide student-centered education in their institutes of higher education to graduate students with the necessary knowledge and skill experiences. Moreover, OBE is considered a holistic approach which offers a powerful and appealing way of reforming and managing medical education for mastery in learning and to meet the prerequisites for local and international accreditation.
OBE EAP-EOP Model: A Proposed Instructional Design in English for Specific Purposes
ERIC Educational Resources Information Center
Hernandez, Hjalmar Punla
2016-01-01
Outcome-Based Education (OBE) demands innovative Instructional Designs (ID) in the 21st century. Being a descriptive-qualitative research, this paper aimed to (1) identify the ID used in the English language curricula of a private tertiary level institution in the Southern Luzon, Philippines, (2) determined the elements that the ID of the English…
From Special Education to an Inclusive, Outcomes-Based System.
ERIC Educational Resources Information Center
Naicker, Sigamoney
2001-01-01
This article discusses shifting from special education to inclusive, outcomes-based education (OBE) in South Africa. It examines why there is a shift toward OBE, different educational paradigms, and shifting from fundamental pedagogy to OBE. Necessary changes are highlighted, and include a shift from classification to using OBE for progression and…
Outcomes Based Education Re-Examined: From Structural Functionalism to Poststructuralism.
ERIC Educational Resources Information Center
Capper, Colleen A.; Jamison, Michael T.
Outcomes Based Education (OBE) is viewed as a drastic break from current educational practices and a means of providing educational success for all students. OBE is also advocated as a practice that lead to educational inequity. This paper reexamines OBE from a multiparadigm perspective of organizations and educational administration. OBE is based…
Alluvial Bars of the Obed Wild and Scenic River, Tennessee
Wolfe, W.J.; Fitch, K.C.; Ladd, D.E.
2007-01-01
In 2004, the U.S. Geological Survey (USGS) and the National Park Service (NPS) initiated a reconnaissance study of alluvial bars along the Obed Wild and Scenic River (Obed WSR), in Cumberland and Morgan Counties, Tennessee. The study was partly driven by concern that trapping of sand by upstream impoundments might threaten rare, threatened, or endangered plant habitat by reducing the supply of sediment to the alluvial bars. The objectives of the study were to: (1) develop a preliminary understanding of the distribution, morphology, composition, stability, and vegetation structure of alluvial bars along the Obed WSR, and (2) determine whether evidence of human alteration of sediment dynamics in the Obed WSR warrants further, more detailed examination. This report presents the results of the reconnaissance study of alluvial bars along the Obed River, Clear Creek, and Daddys Creek in the Obed WSR. The report is based on: (1) field-reconnaissance visits by boat to 56 alluvial bars along selected reaches of the Obed River and Clear Creek; (2) analysis of aerial photographs, topographic and geologic maps, and other geographic data to assess the distribution of alluvial bars in the Obed WSR; (3) surveys of topography, surface particle size, vegetation structure, and ground cover on three selected alluvial bars; and (4) analysis of hydrologic records.
Relativistic proton-nucleus scattering and one-boson-exchange models
NASA Technical Reports Server (NTRS)
Maung, Khin Maung; Gross, Franz; Tjon, J. A.; Townsend, L. W.; Wallace, S. J.
1993-01-01
Relativistic p-(Ca-40) elastic scattering observables are calculated using four sets of relativistic NN amplitudes obtained from different one-boson-exchange (OBE) models. The first two sets are based upon a relativistic equation in which one particle is on mass shell and the other two sets are obtained from a quasipotential reduction of the Bethe-Salpeter equation. Results at 200, 300, and 500 MeV are presented for these amplitudes. Differences between the predictions of these models provide a study of the uncertainty in constructing Dirac optical potentials from OBE-based NN amplitudes.
Morrison, S Y; Pastor, J J; Quintela, J C; Holst, J J; Hartmann, B; Drackley, J K; Ipharraguerre, I R
2017-03-01
Diarrhea episodes in dairy calves involve profound alterations in the mechanism controlling gut barrier function that ultimately compromise intestinal permeability to macromolecules, including pathogenic bacteria. Intestinal dysfunction models suggest that a key element of intestinal adaptation during the neonatal phase is the nutrient-induced secretion of glucagon-like peptide (GLP)-2 and associated effects on mucosal cell proliferation, barrier function, and inflammatory response. Bioactive molecules found in Olea europaea have been shown to induce the release of regulatory peptides from model enteroendocrine cells. The ability to enhance GLP-2 secretion via the feeding of putative GLP-2 secretagogues is untested in newborn calves. The objectives of this study were to determine whether feeding a bioactive extract from Olea europaea (OBE) mixed in the milk replacer (1) can stimulate GLP-2 secretion beyond the response elicited by enteral nutrients and, thereby, (2) improve intestinal permeability and animal growth as well as (3) reduce the incidence of diarrhea in preweaning dairy calves. Holstein heifer calves (n = 60) were purchased, transported to the research facility, and blocked by body weight and total serum protein and assigned to 1 of 3 treatments. Treatments were control (CON), standard milk replacer (MR) and ad libitum starter; CON plus OBE added into MR at 30 mg/kg of body weight (OBE30); and CON plus OBE added into MR at 60 mg/kg of body weight (OBE60). The concentration of GLP-2 was measured at the end of wk 2. Intestinal permeability was measured at the onset of the study and the end of wk 2 and 6, with lactulose and d-mannitol as markers. Treatments did not affect calf growth and starter intake. Compared with CON, administration of OBE60 increased the nutrient-induced response in GLP-2 by about 1 fold and reduced MR intake during the second week of study. Throughout the study, however, all calves had compromised intestinal permeability and a high incidence of diarrhea. The GLP-2 response elicited by OBE60 did not improve intestinal permeability (lactulose-to-d-mannitol ratio) and incidence of diarrhea over the course of the preweaning period. The response in GLP-2 secretion to the administration of OBE reported herein warrants further research efforts to investigate the possibility of improving intestinal integrity through GLP-2 secretion in newborn calves. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Fitzsimmons-Craft, Ellen E.; Ciao, Anna C.; Accurso, Erin C.; Pisetsky, Emily M.; Peterson, Carol B.; Byrne, Catherine E.; Le Grange, Daniel
2014-01-01
This study investigated the importance of the distinction between objective (OBE) and subjective binge eating (SBE) among 80 treatment-seeking adolescents with bulimia nervosa (BN). We explored relationships among OBEs, SBEs, eating disorder (ED) symptomatology, depression, and self-esteem using two approaches. Group comparisons showed that OBE and SBE groups did not differ on ED symptoms or self-esteem; however, the SBE group had significantly greater depression. Examining continuous variables, OBEs (not SBEs) accounted for significant unique variance in global ED pathology, vomiting, and self-esteem. SBEs (not OBEs) accounted for significant unique variance in restraint and depression. Both OBEs and SBEs accounted for significant unique variance in eating concern; neither accounted for unique variance in weight/shape concern, laxative use, diuretic use, or driven exercise. Loss of control, rather than amount of food, may be most important in defining binge eating. Additionally, OBEs may indicate broader ED pathology while SBEs may indicate restrictive/depressive symptomatology. PMID:24852114
Fitzsimmons-Craft, Ellen E; Ciao, Anna C; Accurso, Erin C; Pisetsky, Emily M; Peterson, Carol B; Byrne, Catherine E; Le Grange, Daniel
2014-07-01
This study investigated the importance of the distinction between objective (OBE) and subjective binge eating (SBE) among 80 treatment-seeking adolescents with bulimia nervosa. We explored relationships among OBEs, SBEs, eating disorder (ED) symptomatology, depression, and self-esteem using two approaches. Group comparisons showed that OBE and SBE groups did not differ on ED symptoms or self-esteem; however, the SBE group had significantly greater depression. Examining continuous variables, OBEs (not SBEs) accounted for significant unique variance in global ED pathology, vomiting, and self-esteem. SBEs (not OBEs) accounted for significant unique variance in restraint and depression. Both OBEs and SBEs accounted for significant unique variance in eating concern; neither accounted for unique variance in weight/shape concern, laxative use, diuretic use, or driven exercise. Loss of control, rather than amount of food, may be most important in defining binge eating. Additionally, OBEs may indicate broader ED pathology, while SBEs may indicate restrictive/depressive symptomatology. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.
Hydrologic data for the Obed River watershed, Tennessee
Knight, Rodney R.; Wolfe, William J.; Law, George S.
2014-01-01
The Obed River watershed drains a 520-square-mile area of the Cumberland Plateau physiographic region in the Tennessee River basin. The watershed is underlain by conglomerate, sandstone, and shale of Pennsylvanian age, which overlie Mississippian-age limestone. The larger creeks and rivers of the Obed River system have eroded gorges through the conglomerate and sandstone into the deeper shale. The largest gorges are up to 400 feet deep and are protected by the Wild and Scenic Rivers Act as part of the Obed Wild and Scenic River, which is managed by the National Park Service. The growing communities of Crossville and Crab Orchard, Tennessee, are located upstream of the gorge areas of the Obed River watershed. The cities used about 5.8 million gallons of water per day for drinking water in 2010 from Lake Holiday and Stone Lake in the Obed River watershed and Meadow Park Lake in the Caney Fork River watershed. The city of Crossville operates a wastewater treatment plant that releases an annual average of about 2.2 million gallons per day of treated effluent to the Obed River, representing as much as 10 to 40 percent of the monthly average streamflow of the Obed River near Lancing about 35 miles downstream, during summer and fall. During the past 50 years (1960–2010), several dozen tributary impoundments and more than 2,000 small farm ponds have been constructed in the Obed River watershed. Synoptic streamflow measurements indicate a tendency towards dampened high flows and slightly increased low flows as the percentage of basin area controlled by impoundments increases.
Beyond Traditional Outcome-Based Education.
ERIC Educational Resources Information Center
Spady, William G.; Marshall, Kit J.
1991-01-01
Transitional outcome-based education lies in the twilight zone between traditional subject matter curriculum structures and planning processes and the future-role priorities inherent in transformational OBE. Districts go through incorporation, integration, and redefinition stages in implementing transitional OBE. Transformational OBE's guiding…
Morcke, Anne Mette; Dornan, Tim; Eika, Berit
2013-10-01
Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education.
Higher order thinking skills competencies required by outcomes-based education from learners.
Chabeli, M M
2006-08-01
Outcomes-Based Education (OBE) brought about a significant paradigm shift in the education and training of learners in South Africa. OBE requires a shift from focusing on the teacher input (instruction offerings or syllabuses expressed in terms of content), to focusing on learner outcomes. OBE is moving away from 'transmission' models to constructivistic, learner-centered models that put emphasis on learning as an active process (Nieburh, 1996:30). Teachers act as facilitators and mediators of learning (Norms and Standards, Government Gazette vol 415, no 20844 of 2000). Facilitators are responsible to create the environment that is conducive for learners to construct their own knowledge, skills and values through interaction (Peters, 2000). The first critical cross-field outcome accepted by the South African Qualification Framework (SAQA) is that learners should be able to identify and solve problems by using critical and creative thinking skills. This paper seeks to explore some higher order thinking skills competencies required by OBE from learners such as critical thinking, reflective thinking, creative thinking, dialogic / dialectic thinking, decision making, problem solving and emotional intelligence and their implications in facilitating teaching and learning from the theoretical perspective. The philosophical underpinning of these higher order thinking skills is described to give direction to the study. It is recommended that a study focusing on the assessment of these intellectual concepts be made. The study may be qualitative, quantitative or mixed methods in nature (Creswell 2005).
Antonov, Ivan O; Barker, Beau J; Heaven, Michael C
2011-01-28
The ground electronic state of BeOBe(+) was probed using the pulsed-field ionization zero electron kinetic energy photoelectron technique. Spectra were rotationally resolved and transitions to the zero-point level, the symmetric stretch fundamental and first two bending vibrational levels were observed. The rotational state symmetry selection rules confirm that the ground electronic state of the cation is (2)Σ(g)(+). Detachment of an electron from the HOMO of neutral BeOBe results in little change in the vibrational or rotational constants, indicating that this orbital is nonbonding in nature. The ionization energy of BeOBe [65480(4) cm(-1)] was refined over previous measurements. Results from recent theoretical calculations for BeOBe(+) (multireference configuration interaction) were found to be in good agreement with the experimental data.
ERIC Educational Resources Information Center
Deneen, Christopher; Brown, Gavin T. L.; Bond, Trevor G.; Shroff, Ronnie
2013-01-01
Outcome-based education (OBE) is a current initiative in Hong Kong universities, with widespread backing by governments and standards bodies. However, study of students' perceptions of OBE and validation of understanding these perceptions are lacking. This paper reports on the validation of an OBE-specific instrument and resulting preliminary…
The body unbound: vestibular-motor hallucinations and out-of-body experiences.
Cheyne, J Allan; Girard, Todd A
2009-02-01
Among the varied hallucinations associated with sleep paralysis (SP), out-of-body experiences (OBEs) and vestibular-motor (V-M) sensations represent a distinct factor. Recent studies of direct stimulation of vestibular cortex report a virtually identical set of bodily-self hallucinations. Both programs of research agree on numerous details of OBEs and V-M experiences and suggest similar hypotheses concerning their association. In the present study, self-report data from two on-line surveys of SP-related experiences were employed to assess hypotheses concerning the causal structure of relations among V-M experiences and OBEs during SP episodes. The results complement neurophysiological evidence and are consistent with the hypothesis that OBEs represent a breakdown in the normal binding of bodily-self sensations and suggest that out-of-body feelings (OBFs) are consequences of anomalous V-M experiences and precursors to a particular form of autoscopic experience, out-of-body autoscopy (OBA). An additional finding was that vestibular and motor experiences make relatively independent contributions to OBE variance. Although OBEs are superficially consistent with universal dualistic and supernatural intuitions about the nature of the soul and its relation to the body, recent research increasingly offers plausible alternative naturalistic explanations of the relevant phenomenology.
Braithwaite, Jason J.; James, Kelly; Dewe, Hayley; Medford, Nick; Takahashi, Chie; Kessler, Klaus
2013-01-01
It has been argued that hallucinations which appear to involve shifts in egocentric perspective (e.g., the out-of-body experience, OBE) reflect specific biases in exocentric perspective-taking processes. Via a newly devised perspective-taking task, we examined whether such biases in perspective-taking were present in relation to specific dissociative anomalous body experiences (ABE) – namely the OBE. Participants also completed the Cambridge Depersonalization Scale (CDS; Sierra and Berrios, 2000) which provided measures of additional embodied ABE (unreality of self) and measures of derealization (unreality of surroundings). There were no reliable differences in the level of ABE, emotional numbing, and anomalies in sensory recall reported between the OBE and control group as measured by the corresponding CDS subscales. In contrast, the OBE group did provide significantly elevated measures of derealization (“alienation from surroundings” CDS subscale) relative to the control group. At the same time we also found that the OBE group was significantly more efficient at completing all aspects of the perspective-taking task relative to controls. Collectively, the current findings support fractionating the typically unitary notion of dissociation by proposing a distinction between embodied dissociative experiences and disembodied dissociative experiences – with only the latter being associated with exocentric perspective-taking mechanisms. Our findings – obtained with an ecologically valid task and a homogeneous OBE group – also call for a re-evaluation of the relationship between OBEs and perspective-taking in terms of facilitated disembodied experiences. PMID:24198776
Learning outcomes as a tool to assess progression.
Harden, Ronald M
2007-09-01
In the move to outcome-based education (OBE) much of the attention has focussed on the exit learning outcomes-the outcomes expected of a student at the end of a course of studies. It is important also to plan for and monitor students progression to the exit outcomes. A model is described for considering this progression through the phases of undergraduate education. Four dimensions are included-increasing breadth, increasing depth, increasing utility and increasing proficiency. The model can also be used to develop a blueprint for a more seamless link between undergraduate education, postgraduate training and continuing professional development. The progression model recognises the complexities of medical practice and medical education. It supports the move to student-centred and adaptive approaches to learning in an OBE environment.
Hildebrandt, Tom; Michaelides, Andreas; Mackinnon, Dianna; Greif, Rebecca; DeBar, Lynn; Sysko, Robyn
2017-01-01
Objective Guided self-help treatments based on cognitive-behavior therapy (CBT-GSH) are efficacious for binge eating. With limited availability of CBT-GSH in the community, mobile technology offers a means to increase use of these interventions. The purpose of this study was to test the initial efficacy of Noom Monitor, a smartphone application designed to facilitate CBT-GSH (CBT-GSH+Noom), on study retention, adherence, and eating disorder symptoms compared to traditional CBT-GSH. Method Sixty-six men and women with DSM-5 binge eating disorder (BED) or bulimia nervosa (BN) were randomized to receive 8 sessions of CBT-GSH + Noom (n = 33) or CBT-GSH (n = 33) over 12 weeks. Primary symptom outcomes were Eating Disorder Examination objective bulimic episodes (OBEs), subjective bulimic episodes (SBEs), and compensatory behaviors. Assessments were collected at 0, 4, 8, 12, 24, and 36 weeks. Behavioral outcomes were modeled using zero-inflated negative-binomial latent growth curve models with intent-to-treat. Results There was a significant effect of treatment on change in OBEs (β =−0.84, 95%CI = −1.49, −0.19) favoring CBT-GSH + Noom. Remission rates were not statistically different between treatments for OBEs (βlogit =−0.73, 95%CI = −1.86, 3.27; CBT-GSH + Noom = 17/27, 63.0% vs. CBT-GSH 11/27, 40.7%, NNT = 4.5), but CBT-GSH + Noom participants reported greater meal and snack adherence and regular meal adherence mediated treatment effects on OBEs. The treatments did not differ at the 6-month follow-up. Discussion Smartphone applications for the treatment binge eating appear to have advantages for adherence, a critical component of treatment dissemination. PMID:28960384
Hildebrandt, Tom; Michaelides, Andreas; Mackinnon, Dianna; Greif, Rebecca; DeBar, Lynn; Sysko, Robyn
2017-11-01
Guided self-help treatments based on cognitive-behavior therapy (CBT-GSH) are efficacious for binge eating. With limited availability of CBT-GSH in the community, mobile technology offers a means to increase use of these interventions. The purpose of this study was to test the initial efficacy of Noom Monitor, a smartphone application designed to facilitate CBT-GSH (CBT-GSH + Noom), on study retention, adherence, and eating disorder symptoms compared to traditional CBT-GSH. Sixty-six men and women with DSM-5 binge-eating disorder (BED) or bulimia nervosa (BN) were randomized to receive eight sessions of CBT-GSH + Noom (n = 33) or CBT-GSH (n = 33) over 12 weeks. Primary symptom outcomes were eating disorder examination objective bulimic episodes (OBEs), subjective bulimic episodes (SBEs), and compensatory behaviors. Assessments were collected at 0, 4, 8, 12, 24, and 36 weeks. Behavioral outcomes were modeled using zero-inflated negative-binomial latent growth curve models with intent-to-treat. There was a significant effect of treatment on change in OBEs (β = -0.84, 95% CI = -1.49, -0.19) favoring CBT-GSH + Noom. Remission rates were not statistically different between treatments for OBEs (β logit = -0.73, 95% CI = -1.86, 3.27; CBT-GSH-Noom = 17/27, 63.0% vs. CBT-GSH 11/27, 40.7%, NNT = 4.5), but CBT-GSH-Noom participants reported greater meal and snack adherence and regular meal adherence mediated treatment effects on OBEs. The treatments did not differ at the 6-month follow-up. Smartphone applications for the treatment binge eating appear to have advantages for adherence, a critical component of treatment dissemination. © 2017 Wiley Periodicals, Inc.
Out-of-Body Experience During Awake Craniotomy.
Bos, Eelke M; Spoor, Jochem K H; Smits, Marion; Schouten, Joost W; Vincent, Arnaud J P E
2016-08-01
The out-of-body experience (OBE), during which a person feels as if he or she is spatially removed from the physical body, is a mystical phenomenon because of its association with near-death experiences. Literature implicates the cortex at the temporoparietal junction (TPJ) as the possible anatomic substrate for OBE. We present a patient who had an out-of-body experience during an awake craniotomy for resection of low-grade glioma. During surgery, stimulation of subcortical white matter in the left TPJ repetitively induced OBEs, in which the patient felt as if she was floating above the operating table looking down on herself. We repetitively induced OBE by subcortical stimulation near the left TPJ during awake craniotomy. Diffusion tensor imaging tractography implicated the posterior thalamic radiation as a possible substrate for autoscopic phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
Goldschmidt, Andrea B; Accurso, Erin C; Crosby, Ross D; Cao, Li; Ellison, Jo; Smith, Tracey L; Klein, Marjorie H; Mitchell, James E; Crow, Scott J; Wonderlich, Stephen A; Peterson, Carol B
2016-12-01
Although loss of control (LOC) while eating is a core construct of bulimia nervosa (BN), questions remain regarding its validity and prognostic significance independent of overeating. We examined trajectories of objective and subjective binge eating (OBE and SBE, respectively; i.e., LOC eating episodes involving an objectively or subjectively large amount of food) among adults participating in psychological treatments for BN-spectrum disorders (n = 80). We also explored whether changes in the frequency of these eating episodes differentially predicted changes in eating-related and general psychopathology and, conversely, whether changes in eating-related and general psychopathology predicted differential changes in the frequency of these eating episodes. Linear mixed models with repeated measures revealed that OBE decreased twice as rapidly as SBE throughout treatment and 4-month follow-up. Generalized linear models revealed that baseline to end-of-treatment reductions in SBE frequency predicted baseline to 4-month follow-up changes in eating-related psychopathology, depression, and anxiety, while changes in OBE frequency were not predictive of psychopathology at 4-month follow-up. Zero-inflation models indicated that baseline to end-of-treatment changes in eating-related psychopathology and depression symptoms predicted baseline to 4-month follow-up changes in OBE frequency, while changes in anxiety and self-esteem did not. Baseline to end-of-treatment changes in eating-related psychopathology, self-esteem, and anxiety predicted baseline to 4-month follow-up changes in SBE frequency, while baseline to end-of-treatment changes in depression did not. Based on these findings, LOC accompanied by objective overeating may reflect distress at having consumed an objectively large amount of food, whereas LOC accompanied by subjective overeating may reflect more generalized distress related to one's eating- and mood-related psychopathology. BN treatments should comprehensively target LOC eating and related psychopathology, particularly in the context of subjectively large episodes, to improve global outcomes. Copyright © 2016. Published by Elsevier Ltd.
Traditionalist Christians and OBE: What's the Problem?
ERIC Educational Resources Information Center
Burron, Arnold
1994-01-01
Traditionalist Christians are concerned about OBE's affective objectives and believe that schools indoctrinate children with undesirable social, political, and economic values. Environmentalism, globalism, and multiculturalism are supplanting ideas about prudent resource utilization, patriotism, and America the melting pot. Schools should offer…
Bobkova, Natalia V; Novikov, Vadim V; Medvinskaya, Natalia I; Aleksandrova, Irina Y; Nesterova, Inna V; Fesenko, Eugenii E
2018-05-17
Subchronic effect of a weak combined magnetic field (MF), produced by superimposing a constant component, 42 µT and an alternating MF of 0.08 µT, which was the sum of two frequencies of 4.38 and 4.88 Hz, was studied in olfactory bulbectomized (OBE) and transgenic Tg (APPswe, PSEN1) mice, which were used as animal models of sporadic and heritable Alzheimer's disease (AD) accordingly. Spatial memory was tested in a Morris water maze on the following day after completion of training trials with the hidden platform removed. The amyloid-β (Aβ) level was determined in extracts of the cortex and hippocampus of mice using a specific DOT analysis while the number and dimensions of amyloid plaques were detected after their staining with thioflavin S in transgenic animals. Exposure to the MFs (4 h/day for 10 days) induced the decrease of Aβ level in brain of OBE mice and reduced the number of Aβ plaques in the cortex and hippocampus of Tg animals. However, memory improvement was revealed in Tg mice only, but not in the OBE animals. Here, we suggest that in order to prevent the Aβ accumulation, MFs could be used at early stage of neuronal degeneration in case of AD and other diseases with amyloid protein deposition in other tissues.
Brownstone, Lisa M.; Bardone-Cone, Anna M.; Fitzsimmons-Craft, Ellen E.; Printz, Katherine S.; Le Grange, Daniel; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.; Crosby, Ross D.; Klein, Marjorie H.; Wonderlich, Stephen A.; Joiner, Thomas E.
2013-01-01
Objective The current study explored the clinical meaningfulness of distinguishing subjective (SBE) from objective binge eating (OBE) among individuals with threshold/subthreshold bulimia nervosa (BN). We examined relations between OBEs and SBEs and eating disorder symptoms, negative affect, and personality dimensions using both a group comparison and a continuous approach. Method Participants were 204 adult females meeting criteria for threshold/subthreshold BN who completed questionnaires related to disordered eating, affect, and personality. Results Group comparisons indicated that SBE and OBE groups did not significantly differ on eating disorder pathology or negative affect, but did differ on two personality dimensions (cognitive distortion and attentional impulsivity). Using the continuous approach, we found that frequencies of SBEs (not OBEs) accounted for unique variance in weight/shape concern, diuretic use frequency, depressive symptoms, anxiety, social avoidance, insecure attachment, and cognitive distortion. Discussion SBEs in the context of BN may indicate broader areas of psychopathology. PMID:23109272
Modeling energy expenditure in children and adolescents using quantile regression
USDA-ARS?s Scientific Manuscript database
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...
ERIC Educational Resources Information Center
Holt, Maurice
1995-01-01
The central idea in W. Edwards Deming's approach to quality management is the need to improve process. Outcome-based education's central defect is its failure to address process. Deming would reject OBE along with management-by-objectives. Education is not a product defined by specific output measures, but a process to develop the mind. (MLH)
Transitions and Transformations in Philippine Physics Education Curriculum: A Case Research
ERIC Educational Resources Information Center
Morales, Marie Paz E.
2017-01-01
Curriculum, curricular transition and reform define transformational outcome-based education (OBE) in the Philippine education system. This study explores how alignment may be done with a special physics education program to suit the OBE curricular agenda for pre-service physics education, known as an outcome-based teacher education curriculum…
Services for All: Are Outcome-Based Education and Flexible School Structures the Answer?
ERIC Educational Resources Information Center
Smith, Sarah J.
1995-01-01
This paper discusses the recent controversy over outcome-based education (OBE), arguing that while OBE may be correct in establishing high standards for student learning, its implementation has tended to establish rigid "assembly line" approaches to teaching. A call is made for more flexible and individualized systems that respond to…
A randomized trial of transdermal and oral estrogen therapy in adolescent girls with hypogonadism.
Shah, Sejal; Forghani, Nikta; Durham, Eileen; Neely, E Kirk
2014-01-01
Adolescent females with ovarian failure require estrogen therapy for induction of puberty and other important physiologic effects. Currently, health care providers have varying practices without evidence-based standards, thus investigating potential differences between oral and transdermal preparations is essential. The purpose of this study was to compare the differential effects of treatment with oral conjugated equine estrogen (OCEE), oral 17β estradiol (OBE), or transdermal 17β estradiol (TBE) on biochemical profiles and feminization in girls with ovarian failure. 20 prepubertal adolescent females with ovarian failure, ages 12-18 years, were randomized to OCEE (n = 8), OBE (n = 7), or TBE (n = 5) for 24 months. Estrogen replacement was initiated at a low dose (0.15 mg OCEE, 0.25 mg OBE, or 0.0125 mg TBE) and doubled every 6 months to a maximum dose of 0.625 mg/d OCEE, 1 mg/d OBE, or 0.05 mg/d TBE. At 18 months, micronized progesterone was added to induce menstrual cycles. Biochemical markers including sex hormones, inflammatory markers, liver enzymes, coagulation factors, and lipids were obtained at baseline and 6 month intervals. Differences in levels of treatment parameters between the groups were evaluated with one-way analysis of variance (ANOVA). The effect of progesterone on biochemical markers was evaluated with the paired t-test. Mean (±SE) estradiol levels at maximum estrogen dose (18 months) were higher in the TBE group (53 ± 19 pg/mL) compared to OCEE (14 ± 5 pg/mL) and OBE (12 ± 5 pg/mL) (p ≤ 0.01). The TBE and OBE groups had more effective feminization (100% Tanner 3 breast stage at 18 months). There were no statistical differences in other biochemical markers between treatment groups at 18 months or after the introduction of progesterone. Treatment with transdermal 17β estradiol resulted in higher estradiol levels and more effective feminization compared to oral conjugated equine estrogen but did not result in an otherwise different biochemical profile in this limited number of heterogeneous patients. OBE and TBE provide safe and effective alternatives to OCEE to induce puberty in girls, but larger prospective randomized trials are required. NCT01023178.
Clinical and economic characteristics associated with type 2 diabetes.
Sicras-Mainar, A; Navarro-Artieda, R; Ibáñez-Nolla, J
2014-04-01
Type 2 diabetes mellitus (DM2) is usually accompanied by various comorbidities that can increase the cost of treatment. We are not aware of studies that have determined the costs associated with treating DM2 patients with co-morbidities such as overweight (OW), obesity (OBE) or arterial hypertension (AHT). The aim of the study was to examine the health-related costs and the incidence of cardiovascular disease (CVD) in these patients. Multicenter, observational retrospective design. We included patients 40-99 years of age who requested medical attention in 2010 in Badalona (Barcelona, Spain). There were two study groups: those with DM2 and without DM2 (reference group/control), and six subgroups: DM2-only, DM2-AHT, DM2-OW, DM2-OBE; DM2-AHT-OW and DM2-AHT-OBE. The main outcome measures were: co-morbidity, metabolic syndrome (MS), complications (hypoglycemia, CVD) and costs (health and non-health). Follow-up was carried out for two years. A total of 26,845 patients were recruited. The prevalence of DM2 was 14.0%. Subjects with DM2 were older (67.8 vs. 59.7 years) and more were men (51.3 vs. 43.0%), P<.001. DM2 status was associated primarily with OBE (OR=2.8, CI=2.4-3.1), AHT (OR=2.4, CI=2.2-2.6) and OW (OR=1.9, CI=1.7-2.2). The distribution by subgroups was: 6.7% of patients had only DM2, 26.1% had DM2, AHT and OW, and 34.1% had DM2, AHT, and OBE. Some 75.4% had MS and 37.5% reported an episode of hypoglycemia. The total cost/patient with DM2 was €4,458. By subgroups the costs were as follows: DM2: €3,431; DM2-AHT: €4,075; DM2-OW: €4,057; DM2-OBE: €4,915; DM2-AHT-OW: €4,203 and DM2-AHT-OBE: €5,021, P<.001. The CVD rate among patients with DM2 was 4.7 vs. 1.7% in those without DM2 P<.001. Obesity is a comorbidity associated with DM2 that leads to greater healthcare costs than AHT. The presence of these comorbidities causes increased rates of CVD. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Developing a Learning Outcome-Based Question Examination Paper Tool for Universiti Putra Malaysia
ERIC Educational Resources Information Center
Hassan, Sa'adah; Admodisastro, Novia Indriaty; Kamaruddin, Azrina; Baharom, Salmi; Pa, Noraini Che
2016-01-01
Much attention is now given on producing quality graduates. Therefore, outcome-based education (OBE) in teaching and learning is now being implemented in Malaysia at all levels of education especially at higher education institutions. For implementing OBE, the design of curriculum and courses should be based on specified outcomes. Thus, the…
Liehr, Martin; Mereu, Alessandro; Pastor, Jose Javier; Quintela, Jose Carlos; Staats, Stefanie; Rimbach, Gerald; Ipharraguerre, Ignacio Rodolfo
2017-01-01
Subclinical chronic inflammation (SCI) is associated with impaired animal growth. Previous work has demonstrated that olive-derived plant bioactives exhibit anti-inflammatory properties that could possibly counteract the growth-depressing effects of SCI. To test this hypothesis and define the underlying mechanism, we conducted a 30-day study in which piglets fed an olive-oil bioactive extract (OBE) and their control counterparts (C+) were injected repeatedly during the last 10 days of the study with increasing doses of Escherichia coli lipopolysaccharides (LPS) to induce SCI. A third group of piglets remained untreated throughout the study and served as a negative control (C-). In C+ pigs, SCI increased the circulating concentration of interleukin 1 beta (p < 0.001) and decreased feed ingestion (p < 0.05) and weight gain (p < 0.05). These responses were not observed in OBE animals. Although intestinal inflammation and colonic microbial ecology was not altered by treatments, OBE enhanced ileal mRNA abundance of tight and adherens junctional proteins (p < 0.05) and plasma recovery of mannitol (p < 0.05) compared with C+ and C-. In line with these findings, OBE improved transepithelial electrical resistance (p < 0.01) in TNF-α-challenged Caco-2/TC-7 cells, and repressed the production of inflammatory cytokines (p < 0.05) in LPS-stimulated macrophages. In summary, this work demonstrates that OBE attenuates the suppressing effect of SCI on animal growth through a mechanism that appears to involve improvements in intestinal integrity unrelated to alterations in gut microbial ecology and function. PMID:28346507
Dale, Vicki H M; Wieland, Barbara; Pirkelbauer, Birgit; Nevel, Amanda
2009-01-01
This study provides an overview of the perceptions of alumni in relation to their experience of open-book examinations (OBEs) as post-graduate students. This type of assessment was introduced as a way of allowing these adult learners to demonstrate their conceptual understanding and ability to apply knowledge in practice, which in theory would equip them with problem-solving skills required for the workplace. This study demonstrates that alumni-shown to be predominantly deep learners-typically regarded OBEs as less stressful than closed-book examinations, and as an effective way to assess the application of knowledge to real-life problems. Additional staff training and student induction, particularly for international students, are suggested as a means of improving the acceptability and effectiveness of OBEs.
A randomized trial of transdermal and oral estrogen therapy in adolescent girls with hypogonadism
2014-01-01
Background Adolescent females with ovarian failure require estrogen therapy for induction of puberty and other important physiologic effects. Currently, health care providers have varying practices without evidence-based standards, thus investigating potential differences between oral and transdermal preparations is essential. The purpose of this study was to compare the differential effects of treatment with oral conjugated equine estrogen (OCEE), oral 17β estradiol (OBE), or transdermal 17β estradiol (TBE) on biochemical profiles and feminization in girls with ovarian failure. Study design 20 prepubertal adolescent females with ovarian failure, ages 12–18 years, were randomized to OCEE (n = 8), OBE (n = 7), or TBE (n = 5) for 24 months. Estrogen replacement was initiated at a low dose (0.15 mg OCEE, 0.25 mg OBE, or 0.0125 mg TBE) and doubled every 6 months to a maximum dose of 0.625 mg/d OCEE, 1 mg/d OBE, or 0.05 mg/d TBE. At 18 months, micronized progesterone was added to induce menstrual cycles. Biochemical markers including sex hormones, inflammatory markers, liver enzymes, coagulation factors, and lipids were obtained at baseline and 6 month intervals. Differences in levels of treatment parameters between the groups were evaluated with one-way analysis of variance (ANOVA). The effect of progesterone on biochemical markers was evaluated with the paired t-test. Results Mean (±SE) estradiol levels at maximum estrogen dose (18 months) were higher in the TBE group (53 ± 19 pg/mL) compared to OCEE (14 ± 5 pg/mL) and OBE (12 ± 5 pg/mL) (p ≤ 0.01). The TBE and OBE groups had more effective feminization (100% Tanner 3 breast stage at 18 months). There were no statistical differences in other biochemical markers between treatment groups at 18 months or after the introduction of progesterone. Conclusions Treatment with transdermal 17β estradiol resulted in higher estradiol levels and more effective feminization compared to oral conjugated equine estrogen but did not result in an otherwise different biochemical profile in this limited number of heterogeneous patients. OBE and TBE provide safe and effective alternatives to OCEE to induce puberty in girls, but larger prospective randomized trials are required. Trial registration Clinical Trials Identifier: NCT01023178. PMID:24982681
Tan, Katherine; Chong, Mei Chan; Subramaniam, Pathmawathy; Wong, Li Ping
2018-05-01
Outcome Based Education (OBE) is a student-centered approach of curriculum design and teaching that emphasize on what learners should know, understand, demonstrate and how to adapt to life beyond formal education. However, no systematic review has been seen to explore the effectiveness of OBE in improving the competencies of nursing students. To appraise and synthesize the best available evidence that examines the effectiveness of OBE approaches towards the competencies of nursing students. A systematic review of interventional experimental studies. Eight online databases namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS were searched. Relevant studies were identified using combined approaches of electronic database search without geographical or language filters but were limited to articles published from 2006 to 2016, handsearching journals and visually scanning references from retrieved studies. Two reviewers independently conducted the quality appraisal of selected studies and data were extracted. Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four were rated as moderate. Studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improves competency in knowledge acquisition in terms of higher final course grades and cognitive skills, improve clinical skills and nursing core competencies and higher behavioural skills score while performing clinical skills. Learners' satisfaction was also encouraging as reported in one of the studies. Only one study reported on the negative effect. Although OBE approaches does show encouraging effects towards improving competencies of nursing students, more robust experimental study design with larger sample sizes, evaluating other outcome measures such as other areas of competencies, students' satisfaction, and patient outcomes are needed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.
Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei
2016-03-09
Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.
Vannucci, Anna; Tanofsky-Kraff, Marian; Crosby, Ross D.; Ranzenhofer, Lisa M.; Shomaker, Lauren B.; Field, Sara E.; Mooreville, Mira; Reina, Samantha A.; Kozlosky, Merel; Yanovski, Susan Z.; Yanovski, Jack A.
2012-01-01
Objective We used latent profile analysis (LPA) to classify children and adolescents into subtypes based on the overlap of disinhibited eating behaviors—eating in the absence of hunger, emotional eating, and subjective and objective binge eating. Method Participants were 411 youth (8–18y) from the community who reported on their disinhibited eating patterns. A subset (n=223) ate ad libitum from two test meals. Results LPA produced five subtypes that were most prominently distinguished by objective binge eating (OBE; n=53), subjective binge eating (SBE; n=59), emotional eating (EE; n=62), a mix of emotional eating and eating in the absence of hunger (EE-EAH; n=172), and no disinhibited eating (No-DE; n=64). Accounting for age, sex, race, BMI-z, the four disinhibited eating groups had more problem behaviors than no disinhibited eating (p=.001). OBE and SBE subtypes had greater BMI-z, percent fat mass, disordered eating attitudes, and trait anxiety than EE, EAH-EE, and No-DE subtypes (ps<.01). However, the OBE subtype reported the highest eating concern (p<.001) and the OBE, SBE, and EE subtypes reported higher depressive symptoms than EE-EAH and No-DE subtypes. Across both test meals, OBE and SBE consumed less percent protein and higher percent carbohydrate than the other subtypes (ps<.02), adjusting for age, sex, race, height, lean mass, percent fat mass, and total intake. EE also consumed greater percent carbohydrate and lower percent fat compared than EE-EAH and No-DE (ps<.03). The SBE subtype consumed the least total calories (p=.01). Discussion We conclude that behavioral subtypes of disinhibited eating may be distinguished by psychological characteristics and objective eating behavior. Prospective data are required to determine whether subtypes predict the onset of eating disorders and obesity. PMID:23276121
The Impact of the 6:3 Polyunsaturated Fatty Acid Ratio on Intermediate Markers of Breast Cancer
2008-05-01
randomized clinical trial of a standard versus vegetarian diet for weight loss: the impact of treatment preference. Int J Obes. 2008; 32: 166-76. o...Burke L, Hudson A, Styn M, Warziski M, Ulci O, Sereika S. Effects of a vegetarian diet and treatment preference on biological and dietary variables in...a standard versus vegetarian diet for weight loss: the impact of treatment preference. Int J Obes. 2008; 32: 166-76. Burke L, Hudson A, Styn M
Schenk, Christopher J.; Klett, Timothy R.; Charpentier, Ronald R.; Cook, Troy A.; Pollastro, Richard M.
2006-01-01
The U.S. Geological Survey (USGS) estimated volumes of undiscovered oil and gas resources that may underlie Big South Fork National Recreation Area and Obed Wild and Scenic River in Kentucky and Tennessee. Applying the results of existing assessments of undiscovered resources from three assessment units in the Appalachian Basin Province and three plays in the Cincinnati Arch Province that include these land parcels, the USGS allocated approximately (1) 16 billion cubic feet of gas, 15 thousand barrels of oil, and 232 thousand barrels of natural gas liquids to Big South Fork National Recreation Area; and (2) 0.5 billion cubic feet of gas, 0.6 thousand barrels of oil, and 10 thousand barrels of natural gas liquids to Obed Wild and Scenic River. These estimated volumes of undiscovered resources represent potential volumes in new undiscovered fields, but do not include potential additions to reserves within existing fields.
Barman, Linda; Silén, Charlotte; Bolander Laksov, Klara
2014-12-01
This paper reports on how teachers within health sciences education translate outcome-based education (OBE) into practice when they design courses. The study is an empirical contribution to the debate about outcome- and competency-based approaches in health sciences education. A qualitative method was used to study how teachers from 14 different study programmes designed courses before and after OBE was implemented. Using an interpretative approach, analysis of documents and interviews was carried out. The findings show that teachers enacted OBE either to design for more competency-oriented teaching-learning, or to further detail knowledge and thus move towards reductionism. Teachers mainly understood the outcome-based framework as useful to support students' learning, although the demand for accountability created tension and became a bureaucratic hindrance to design for development of professional competence. The paper shows variations of how teachers enacted the same outcome-based framework for instructional design. These differences can add a richer understanding of how outcome- or competency-based approaches relate to teaching-learning at a course level.
Health-service Use in Women with Binge Eating Disorders
Dickerson, John; DeBar, Lynn; Perrin, Nancy A.; Lynch, Frances; Wilson, G. Terence; Rosselli, Francine; Kraemer, Helena C.; Striegel-Moore, Ruth H.
2014-01-01
Objective To compare health-care utilization between participants who met DSM-IV criteria for Binge Eating Disorder (BED) and those engaged in Recurrent Binge Eating (RBE) and to evaluate whether objective binge eating (OBE) days, a key measurement for diagnosing BED, predicted health-care costs. Method We obtained utilization and cost data from electronic medical records to augment patient reported data for 100 adult female members of a large health maintenance organization (HMO) who were enrolled in a randomized clinical trial to treat binge eating. Results Total costs did not differ between the BED and RBE groups (β=−0.117, z=−0.48, p=0.629), nor did the number of OBE days predictor total costs (β= −0.017, z=−1.01, p=0.313). Conclusions Findings suggest that the medical impairment, as assessed through health care costs, caused by BED may not be greater than impairment caused by RBE. The current threshold number of two OBE days/week as a criterion for BED may need to be reconsidered PMID:21823138
Baryon-Baryon Interactions ---Nijmegen Extended-Soft-Core Models---
NASA Astrophysics Data System (ADS)
Rijken, T. A.; Nagels, M. M.; Yamamoto, Y.
We review the Nijmegen extended-soft-core (ESC) models for the baryon-baryon (BB) interactions of the SU(3) flavor-octet of baryons (N, Lambda, Sigma, and Xi). The interactions are basically studied from the meson-exchange point of view, in the spirit of the Yukawa-approach to the nuclear force problem [H. Yukawa, ``On the interaction of Elementary Particles I'', Proceedings of the Physico-Mathematical Society of Japan 17 (1935), 48], using generalized soft-core Yukawa-functions. These interactions are supplemented with (i) multiple-gluon-exchange, and (ii) structural effects due to the quark-core of the baryons. We present in some detail the most recent extended-soft-core model, henceforth referred to as ESC08, which is the most complete, sophisticated, and successful interaction-model. Furthermore, we discuss briefly its predecessor the ESC04-model [Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044007; Th. A. Rijken and Y. Yamamoto, Ph ys. Rev. C 73 (2006), 044008; Th. A. Rijken and Y. Yamamoto, nucl-th/0608074]. For the soft-core one-boson-exchange (OBE) models we refer to the literature [Th. A. Rijken, in Proceedings of the International Conference on Few-Body Problems in Nuclear and Particle Physics, Quebec, 1974, ed. R. J. Slobodrian, B. Cuec and R. Ramavataram (Presses Universitè Laval, Quebec, 1975), p. 136; Th. A. Rijken, Ph. D. thesis, University of Nijmegen, 1975; M. M. Nagels, Th. A. Rijken and J. J. de Swart, Phys. Rev. D 17 (1978), 768; P. M. M. Maessen, Th. A. Rijken and J. J. de Swart, Phys. Rev. C 40 (1989), 2226; Th. A. Rijken, V. G. J. Stoks and Y. Yamamoto, Phys. Rev. C 59 (1999), 21; V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C 59 (1999), 3009]. All ingredients of these latter models are also part of ESC08, and so a description of ESC08 comprises all models so far in principle. The extended-soft-core (ESC) interactions consist of local- and non-local-potentials due to (i) one-boson-exchanges (OBE), which are the members of nonets of pseudo-scalar-, vector-, scalar-, and axial-mesons, (ii) diffractive (i.e. multiple-gluon) exchanges, (iii) two pseudo-scalar exchange (PS-PS), and (iv) meson-pair-exchange (MPE). The OBE- and pair-vertices are regulated by gaussian form factors producing potentials with a soft behavior near the origin. The assignment of the cutoff masses for the BBM-vertices is dependent on the SU(3)-classification of the exchanged mesons for OBE, and a similar scheme for MPE. The ESC-models ESC04 and ESC08 describe the nucleon-nucleon (NN), hyperon-nucleon (YN), and hyperon-hyperon (YY) interactions in a unified way using broken SU(3)-symmetry. Novel ingredients in the OBE-sector in the ESC-models are the inclusion of (i) the axial-vector meson potentials, (ii) a zero in the scalar- and axial-vector meson form factors. These innovations made it possible for the first time to keep the meson coupling parameters of the model qualitatively in accordance with the predictions of the (3P_0) quark-antiquark creation (QPC) model. This is also the case for the F/(F+D)-ratios. Furthermore, the introduction of the zero helped to avoid the occurrence of unwanted bound states in Lambda N. Broken SU(3)-symmetry serves to connect the NN and the YN channels, which leaves after fitting NN only a few free parameters for the determination of the YN-interactions. In particular, the meson-baryon coupling constants are calculated via SU(3) using the coupling constants of the NN oplus YN-analysis as input. In ESC04 medium strong flavor-symmetry-breaking (FSB) of the coupling constants was investigated, using the (3}P_{0) -model with a Gell-Mann-Okubo hypercharge breaking for the BBM-coupling. In ESC08 the couplings are kept SU(3)-symmetric. The charge-symmetry-breaking (CSB) in the Lambda p and Lambda n channels, which is an SU(2) isospin breaking, is included in the OBE-, TME-, and MPE-potentials. In ESC04 and ESC08 simultaneous fits to the NN- and the YN- scattering data have been achieved, using different options for the ESC-model. In particularly in ESC08 with single-sets of parameters excellent fits were obtained for the NN- and YN-data. For example, in the case of ESC08a'' we have: (i) For the selected 4233 NN-data with energies 0 ≤ T_{lab} ≤ 350 MeV, excellent results were obtained having chi(2/N_{data}) = 1.094. (ii) For the usual set of 35 YN-data and 3 Sigma(+p) cross-sections from a recent KEK-experiment E289 [H. Kanda et al., AIP Conf. Proc. 842 (2006), 501; H. Kanda, Measurement of the cross sections of Sigma(=p) elastic scattering, Ph. D. thesis, Department of Physics, Faculty of Science, Kyoto University, March 2007] the fit has chi(2}/YN_{data) ≈ 0.83. (iii) For YY there is a weak LambdaLambda-interaction, which successfully matches with t he Nagara-event [H. Takahashi et al., Phys. Rev. Lett. 87 (2001), 212502]. (iv) The nuclear Sigma and Xi well-dephts satisfy U_Sigma > 0 and U_Xi < 0. The predictions for the S = -2 (LambdaLambda, Xi N, LambdaSigma, SigmaSigma)-channels are the occurrences of an S = -2 bound states in the Xi N((3S_1-^3D_1,) I = 0,1)-channels.
A Virtual Out-of-Body Experience Reduces Fear of Death
2017-01-01
Immersive virtual reality can be used to visually substitute a person’s real body by a life-sized virtual body (VB) that is seen from first person perspective. Using real-time motion capture the VB can be programmed to move synchronously with the real body (visuomotor synchrony), and also virtual objects seen to strike the VB can be felt through corresponding vibrotactile stimulation on the actual body (visuotactile synchrony). This setup typically gives rise to a strong perceptual illusion of ownership over the VB. When the viewpoint is lifted up and out of the VB so that it is seen below this may result in an out-of-body experience (OBE). In a two-factor between-groups experiment with 16 female participants per group we tested how fear of death might be influenced by two different methods for producing an OBE. In an initial embodiment phase where both groups experienced the same multisensory stimuli there was a strong feeling of body ownership. Then the viewpoint was lifted up and behind the VB. In the experimental group once the viewpoint was out of the VB there was no further connection with it (no visuomotor or visuotactile synchrony). In a control condition, although the viewpoint was in the identical place as in the experimental group, visuomotor and visuotactile synchrony continued. While both groups reported high scores on a question about their OBE illusion, the experimental group had a greater feeling of disownership towards the VB below compared to the control group, in line with previous findings. Fear of death in the experimental group was found to be lower than in the control group. This is in line with previous reports that naturally occurring OBEs are often associated with enhanced belief in life after death. PMID:28068368
OD (Organization Development) Interventions that Enhance Equal Opportunity.
1983-09-01
aide to aaesam mod Identify by week nim obe) Socialization process Socialization model Self-esteem Organization form and structure Equal Opportunity...itself speaks to the way individuals are socialized into the Navy or a perceived lack of socialization . 7 pi ’..7 ’ .. . . 7 Today the Equal ... equal opportunity. Analysis of five different dimensions of the socialization process can be thought of as distinct "tactics" which managers (agents
Exploring in teaching mode of Optical Fiber Sensing Technology outcomes-based education (OBE)
NASA Astrophysics Data System (ADS)
Fu, Guangwei; Fu, Xinghu; Zhang, Baojun; Bi, Weihong
2017-08-01
Combining with the characteristics of disciplines and OBE mode, also aiming at the phenomena of low learning enthusiasm for the major required courses for senior students, the course of optical fiber sensing was chosen as the demonstration for the teaching mode reform. In the light of "theory as the base, focus on the application, highlighting the practice" principle, we emphasis on the introduction of the latest scientific research achievements and current development trends, highlight the practicability and practicality. By observation learning and course project, enables students to carry out innovative project design and implementation means related to the practical problems in science and engineering of this course.
Hα imaging for BeXRBs in the Small Magellanic Cloud
NASA Astrophysics Data System (ADS)
Maravelias, G.; Zezas, A.; Antoniou, V.; Hatzidimitriou, D.; Haberl, F.
2017-11-01
The Small Magellanic Cloud (SMC) hosts a large number of high-mass X-ray binaries, and in particular of Be/X-ray Binaries (BeXRBs; neutron stars orbiting OBe-type stars), offering a unique laboratory to address the effect of metalicity. One key property of their optical companion is Hα in emission, which makes them bright sources when observed through a narrow-band Hα filter. We performed a survey of the SMC Bar and Wing regions using wide-field cameras (WFI@MPG/ESO and MOSAIC@CTIO/Blanco) in order to identify the counterparts of the sources detected in our XMM-Newton survey of the same area. We obtained broad-band R and narrow-band Hα photometry, and identified ~10000 Hα emission sources down to a sensitivity limit of 18.7 mag (equivalent to ~B8 type Main Sequence stars). We find the fraction of OBe/OB stars to be 13% down to this limit, and by investigating this fraction as a function of the brightness of the stars we deduce that Hα excess peaks at the O9-B2 spectral range. Using the most up-to-date numbers of SMC BeXRBs we find their fraction over their parent population to be ~0.002 - 0.025 BeXRBs/OBe, a direct measurement of their formation rate.
NASA Astrophysics Data System (ADS)
2013-03-01
Event: UK to host Science on Stage Travel: Gaining a more global perspective on physics Event: LIYSF asks students to 'cross scientific boundaries' Competition: Young Physicists' tournament is international affair Conference: Learning in a changing world of new technologies Event: Nordic physical societies meet in Lund Conference: Tenth ESERA conference to publish ebook Meeting: Rugby meeting brings teachers together Note: Remembering John L Lewis OBE
Development and Validation of the Eating Loss of Control Scale
Blomquist, Kerstin K.; Roberto, Christina A.; Barnes, Rachel D.; White, Marney A.; Masheb, Robin M.; Grilo, Carlos M.
2014-01-01
Recurrent objective bulimic episodes (OBE) are a defining diagnostic characteristic of binge eating disorder (BED) and bulimia nervosa (BN). OBEs are characterized by experiencing loss of control (LOC) while eating an unusually large quantity of food. Despite nosological importance and complex heterogeneity across patients, measurement of LOC has been assessed dichotomously (present/absent). This study describes the development and initial validation of the Eating Loss of Control Scale (ELOCS), a self-report questionnaire that examines the complexity of the LOC construct. Participants were 168 obese treatment-seeking individuals with BED who completed the Eating Disorder Examination interview and self-report measures. Participants rated their LOC-related feelings or behaviors on continuous Likert-type scales and reported the number of LOC episodes in the past 28 days. Principal component analysis identified a single-factor, 18-item scale, which demonstrated good internal reliability (α=0.90). Frequency of LOC episodes was significantly correlated with frequency of OBEs and subjective bulimic episodes. The ELOCS demonstrated good convergent validity and was significantly correlated with greater eating pathology, greater emotion dysregulation, greater depression, and lower self-control, but not with BMI. The findings suggest that the ELOCS is a valid self-report questionnaire that may provide important clinical information regarding experiences of LOC in obese persons with BED. Future research should examine the ELOCS in other eating disorders and non-clinical samples. PMID:24219700
Herbal remedies and supplements for weight loss
Weight loss - herbal remedies and supplements; Obesity - herbal remedies; Overweight - herbal remedies ... A, Gutiérrez-Salmeán G. New dietary supplements for obesity: what we currently know. Curr Obes Rep . 2016; ...
Educational strategies for the prevention of diabetes, hypertension, and obesity.
Machado, Alexandre Paulo; Lima, Bruno Muniz; Laureano, Monique Guilharducci; Silva, Pedro Henrique Bauth; Tardin, Giovanna Pereira; Reis, Paulo Silva; Santos, Joyce Sammara; Jácomo, Domingos; D'Artibale, Eliziana Ferreira
2016-11-01
The main goal of this work was to produce a review of educational strategies to prevent diabetes, hypertension, and obesity. PubMed database was consulted using combined descriptors such as [Prevention], [Educational Activities], [Diabetes], [Hypertension], and [Obesity]. Data from randomized trials published between 2002 and 2014 were included in spreadsheets for analysis in duplicate by the reviewers. A total of 8,908 articles were found, of which 1,539 were selected about diabetes mellitus (DM, n=369), arterial systemic hypertension (ASH, n=200), and obesity (OBES, n=970). The number of free full text articles available was 1,075 (DM = 276, ASH = 118 and OBES = 681). In most of these studies, demographic characteristics such as gender and age were randomized, and the population mainly composed by students, ethnic groups, family members, pregnant, health or education professionals, patients with chronic diseases (DM, ASH, OBES) or other comorbidities. Group dynamics, physical activity practices, nutritional education, questionnaires, interviews, employment of new technologies, people training and workshops were the main intervention strategies used. The most efficient interventions occurred at community level, whenever the intervention was permanent or maintained for long periods, and relied on the continuous education of community health workers that had a constant interference inside the population covered. Many studies focused their actions in children and adolescents, especially on students, because they were more influenced by educational activities of prevention, and the knowledge acquired by them would spread more easily to their family and to society.
Management of accidental exposure to HIV: the COREVIH 2011 activity report.
Rouveix, E; Bouvet, E; Vernat, F; Chansombat, M; Hamet, G; Pellissier, G
2014-03-01
Post-exposure prophylaxis (PEP) relies on procedures allowing quick access to treatment in case of accidental exposure to viral risk (AEV). Occupational blood exposure (OBE) affects mainly caregivers; these accidents are monitored and assessed by the inter-regional center for nosocomial infections (C-CLIN), occupational physicians, and infection control units. They are classified apart from sexual exposure for which there is currently no monitoring. Data was extracted from the COREVIH (steering committee for the prevention of HIV infection) 2011 activity reports (AR), available online. Data collection was performed using a standardized grid. Twenty-four out of 28 AR were available online. Nine thousand nine hundred and twenty AEV were reported, 44% of OBE, and 56% of sexual and other exposures. PEP was prescribed in 8% of OBE and in 77% of sexual exposures. The type of PEP was documented in 52% of the cases. Follow-up was poorly documented. AR provide an incomplete and heterogeneous review of exposure management without any standardized data collection. The difficulties encountered in data collection and monitoring are due to differences in care centers (complex patient circuits, multiple actors) and lack of common dedicated software. Sexual exposures account for 50% of AEV and most are treated; but they are incompletely reported and consequently not analyzed at the regional or national level. A typical AR collection grid is being studied in 2 COREVIH, with the objective to improve collection and obtain useful national data. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Modelling with Integer Variables.
1984-01-01
Computational Comparison of * ’Equivalent’ Mixed Integer Formulations," Naval Research Logistics Quarterly 28 (1981), pp. 115- 131 . 39. R. R, Meyer and...jE(i) 3 K ".- .e I " Z A . .,.. x jCI (i) IJ ~s ;:. ... i=I 1 1X. integer A- k . . . . . . . . . . . ... . ... . . . . . . . . . o...be such that Z X.. = 1 andIfxCi’e k jcI (i) 11 13 kx m). *x + E okv . Then by putting Xil and X.=O for j* i, j£I(i) kE (2.3.4) holds. Hence S’ Pi" As
NASA Technical Reports Server (NTRS)
Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) model and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.
'Have confidence in yourself'.
2016-06-29
Former director of nursing at Royal Brompton & Harefield NHS Foundation Trust Caroline Shuldham OBE left the NHS last year to work independently. She celebrates 45 years in nursing this year and is involved in research, teaching, mentoring, inspection and advising on care.
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5,20,30,45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time / amplitude points defining each input are included, along with plots of the input time histories.
Complementary And Alternative Medicine In The Military Health System
2017-01-01
MTFs reported using diet therapy most often for various types of chronic disease : mainly obe- sity (80 percent of MTFs), diabetes (77 percent), heart ...7 Strengths and Limitations of Our Study ........................................................................................... 8 CHAPTER...42 Training of New CAM Providers
DOT National Transportation Integrated Search
2016-05-22
This report presents recommendations for minimum DSRC device communication performance and security requirements to ensure effective operation of the DSRC system. The team identified recommended DSRC communications requirements aligned to use cases, ...
Structure-function analysis of diacylglycerol acyltransferase sequences from 70 organisms
USDA-ARS?s Scientific Manuscript database
Diacylglycerol acyltransferases (DGATs) catalyze the final and rate-limiting step of triacylglycerol (TAG) biosynthesis in eukaryotic organisms. Understanding the roles of DGATs will help to create transgenic plants with value-added properties and provide clues for therapeutic intervention for obes...
Chang, Yiting; Gable, Sara
2013-04-01
The primary objective of this study was to predict weight status stability and change across the transition to adolescence using parent reports of child and household routines and teacher and child self-reports of social-emotional development. Data were from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K), a nationally representative sample of children who entered kindergarten during 1998-1999 and were followed through eighth grade. At fifth grade, parents reported on child and household routines and the study child and his/her primary classroom teacher reported on the child's social-emotional functioning. At fifth and eighth grade, children were directly weighed and measured at school. Nine mutually-exclusive weight trajectory groups were created to capture stability or change in weight status from fifth to eighth grade: (1) stable obese (ObeSta); (2) obese to overweight (ObePos1); (3) obese to healthy (ObePos2); (4) stable overweight (OverSta); (5) overweight to healthy (OverPos); (6) overweight to obese (OverNeg); (7) stable healthy (HelSta); (8) healthy to overweight (HelNeg1); and (9) healthy to obese (HelNeg2). Except for breakfast consumption at home, school-provided lunches, nighttime sleep duration, household and child routines did not predict stability or change in weight status. Instead, weight status trajectory across the transition to adolescence was significantly predicted by measures of social-emotional functioning at fifth grade. Assessing children's social-emotional well-being in addition to their lifestyle routines during the transition to adolescence is a noteworthy direction for adolescent obesity prevention and intervention. Copyright © 2013 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Thompson-Brenner, Heather; Franko, Debra L.; Thompson, Douglas R.; Grilo, Carlos M.; Boisseau, Christina L.; Roehrig, James P.; Richards, Lauren K.; Bryson, Susan W.; Bulik, Cynthia M.; Crow, Scott J.; Devlin, Michael J.; Gorin, Amy A.; Kristeller, Jean L.; Masheb, Robin; Mitchell, James E.; Peterson, Carol B.; Safer, Debra L.; Striegel, Ruth H.; Wilfley, Denise E.; Wilson, G. Terence
2014-01-01
Objective Binge eating disorder (BED) is prevalent among individuals from minority racial/ethnic groups and among individuals with lower levels of education, yet the efficacy of psychosocial treatments for these groups has not been examined in adequately powered analyses. This study investigated the relative variance in treatment retention and post-treatment symptom levels accounted for by demographic, clinical, and treatment variables as moderators and predictors of outcome. Method Data were aggregated from eleven randomized, controlled trials of psychosocial treatments for BED conducted at treatment sites across the United States. Participants were N = 1,073 individuals meeting criteria for BED including n = 946 Caucasian, n = 79 African American, and n = 48 Hispanic/Latino participants. Approximately 86% had some higher education; 85% were female. Multi-level regression analyses examined moderators and predictors of treatment retention, Eating Disorder Examination (EDE) global score, frequency of objective bulimic episodes (OBEs), and OBE remission. Results Moderator analyses of race/ethnicity and education were non-significant. Predictor analyses revealed African Americans were more likely to drop out of treatment than Caucasians, and lower level of education predicted greater post-treatment OBEs. African Americans showed a small but significantly greater reduction in EDE global score relative to Caucasians. Self-help treatment administered in a group showed negative outcomes relative to other treatment types, and longer treatment was associated with better outcome. Conclusions Observed lower treatment retention among African Americans and lesser treatment effects for individuals with lower levels of educational attainment are serious issues requiring attention. Reduced benefit was observed for shorter treatment length and self-help administered in groups. PMID:23647283
Cognitive-behavioral therapy for subthreshold bulimia nervosa: A case series.
Peterson, C B; Miller, K B; Willer, M G; Ziesmer, J; Durkin, N; Arikian, A; Crow, S J
2011-09-01
The extent to which cognitive-behavioral therapy (CBT) is helpful in treating individuals with bulimic symptoms who do not meet full criteria for bulimia nervosa is unclear. The purpose of this investigation was to examine the potential efficacy of CBT for eating disorder individuals with bulimic symptoms who do not meet full criteria for bulimia nervosa. Twelve participants with subthreshold bulimia nervosa were treated in a case series with 20 sessions of CBT. Ten of the 12 participants (83.3%) completed treatment. Intent-to-treat abstinent percentages were 75.0% for objectively large episodes of binge eating (OBEs), 33.3% for subjectively large episodes of binge eating (SBEs), and 50% for purging at end of treatment. At one year follow-up, 66.7% were abstinent for OBEs, 41.7% for SBEs, and 50.0% for purging. The majority also reported improvements in associated symptoms. This case series provides support for the use of CBT with individuals with subthreshold bulimia nervosa.
Flight Test of the F/A-18 Active Aeroelastic Wing Airplane
NASA Technical Reports Server (NTRS)
Voracek, David
2007-01-01
A viewgraph presentation of flight tests performed on the F/A active aeroelastic wing airplane is shown. The topics include: 1) F/A-18 AAW Airplane; 2) F/A-18 AAW Control Surfaces; 3) Flight Test Background; 4) Roll Control Effectiveness Regions; 5) AAW Design Test Points; 6) AAW Phase I Test Maneuvers; 7) OBES Pitch Doublets; 8) OBES Roll Doublets; 9) AAW Aileron Flexibility; 10) Phase I - Lessons Learned; 11) Control Law Development and Verification & Validation Testing; 12) AAW Phase II RFCS Envelopes; 13) AAW 1-g Phase II Flight Test; 14) Region I - Subsonic 1-g Rolls; 15) Region I - Subsonic 1-g 360 Roll; 16) Region II - Supersonic 1-g Rolls; 17) Region II - Supersonic 1-g 360 Roll; 18) Region III - Subsonic 1-g Rolls; 19) Roll Axis HOS/LOS Comparison Region II - Supersonic (open-loop); 20) Roll Axis HOS/LOS Comparison Region II - Supersonic (closed-loop); 21) AAW Phase II Elevated-g Flight Test; 22) Region I - Subsonic 4-g RPO; and 23) Phase II - Lessons Learned
Another Breakthrough, Another Baby Thrown out with the Bathwater
ERIC Educational Resources Information Center
Bell, David M.
2009-01-01
"Process-oriented pedagogy: facilitation, empowerment, or control?" claims that process-oriented pedagogy (POP) represents the methodological perspective of most practising teachers and that outcomes-based education (OBE) poses a real and present danger to stakeholder autonomy. Whereas POP may characterize methodological practices in the inner…
Outcomes-Based Education Integration in Home Economics Program: An Evaluative Study
ERIC Educational Resources Information Center
Limon, Mark Raguindin; Castillo Vallente, John Paul
2016-01-01
This study examined the factors that affect the integration of Outcomes-Based Education (OBE) in the Home Economics (HE) education curriculum of the Technology and Livelihood Education (TLE) program of a State University in the northern part of the Philippines. Descriptive survey and qualitative design were deployed to gather, analyze, and…
Physical Education, Sport and Recreation: A Triad Pedagogy of Hope
ERIC Educational Resources Information Center
van Deventer, K. J.
2011-01-01
Bloch (2009, 58), a previous advocate of Outcomes-based Education (OBE), states that "schooling in SA" is a national disaster. Quality holistic education that includes Physical Education (PE) and school sport should be the focal point of progress in developing countries. However, PE is worldwide in a political crisis and the situation is…
ERIC Educational Resources Information Center
Halbleib, Mary L.; Jepson, Paul C.
2015-01-01
Purpose: This paper examines the benefits of using an outcome-based education (OBE) method within agricultural extension outreach programmes for professional and farmer audiences. Design/Methodology/Approach: The method is elaborated through two practical examples, which show that focused, short-duration programmes can produce meaningful skill…
Sexuality Education in South Africa: Three Essential Questions
ERIC Educational Resources Information Center
Francis, Dennis A.
2010-01-01
Sex education is the cornerstone on which most HIV/AIDS prevention programmes rest and since the adoption of Outcomes-Based Education (OBE), has become a compulsory part of the South African school curriculum through the Life Orientation learning area. However, while much focus has been on providing young people with accurate and frank information…
Student Teachers' Views: What Is an Interesting Life Sciences Curriculum?
ERIC Educational Resources Information Center
de Villiers, Rian
2011-01-01
In South Africa, the Grade 12 "classes of 2008 and 2009" were the first to write examinations under the revised Life Sciences (Biology) curriculum which focuses on outcomes-based education (OBE). This paper presents an exploration of what students (as learners) considered to be difficult and interesting in Grades 10-12 Life Sciences…
Hernández-Guerrero, César; Romo-Palafox, Inés; Díaz-Gutiérrez, Mary Carmen; Iturbe-García, Mariana; Texcahua-Salazar, Alejandra; Pérez-Lizaur, Ana Bertha
2013-11-01
Oxidative stress is a key factor in the development of the principal comorbidities of obesity. Methylenetetrahydrofolate reductase enzyme (MTHFR) participates in the metabolism of folate with the action of vitamins B6 and B12. The gene of MTHFR may present a single nucleotide polymorphism (SNP) at position 677 (C677T), which can promote homocysteinemia associated to the production of free radicals. To determine the frequency of SNP C677T of the MTHFR, evaluate the consumption of vitamins B6, B9, B12 and determine the concentration of plasma lipid hydroperoxides (LOOH) in obese and control groups. 128 Mexican mestizo according to their body mass index were classified as normal weight (Nw; n=75) and obesity (ObeI-III; n=53). Identification of SNP C677T of MTHFR was performed by PCR-RFLP technic. The consumption of vitamins B6, B9 and B12 was assessed by a validate survey. LOOH was determined as an indicator of peripheral oxidative stress. There was no statistical difference in the frequency of the C677T polymorphism between the TT homozygous genotype in Nw (0.19) and ObeI-III (0.25). The frequency of T allele in Nw was 0.45 and 0.51 in ObI-III group. There were no statistical differences in the consumption of vitamins B6, B9 and B12 between Nw and ObI-III groups. The LOOH showed statistical difference (p < 0.05) between Nw and ObI–III group. Oxidative stress is present in all grades of obesity although there were no differences in the vitamin consumption and the SNP C677T between Nw and ObeI–III groups. Copyright AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
Combined Induction of Rubber-Hand Illusion and Out-of-Body Experiences
Olivé, Isadora; Berthoz, Alain
2012-01-01
The emergence of self-consciousness depends on several processes: those of body ownership, attributing self-identity to the body, and those of self-location, localizing our sense of self. Studies of phenomena like the rubber-hand illusion (RHi) and out-of-body experience (OBE) investigate these processes, respectively for representations of a body-part and the full-body. It is supposed that RHi only target processes related to body-part representations, while OBE only relates to full-body representations. The fundamental question whether the body-part and the full-body illusions relate to each other is nevertheless insufficiently investigated. In search for a link between body-part and full-body illusions in the brain we developed a behavioral task combining adapted versions of the RHi and OBE. Furthermore, for the investigation of this putative link we investigated the role of sensory and motor cues. We established a spatial dissociation between visual and proprioceptive feedback of a hand perceived through virtual reality in rest or action. Two experimental measures were introduced: one for the body-part illusion, the proprioceptive drift of the perceived localization of the hand, and one for the full-body illusion, the shift in subjective-straight-ahead (SSA). In the rest and action conditions it was observed that the proprioceptive drift of the left hand and the shift in SSA toward the manipulation side are equivalent. The combined effect was dependent on the manipulation of the visual representation of body parts, rejecting any main or even modulatory role for relevant motor programs. Our study demonstrates for the first time that there is a systematic relationship between the body-part illusion and the full-body illusion, as shown by our measures. This suggests a link between the representations in the brain of a body-part and the full-body, and consequently a common mechanism underpinning both forms of ownership and self-location. PMID:22675312
ERIC Educational Resources Information Center
Morcke, Anne Mette; Dornan, Tim; Eika, Berit
2013-01-01
Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…
Implications of Outcomes-Based Education for Children with Disabilities. Synthesis Report 6.
ERIC Educational Resources Information Center
Thurlow, Martha L.
This paper examines the concept of "outcomes-based education" (OBE), how it was developed, how it relates to other current reforms that encompass the notion of outcomes, and how it relates to students with disabilities in theory and in practice. Outcomes-based education holds that all children can learn and succeed and that schools are…
Policy Enacted--Teachers' Approaches to an Outcome-Based Framework for Course Design
ERIC Educational Resources Information Center
Barman, Linda; Bolander-Laksov, Klara; Silén, Charlotte
2014-01-01
In this paper, we report on how teachers in Higher Education enact policy. Outcome-based education (OBE) serves as an example of a governmental educational policy introduced with the European Bologna reform. With a hermeneutic approach, we have studied how 14 teachers interpreted this policy and re-designed their courses. The findings show…
USDA-ARS?s Scientific Manuscript database
Malnutrition during the fetal growth period increases risk for later obesity and type 2 diabetes mellitus (T2DM). We have shown that a prenatal low protein (8% protein; LP) diet followed by postnatal high fat (45% fat; HF) diet results in offspring propensity for adipose tissue catch-up growth, obes...
The Implementation of the New Lower Secondary Science Curriculum in Three Schools in Rwanda
ERIC Educational Resources Information Center
Nsengimana, Théophile; Ozawa, Hiroaki; Chikamori, Kensuke
2014-01-01
In 2006, Rwanda began implementing an Outcomes Based Education (OBE) lower secondary science curriculum that emphasises a student-centred approach. The new curriculum was designed to transform Rwandan society from an agricultural to a knowledge-based economy, with special attention to science and technology education. Up until this point in time…
Outcome-Based Education and Student Learning in Managerial Accounting in Hong Kong
ERIC Educational Resources Information Center
Lui, Gladie; Shum, Connie
2012-01-01
Although Outcome-based Education has not been successful in public education in several countries, it has been successful in the medical fields in higher education in the U.S. The author implemented OBE in her Managerial Accounting course in H.K. Intended learning outcomes were mapped again Bloom's Cognitive Domain. Teaching and learning…
Charles B. Sims; Donald G. Hodges; Del Scruggs
2004-01-01
Rural economies in many parts of the United States have undergone significant changes over the past two decades. Faltering economies historically based on traditional economic sectors like agriculture and manufacturing are transitioning to retail and service sectors to support growth. One example of such an industry is resource-based recreation and tourism. Tourists...
ERIC Educational Resources Information Center
de Jager, H. J.; Nieuwenhuis, F. J.
2005-01-01
South Africa has embarked on a process of education renewal by adopting outcomes-based education (OBE). This paper focuses on the linkages between total quality management (TQM) and the outcomes-based approach in an education context. Quality assurance in academic programmes in higher education in South Africa is, in some instances, based on the…
USDA-ARS?s Scientific Manuscript database
Objective: Endocannabinoid system (ECS) overactivation is associated with increased adiposity and likely contributes to type II diabetes risk. Elevated tissue cannabinoid receptor 1 (CB1) and circulating endocannabinoids derived from the n-6 polyunsaturated acid (PUFA) arachidonic acid occur in obes...
ERIC Educational Resources Information Center
Singh, P.
2011-01-01
Because of its history from apartheid to democracy, the aspiration to reform schools is a recurrent theme in South African education. Efforts to reform education in schools based on the outcomes-based education (OBE) curriculum approach created major challenges for policy makers in South Africa. The purpose of this exploratory research was…
NASA Astrophysics Data System (ADS)
Salleh, I. Mohd; Mat Rani, M.
2017-12-01
This paper aims to discuss the effectiveness of the Learning Outcome Attainment Measurement System in assisting Outcome Based Education (OBE) for Aviation Engineering Higher Education in Malaysia. Direct assessments are discussed to show the implementation processes that become a key role in the successful outcome measurement system. A case study presented in this paper involves investigation on the implementation of the system in Aircraft Structure course for Bachelor in Aircraft Engineering Technology program in UniKL-MIAT. The data has been collected for five semesters, starting from July 2014 until July 2016. The study instruments used include the report generated in Learning Outcomes Measurements System (LOAMS) that contains information on the course learning outcomes (CLO) individual and course average performance reports. The report derived from LOAMS is analyzed and the data analysis has revealed that there is a positive significant correlation between the individual performance and the average performance reports. The results for analysis of variance has further revealed that there is a significant difference in OBE grade score among the report. Independent samples F-test results, on the other hand, indicate that the variances of the two populations are unequal.
Altus AFB, Oklahoma Revised Uniform Summary of Surface Weather Observations (RUSSWO). Parts A-F.
1985-09-01
4 _ 2; 4, InI Air Weather Service ( MAC) Aft 1 REVISED UNIFOCM SUMMAARY CW SC IL!8k 2 SURFACE WATHER OBE3RVATION$ 2b1l__ ALTUS m~F3 OK.MC 732 4 40 99...BRANCH PERCENIA6E FRECQUENCY OF OCCURRENCE OF CEILING VERSUS VISIBILIIV USAFTEAC FRON HOURLY OBSERVATIONS AIR WATHER SERVICE/HAC STATION NUMBER: 123520
ERIC Educational Resources Information Center
Baldwin, George
California State University Monterey Bay (CSUMB) is the newest university in the CSU system. CSUMB's vision statement distinguishes the institution from others in the system by promoting learning paradigms of Outcome Based Education (OBE) and communication technologies of distributed learning (DL). Faculty are committed to the experimental use of…
Reflective mirrors: perspective-taking in autoscopic phenomena.
Brugger, Peter
2002-08-01
''Autoscopic phenomena refer to different illusory reduplications of one's own body and self. This article proposes a phenomenological differentiation of autoscopic reduplication into three distinct classes, i.e., autoscopic hallucinations, heautoscopy, and out-of-body experiences (OBEs). Published cases are analysed with special emphasis on the subject's point of view from which the reduplication is observed. In an autoscopic hallucination the observer's perspective is clearly body-centred, and the visual image of one's own body appears as a mirror reversal. Heautoscopy (i.e., the encounter with an alter ego or doppelgänger), is defined as a reduplication not only of bodily appearance, but also of aspects of one's psychological self. The observer's perspective may alternate between egocentric and ''alter-ego-centred''. As a consequence of the projection of bodily feelings into the doppelgänger (implying a mental rotation of one's own body along the vertical axis), original and reduplicated bodies are not mirror images of one another. This also holds for OBEs, where one's self is not reduplicated but appears to be completely dissociated from the body and observing it from a location in extracorporeal space. It is argued that perspective-taking in a spatial sense may be meaningfully related to perspective-taking in a psychological sense. The mirror in the autoscopic hallucination is a ''cognitively nonreflective mirror'' (Jean Cocteau), both spatially and psychologically. The reflective abilities of the heautoscopic mirror are better developed, yet frequent shifts in the observer's spatial perspective render the nature of psychological interactions between self and alter ego highly unpredictable. The doppelgänger may serve a transitivistic (i.e., own suffering is transferred to the alter ego) or aggressive function when this behaviour is directed against a patient. The mirror in an OBE is always reflective: It allows the self to view both space and one's psychological state from a detached but stable perspective. Spatial perspective-taking should be more thoroughly assessed in patients reporting autoscopic phenomena. By elucidating the interactions between spatial phenomenology and psychological function, we may gain important insights into the relationships between the self, its body, and phenomenal space.
Immunochemistry of Rat Lung Tumorigenesis
1983-01-01
in the autochthonous host ( Prehn , 1957; Klein et al., 1960). A large number of chemically induced tumors were shown to be antigenic (Baldwin, 1967... Prehn , 1962). Even tumors induced by physical means such as ultraviolet radiation possess neoantigens although their antigenicity is weak (Klein et al...Immune System, Raven Press, New York. Basoinbrio, M.A., and Prehn , R.T. (1972). Cancer Res. 32:2545-2550. Beck, B., and Obe, G. (1975). Humangenetik 29
de Zwaan, Martina; Herpertz, Stephan; Zipfel, Stephan; Svaldi, Jennifer; Friederich, Hans-Christoph; Schmidt, Frauke; Mayr, Andreas; Lam, Tony; Schade-Brittinger, Carmen; Hilbert, Anja
2017-10-01
Although cognitive behavioral therapy (CBT) represents the criterion standard for treatment of binge eating disorder (BED), most individuals do not have access to this specialized treatment. To evaluate the efficacy of internet-based guided self-help (GSH-I) compared with traditional, individual face-to-face CBT. The Internet and Binge Eating Disorder (INTERBED) study is a prospective, multicenter, randomized, noninferiority clinical trial (treatment duration, 4 months; follow-ups, 6 months and 1.5 years). A volunteer sample of 178 adult outpatients with full or subsyndromal BED were recruited from 7 university-based outpatient clinics from August 1, 2010, through December 31, 2011; final follow-up assessment was in April 2014. Data analysis was performed from November 30, 2014, to May 27, 2015. Participants received 20 individual face-to-face CBT sessions of 50 minutes each or sequentially completed 11 internet modules and had weekly email contacts. The primary outcome was the difference in the number of days with objective binge eating episodes (OBEs) during the previous 28 days between baseline and end of treatment. Secondary outcomes included OBEs at follow-ups, eating disorder and general psychopathologic findings, body mass index, and quality of life. A total of 586 patients were screened, 178 were randomized, and 169 had at least one postbaseline assessment and constituted the modified intention-to-treat analysis group (mean [SD] age, 43.2 [12.3] years; 148 [87.6%] female); the 1.5-year follow-up was available in 116 patients. The confirmatory analysis using the per-protocol sample (n = 153) failed to show noninferiority of GSH-I (adjusted effect, 1.47; 95% CI, -0.01 to 2.91; P = .05). Using the modified intention-to-treat sample, GSH-I was inferior to CBT in reducing OBE days at the end of treatment (adjusted effect, 1.63; 95% CI, 0.17-3.05; P = .03). Exploratory longitudinal analyses also showed the superiority of CBT over GSH-I by the 6-month (adjusted effect, 0.36; 95% CI, 0.23-0.55; P < .001) but not the 1.5-year follow-up (adjusted effect, 0.91; 95% CI, 0.54-1.50; P = .70). Reductions in eating disorder psychopathologic findings were significantly higher in the CBT group than in the GSH-I group at 6-month follow-up (adjusted effect, -0.4; 95% CI, -0.68 to -0.13; P = .005). No group differences were found for body mass index, general psychopathologic findings, and quality of life. Face-to-face CBT leads to quicker and greater reductions in the number of OBE days, abstinence rates, and eating disorder psychopathologic findings and may be a better initial treatment option than GSH-I. Internet-based guided self-help remains a viable, slower-acting, low-threshold treatment alternative compared with CBT for adults with BED. isrctn.org Identifier: ISRCTN40484777 and germanctr.de Identifier: DRKS00000409.
Long-term infusions of ghrelin and obestatin in early lactation dairy cows.
Roche, J R; Sheahan, A J; Chagas, L M; Blache, D; Berry, D P; Kay, J K
2008-12-01
Ghrelin is an endogenous ligand of the growth hormone secretagogue receptor and a potential orexigenic agent in monogastrics and ruminants. Obestatin has been reported to have the opposite (anorexigenic) effect. Fifty one multiparous cows were randomly allocated to 1 of 3 groups (n = 17): a control group and 2 groups with cows continuously infused with 0.74 mumol/d of ghrelin (GHR group) or obestatin (OBE group) subcutaneously. Infusions began 21 d in milk, and treatments continued for 8 wk. Generalized linear models were used to determine the treatment effect on average daily and cumulative milk production and composition, and plasma ghrelin, growth hormone, insulin-like growth factor (IGF)-1, leptin, nonesterified fatty acids, and glucose. Mixed models, with cow included as a repeated effect, were used to determine if treatment effects differed by week postcalving for milk production, body weight, and body condition score (BCS; scale 1 to 10). Parity, breed, week of the year at calving, treatment, week postcalving, and the 2 wk preexperimental average of each measure (covariate) were included as fixed effects. Treatment did not affect dry matter intake. Cows infused with GHR lost more BCS (-0.71 units) over the 8-wk study period than the control (-0.23 BCS units) cows, and on average were thinner than cows in either of the other 2 treatments (0.2 BCS units). Consistent with the extra BCS loss in GHR cows, plasma IGF-1, glucose, and leptin concentrations were reduced and plasma nonesterified fatty acid concentrations were greater in GHR cows. Despite a numerical tendency for GHR cows to produce more milk (1,779 kg) than control (1,681 kg) or OBE (1,714 kg) cows during the 8-wk period, milk production differences were not statistically different. However, the timing of the numerical separation of the lactation curves coincided with the significant changes in BCS, IGF-1, and leptin. Results indicate a positive effect of ghrelin infusion on lipolysis. Further research is required to determine if the numerical increase in milk production, which coincides with the increased negative energy balance, is real.
An Implantable Neuroprosthetic Device to Normalize Bladder Function after SCI
2014-12-01
intermittent vagal block using an implantable medical device. Surgery for Obesity and Related Diseases 5, 224-230. 8. Frankenhaeuser B (1960...of vagal blockade to induce weight loss in morbid obesity . Obes Surg 2012;22:1771–1782. 15. Waataja JJ, Tweden KS, Honda CN. Effects of high-frequency...and Rosenblueth 1939; Rosenblueth and Reboul 1939). Recently this nerve block method has been applied to treat obesity (Camilleri et al. 2009; Wattaja
ERIC Educational Resources Information Center
Desmond, Cheryl Taylor
In Johnson City, New York, the schools have sustained positive, meaningful educational change since 1964. The Johnson City schools have also given birth to the national movement of Outcome-Based Education (OBE). This book provides a cultural history of the relationship between community and school in school reform. The book describes the…
2014-03-27
simulant similar in structure to sarin (Obee and Satyapal, 1998). Literature on the biodegradation of DMMP is limited. In 2005, the DMMP Consortium...undergoes fermentation to acetate and hydrogen. Other 9 substrates, such as such sugars, may ferment to ethanol first. Current production occurs from...the ARB utilization of the fermentation product acetate, but electrons are lost in the form of hydrogen to methanogenesis. Therefore, the current
Medical Surveillance Monthly Report
2016-10-01
women aged 30–70 years suffering from OSA.2 The prevalence of OSA has been ris- ing and is associated with changing obe- sity prevalence, as obesity is...information on the burden of disease in military subpopulations and the associa- tion of obesity with OSA. M E T H O D S The surveillance period for this...of the Defense Manpower Data Center), marital status, and obesity sta- tus. To calculate obese person-time dur- ing the surveillance period
Cooke, Colin A; Schwindt, Colin; Davies, Martin; Donahue, William F; Azim, Ekram
2016-07-01
On October 31, 2013, a catastrophic release of approximately 670,000m(3) of coal process water occurred as the result of the failure of the wall of a post-processing settling pond at the Obed Mountain Mine near Hinton, Alberta. A highly turbid plume entered the Athabasca River approximately 20km from the mine, markedly altering the chemical composition of the Athabasca River as it flowed downstream. The released plume traveled approximately 1100km downstream to the Peace-Athabasca Delta in approximately four weeks, and was tracked both visually and using real-time measures of river water turbidity within the Athabasca River. The plume initially contained high concentrations of nutrients (nitrogen and phosphorus), metals, and polycyclic aromatic hydrocarbons (PAHs); some Canadian Council of Ministers of the Environmental (CCME) Guidelines were exceeded in the initial days after the spill. Subsequent characterization of the source material revealed elevated concentrations of both metals (arsenic, lead, mercury, selenium, and zinc) and PAHs (acenaphthene, fluorene, naphthalene, phenanthrene, and pyrene). While toxicity testing using the released material indicated a relatively low or short-lived acute risk to the aquatic environment, some of the water quality and sediment quality variables are known carcinogens and have the potential to exert negative long-term impacts. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Relativistic extended Thomas-Fermi calculations with exchange term contributions
NASA Astrophysics Data System (ADS)
Haddad, S.; Weigel, M. K.
1994-10-01
In this investigation we present self-consistent relativistic extended Thomas-Fermi (ETF) and extended Thomas-Fermi-Fock (ETFF) approaches, derived from the semiclassical treatment of the relativistic nuclear Hartree-Fock problem. The approximations are used to describe the ground-state properties of finite nuclei. The resulting equations are solved numerically for several one-boson-exchange (OBE) lagrangians. The results are discussed and compared with the outcome of full quantal Hartree and Hartree-Fock calculations, other semiclassical treatments and experimental data.
Summaries of FY 1994 geosciences research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-12-01
The Geosciences Research Program is directed by the Department of Energy`s (DOE`s) Office of Energy Research (OER) through its Office of Basic Energy Sciences (OBES). Activities in the Geosciences Research Program are directed toward the long-term fundamental knowledge of the processes that transport, modify, concentrate, and emplace (1) the energy and mineral resources of the earth and (2) the energy byproducts of man. The Program is divided into five broad categories: Geophysics and earth dynamics; Geochemistry; Energy resource recognition, evaluation, and utilization; Hydrogeology and exogeochemistry; and Solar-terrestrial interactions. The summaries in this document, prepared by the investigators, describe the scopemore » of the individual programs in these main areas and their subdivisions including earth dynamics, properties of earth materials, rock mechanics, underground imaging, rock-fluid interactions, continental scientific drilling, geochemical transport, solar/atmospheric physics, and modeling, with emphasis on the interdisciplinary areas.« less
A Simple Technique for Jejunojejunal Revision in Laparoscopic Roux-en-Y Gastric Bypass.
Spivak, Hadar
2015-12-01
The lengths of the bypassed segments in the initial laparoscopic roux-en-Y gastric bypass (LRYGB) are usually a matter of the individual surgeon's routine. The literature is inconclusive about the association between the Roux limbs' length and weight-loss or malabsorption (Stefanidis et al. Obes Surg. 21(1):119-24, 2011); (Rawlins et al. Surg Obes Relat Dis. 7(1):45-9, 2011). However, jejunojejunal anastomosis (JJ) "redo" and Roux limb length revision could be considered for patients with a very short Roux limb and weight loss failure or for short common channel and malabsorption. Complications of JJ may also require revision. In over 1000 LRYGBs since 2001, eight patients required JJ revision for failure to lose enough weight (n = 6), malabsorption (n = 1), and stricture (n = 1). Instead of completely taking down the JJ, a simple technique was evolved to keep the enteric limb continuity. In a following step, the biliopancreatic limbs have been transected from the JJ and reconnected proximal (for malabsorption) or distal (for weight loss failure). In this video, a step-by-step the laparoscopic technique for JJ revision and relocating the biliopancreatic limb is presented. Procedure takes 40-60 min to perform using four trocars and the hospital stay was 1-2 nights. No complications occurred during the procedures or postoperative period. Laparoscopic revision of JJ is feasible and safe and should be part of surgeons' options on the long-term management of patients post LRYGB.
Modeling of beryllium sputtering and re-deposition in fusion reactor plasma facing components
NASA Astrophysics Data System (ADS)
Zimin, A. M.; Danelyan, L. S.; Elistratov, N. G.; Gureev, V. M.; Guseva, M. I.; Kolbasov, B. N.; Kulikauskas, V. S.; Stolyarova, V. G.; Vasiliev, N. N.; Zatekin, V. V.
2004-08-01
Quantitative characteristics of Be-sputtering by hydrogen isotope ions in a magnetron sputtering system, the microstructure and composition of the sputtered and re-deposited layers were studied. The energies of H + and D + ions varied from 200 to 300 eV. The ion flux density was ˜3 × 10 21 m -2 s -1. The irradiation doses were up to 4 × 10 25 m -2. For modeling of the sputtered Be-atom re-deposition at increased deuterium pressures (up to 0.07 torr), a mode of operation with their effective return to the Be-target surface was implemented. An atomic ratio O/Be ≅ 0.8 was measured in the re-deposited layers. A ratio D/Be decreases from 0.15 at 375 K to 0.05 at 575 K and slightly grows in the presence of carbon and tungsten. The oxygen concentration in the sputtered layers does not exceed 3 at.%. The atomic ratio D/Be decreases there from 0.07 to 0.03 at target temperatures increase from 350 to 420 K.
A Mixing Theory for the Interaction between Dissipative Flows and Nearly-Isentropic Streams
1952-01-15
mTVEfOBEE? "- - AEROmUT-IG-AL ENGINEERING i^ORAjPORY i January 15j t--:l A- • *-,- ß» -*- •AbiMÖWEEEGEMSHT - a "I- " Es A- major portion...incident oblique shock, airfoil -chord -_- _ ’ - . - - •airf oil thickness at: -trailing edge •t v:il -:na Es : EeynöiJidG number,. &;^e...von Karman momentum integral for the. dissipatiVe flow region,, where.;, however> this internal flow is treated, as quäai- Que ."uiU4euBi!oncUt
Death and consciousness--an overview of the mental and cognitive experience of death.
Parnia, Sam
2014-11-01
Advances in resuscitation science have indicated that, contrary to perception, death by cardiorespiratory criteria can no longer be considered a specific moment but rather a potentially reversible process that occurs after any severe illness or accident causes the heart, lungs, and brain to stop functioning. The resultant loss of vital signs of life (and life processes) is used to declare a specific time of death by physicians globally. When medical attempts are made to reverse this process, it is commonly referred to as cardiac arrest; however, when these attempts do not succeed or when attempts are not made, it is called death by cardiorespiratory criteria. Thus, biologically speaking, cardiac arrest and death by cardiorespiratory criteria are synonymous. While resuscitation science has provided novel opportunities to reverse death by cardiorespiratory criteria and treat the potentially devastating consequences of the resultant postresuscitation syndrome, it has also inadvertently provided intriguing insights into the likely mental and cognitive experience of death. Recollections reported by millions of people in relation to death, so-called out-of-body experiences (OBEs) or near-death experiences (NDEs), are often-discussed phenomena that are frequently considered hallucinatory or illusory in nature; however, objective studies on these experiences are limited. To date, many consistent themes corresponding to the likely experience of death have emerged, and studies have indicated that the scientifically imprecise terms of NDE and OBE may not be sufficient to describe the actual experience of death. While much remains to be discovered, the recalled experience surrounding death merits a genuine scientific investigation without prejudice. © 2014 New York Academy of Sciences.
Isospin flip as a relativistic effect: NN interactions
NASA Technical Reports Server (NTRS)
Buck, W. W.
1993-01-01
Results are presented of an analytic relativistic calculation of a OBE nucleon-nucleon (NN) interaction employing the Gross equation. The calculation consists of a non-relativistic reduction that keeps the negative energy states. The result is compared to purely non-relativistic OBEP results and the relativistic effects are separated out. One finds that the resulting relativistic effects are expressable as a power series in (tau(sub 1))(tau(sub 2)) that agrees, qualitatively, with NN scattering. Upon G-parity transforming this NN potential, one obtains, qualitatively, a short range NN spectroscopy in which the S-states are the lowest states.
Software Engineering Education.
1987-05-01
A-D-AI82bN3 JO U4WR El 14N NWRC j MCA! CIo~~l-MN~J02Ua~ 1/1 NYSI/ EI- -TR- SWT -8 -L U NCLA SSI F IED PAYF/G L2/5 MI. 1.0I2 W 136 2’ UN,, - mll I m...tghteeight aurriuls . empae ta t i ohe An additional refinement of the curriculum content can bem aterial taught m ight also be taught in co urses w hose a...descriptions of possible courses. Bloo -. [Bioom56] has defined a taxonomy of educational Software System Clsae. Several different classes can obe
Coordinated Research Program in Pulsed Power Physics.
1985-12-20
8217). Stale. and ZIP Code) 10 SOURCE OF FUNDING NOS. PROGRAM PROJECT TASK WORK UNIT ELE ME NT NO. NO. NO. No. 11.?ILE.ic.ecufC~sjf~aton 1 c 61102F 2301 A7 12...SYMBOLI lncludr Arra Code) 5" Major B. Smith j202/767-4908 AFOSR/NP FORM 1473. E3 APR EDITION OF I..AN 73 IS OBeCLETE Unclassified SEC A17 C! ww...fields at localized points in pulsed power systems*. In addition, as in previous years, new projects will be added as new ideas are generated. Funds for
Conceição, Eva; Mitchell, James E; Vaz, Ana R; Bastos, Ana P; Ramalho, Sofia; Silva, Cátia; Cao, Li; Brandão, Isabel; Machado, Paulo P P
2014-12-01
Maladaptive eating behaviors after bariatric surgery are thought to compromise weight outcomes, but little is known about their frequency over time. This study investigates the presence of subjective binge eating (SBE), objective binge eating (OBE) and picking and nibbling (P&N) before surgery and at different time periods postoperative, and their association with weight outcomes. This cross-sectional study assessed a group of patients before surgery (n=61), and three post-operative groups: 1) 90 patients (27 with laparoscopic adjustable gastric band (LAGB) and 63 with Laparoscopic Roux-en-Y Gastric Bypass (LRYGB)) assessed during their 6month follow-up medical appointment; 2) 96 patients (34 LAGB and 62 LRYGB) assessed during their one year follow-up medical appointment; and 3) 127 patients (62 LAGB and 55 LRYGB) assessed during their second year follow-up medical appointment. Assessment included the Eating Disorders Examination and a set of self-report measures. In the first ten months after surgery fewer participants reported maladaptive eating behaviors. No OBEs were reported at 6months. SBE episodes were present in all groups. P&N was the most frequently reported eating behavior. Eating behavior (P&N) was significantly associated with weight regain, and non-behavioral variables were associated with weight loss. This study is cross-sectional study which greatly limits the interpretation of outcomes and no causal association can be made. However, a subgroup of postoperative patients report eating behaviors that are associated with greater weight regain. The early detection of these eating behaviors might be important in the prevention of problematic outcomes after bariatric surgery. Copyright © 2014 Elsevier Ltd. All rights reserved.
Deuteron Compton scattering below pion photoproduction threshold
NASA Astrophysics Data System (ADS)
Levchuk, M. I.; L'vov, A. I.
2000-07-01
Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data.
Continental Scientific Drilling Program Data Base
NASA Astrophysics Data System (ADS)
Pawloski, Gayle
The Continental Scientific Drilling Program (CSDP) data base at Lawrence Livermore National Laboratory is a central repository, cataloguing information from United States drill holes. Most holes have been drilled or proposed by various federal agencies. Some holes have been commercially funded. This data base is funded by the Office of Basic Energy Sciences of t he Department of Energy (OBES/DOE) to serve the entire scientific community. Through the unrestricted use of the database, it is possible to reduce drilling costs and maximize the scientific value of current and planned efforts of federal agencies and industry by offering the opportunity for add-on experiments and supplementing knowledge with additional information from existing drill holes.
NASA Astrophysics Data System (ADS)
2007-08-01
The Queen's Birthday Honours list announced on 16 June contained some familiar names from astronomy. Prof. Mark Bailey (1) of Armagh Observatory, currently a Vice-President of the RAS, was awarded an MBE and Dr Heather Couper (2), former President of the British Astronomical Association, a CBE. Prof. Nigel Mason (3) of the Open University and inaugural Director of the Milton Keynes Science Festival received an OBE. Prof. Jocelyn Bell-Burnell (4), President of the RAS from 2002-2004, was awarded a DBE - and an Honorary Doctorate from Harvard University. In addition, Prof. Lord Rees (5), Astronomer Royal, president of the Royal Society and President of the RAS from 1992-1994, was appointed to the Order of Merit.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points define each input are included, along with plots of the input time histories.
Two-dimensional over-all neutronics analysis of the ITER device
NASA Astrophysics Data System (ADS)
Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi
1993-07-01
The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). The two-dimensional cylindrical over-all calculational models of ITER CDA device including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures were developed for this purpose with a help of the DOGII code. Two dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculational code CINAC was employed for the calculations of exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes the design specifics such as the pebble bed Li2O/Be layered blanket, the thin double wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug etc. All the above-mentioned design specifics were included in the employed calculational models. Some alternative design options, such as the water-rich shielding blanket instead of lithium-bearing one, the additional biological shield plug at the top zone between the poloidal field (PF) coil No. 5, and the maintenance shield plug, were calculated as well. Much efforts have been focused on analyses of obtained results. These analyses aimed to obtain necessary recommendations on improving the ITER CDA design.
[Epilepsy and psychic seizures].
Fukao, Kenjiro
2006-01-01
Various psychic symptoms as ictal manifestation have been found in epileptic patients. They are classified as psychic seizures within simple partial seizures, and subclassified into affective, cognitive, dysmnesic seizures and so on, although the subclassification is not yet satisfactory and almost nothing is known about their relationships with normal brain functions. In this presentation, the speaker picked ictal fear, déjà vu and out-of-body experience (OBE) from them and suggested that studies on these symptoms could uniquely contribute to the progress of cognitive neuroscience, presenting some results from the research and case study that he had been engaged in. Psychic seizures are prone to be missed or misdiagnosed unless psychiatrists with sufficient knowledge and experience on epilepsy care would not treat them, because they are subjective symptoms that are diverse and subtle, while they have some characteristics as ictal symptoms.
Changing times, similar challenges.
Baillie, Jonathan
2013-11-01
With IHEEM celebrating its 70th Anniversary this month, HEJ editor, Jonathan Baillie, recently met the Institute's oldest surviving Past-President, Lawrence Turner OBE, who, having in 1964 established a small engineering business producing some of the NHS's earliest nurse call systems from the basement of his three-storey West Midlands home, has since seen the company, Static Systems Group, grow to become one of the U.K. market-leaders in its field. The Institute's President from 1979-1981, he looked back, during a fascinating two-hour discussion, at his time in the role, talked through some of the key technological and other changes he has seen in the past five decades, reflected on an interesting and varied career, and considered some of the very different current-day challenges that today's IHEEM President, and the Institute as a whole, face.
Geoffrey Layton Slack OBE (Mil), CBE, TD, BDS DDS, FDSRCS, FDS Glas, FFDRCSI, Dip Bact (1912-1991).
Gelbier, Stanley
2014-02-01
It is with some pride that the author worked in Geoffrey Slack's department from 1963 to 1967 and even retained a working relationship with him after that time. Slack was Professor of Dental Surgery (1959-1976) and later Professor of Community Dental Health (1976-1977) at The London Hospital Medical College, within the University of London. The change in titles came about as a result of recognition of his contribution to developments in public health and community dental care and services, for many of which he was directly responsible. He was Dental Dean from 1965 until 1969. Upon retirement in 1977 he became Emeritus Professor. In addition, he was Dean of the Faculty of Dental Surgery at the Royal College of Surgeons of England from 1974 to 1977.
Indirect double photoionization of water
NASA Astrophysics Data System (ADS)
Resccigno, T. N.; Sann, H.; Orel, A. E.; Dörner, R.
2011-05-01
The vertical double ionization thresholds of small molecules generally lie above the dissociation limits corresponding to formation of two singly charged fragments. This gives the possibility of populating singly charged molecular ions by photoionization in the Franck-Condon region at energies below the lowest dication state, but above the dissociation limit into two singly charged fragment ions. This process can produce a superexcited neutral fragment that autoionizes at large internuclear separation. We study this process in water, where absorption of a photon produces an inner-shell excited state of H2O+ that fragments to H++OH*. The angular distribution of secondary electrons produced by OH* when it autoionizes produces a characteristic asymmetric pattern that reveals the distance, and therefore the time, at which the decay takes place. LBNL, Berkeley, CA, J. W. Goethe Universität, Frankfurt, Germany. Work performed under auspices of US DOE and supported by OBES, Div. of Chemical Sciences.
Three cases of near death experience: Is it physiology, physics or philosophy?
Purkayastha, Moushumi; Mukherjee, Kanchan Kumar
2012-07-01
Near-Death experience (NDE) following a severe head injury, critical illness, coma, and suicidal attempt has been reported. Purpose of study was to examine why a few patients report NDE after survival, do cultural and socio-demographic factors may play a role? The details of 3 cases of patients who reported near-death experience (NDE), is presented here. Several theories regarding the reasons, of the various components of the experiences, are discussed with a brief review of literature. All the three patients report the out of body experience OBE. All the three patients reported to remember initially the events that took place during this time, but after some time all three patients could not recall exactly the events that had happened. Whether these are only hallucinations or a proof of 'after life' will remain debatable until more data is communicated.
NASA Astrophysics Data System (ADS)
Wells, Aaron Raymond
This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from 57.42 to 25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands. Results of robust, generalized moments estimation of a double logarithmic first-order spatial autoregressive error model (inverse distance weights with spatial dependence up to 1500m) indicate that the implicit cost of damages to forested wetlands caused by channelization equaled -$5,438 ha-1. Collectively, the results of this dissertation provide economic measures of the damages to and benefits of environmental assets, help private landowners and policy makers identify the amenity attributes preferred by the public, and improve the management of natural resources.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1995-01-01
This report is a compilation of PID (Proportional Integral Derivative) results for both longitudinal and lateral directional analysis that was completed during Fall 1994. It had earlier established that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was due to the fact that these maneuvers were designed with the goal of minimizing any lateral directional motion during longitudinal maneuvers and vice-versa. This allows for greater simplification in the aerodynamic model as far as coupling between longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze this data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms such that more accurate estimates of classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, p dot, rdot and phi dot were used. Enclosed with this document are the full set of longitudinal and lateral directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of meneuver tested at each angle of attack is also enclosed.
Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane
2016-11-01
Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot head movements and walk, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.
Our unacknowledged ancestors: dream theorists of antiquity, the middle ages, and the renaissance.
Rupprecht, C S
1990-06-01
Exploring the dream world from a modern, or post-modern, perspective, especially through the lens of contemporary technologies, often leads us as researchers to see ourselves as engaged in a new and revolutionary discourse. In fact, this self-image is a profoundly ahistorical one, because it ignores the contributions of ancient, medieval and Renaissance oneirologists who wrote extensively, albeit in different terms and images of lucidity, prerecognition, day residue, wish fulfillment, incubation, problem solving, REM, obe, and the collective unconscious. There are also analogues in these early accounts to anxiety, recurrent, mirror, telepathic, shared, flying, and death dreams. Dream interpretation through music, analysis of dream as narrative, sophisticated theories about memory and language and symbolization are all part of the tradition. Further, early texts pose many issues in sleep and dream research which are not currently being pursued. We dream workers of the late twentieth century should therefore fortify ourselves with knowledge of the oneiric past as one important way to enhance our dream work in the twenty-first century.
Summaries of FY 1996 geosciences research
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
The Geosciences Research Program is directed by the Department of Energy`s (DOE`s) Office of Energy Research (OER) through its Office of Basic Energy Sciences (OBES). Activities in the Geosciences Research Program are directed toward building the long-term fundamental knowledge base necessary to provide for energy technologies of the future. Future energy technologies and their individual roles in satisfying the nations energy needs cannot be easily predicted. It is clear, however, that these future energy technologies will involve consumption of energy and mineral resources and generation of technological wastes. The earth is a source for energy and mineral resources and ismore » also the host for wastes generated by technological enterprise. Viable energy technologies for the future must contribute to a national energy enterprise that is efficient, economical, and environmentally sound. The Geosciences Research Program emphasizes research leading to fundamental knowledge of the processes that transport, modify, concentrate, and emplace (1) the energy and mineral resources of the earth and (2) the energy by-products of man.« less
NASA Astrophysics Data System (ADS)
Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit
2014-05-01
The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge on spatial distribution of landslides, processes and characteristics are important to evaluate the potential risk that can occur from mass movements in those areas. In the frame of two projects about 400 landslides were mapped and detailed data sets were compiled during years 2011 to 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (The Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the (spatial) data entirety of the database in R. The inventory of the database includes (amongst others), informations on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, there are stored spatial objects, which represent the components of a landslide, in particular the scarps and the accumulation areas. Besides, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating values for slope gradients or aspects of landslide areas and for creating multiple, overlaying sections for the comparison of slopes, as well as distances to the infrastructure or to the next receiving drainage. Furthermore, getting informations on landslide magnitudes, distribution and clustering, as well as potential correlations concerning geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent from any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S. 2011. PostGIS in action. - pp 492, Manning Publications, Stamford
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, G.
1995-10-30
The objective of the workshop was to promote discussions between experts and research managers on developing approaches for assessing the impact of DOE`s basic energy research upon the energy mission, applied research, technology transfer, the economy, and society. The purpose of this impact assessment is to demonstrate results and improve ER research programs in this era when basic research is expected to meet changing national economic and social goals. The questions addressed were: (1) By what criteria and metrics does Energy Research measure performance and evaluate its impact on the DOE mission and society while maintaining an environment that fostersmore » basic research? (2) What combination of evaluation methods best applies to assessing the performance and impact of OBES basic research? The focus will be upon the following methods: Case studies, User surveys, Citation analysis, TRACES approach, Return on DOE investment (ROI)/Econometrics, and Expert panels. (3) What combination of methods and specific rules of thumb can be applied to capture impacts along the spectrum from basic research to products and societal impacts?« less
Heritability of the somatotype components in Biscay families.
Rebato, E; Jelenkovic, A; Salces, I
2007-01-01
The anthropometric somatotype is a quantitative description of body shape and composition. Familial studies indicate the existence of a familial resemblance for this phenotype and they suggest a substantial action by genetic factors on this aggregation. The aim of this study is to examine the degree of familial resemblance of the somatotype components and of a factor of shape, in a sample of Biscay nuclear families (Basque Country, Spain). One thousand three hundred and thirty nuclear families were analysed. The anthropometric somatotype components [Carter, J.E.L., Heath, B.H., 1990. Somatotyping. Development and applications. Cambridge University Press, Cambridge, p. 503] were computed. Each component was fitted for the other two through a stepwise multiple regression, and also fitted through the LMS method [Cole, T., 1988. Fitting smoothed centile curves to reference data. J. Roy. Stat. Soc. 151, 385-418] in order to eliminate the age, sex and generation effects. The three raw components were introduced in a PCA from which a shape factor (PC1) was extracted for each generation. The correlations analysis was performed with the SEGPATH package [Province, M.A., Rao, D.C., 1995. General purpose model and computer programme for combined segregation and path analysis (SEGPATH): automatically creating computer from symbolic language model specifications. Genet. Epidemiol. 12, 203-219]. A general model of transmission and nine reduced models were tested. Maximal heritability was estimated with the formula of [Rice, T., Warwick, D.E., Gagnon, J., Bouchard, C., Leon, A.S., Skinner, J.S., Wilmore, J.H., Rao, D.C., 1997. Familial resemblance for body composition measures: the HERITAGE family study. Obes. Res. 5, 557-562]. The correlations were higher between offspring than in parents and offspring and a significant resemblance between mating partners existed. Maximum heritabilities were 55%, 52% and 46% for endomorphy, mesomorphy and ectomorphy, respectively, and 52% for PC1. In conclusion, the somatotype presents a moderate degree of familial aggregation. For the somatotype components, as well as for PC1, the degree of familial resemblance depends on age. The sex only has a significant effect on ectomorphy.
Libman, Kimberly; O’Keefe, Eileen
2010-01-01
As rates of childhood obesity and overweight rise around the world, researchers and policy makers seek new ways to reverse these trends. Given the concentration of the world’s population, income inequalities, unhealthy diets, and patterns of physical activity in cities, urban areas bear a disproportionate burden of obesity. To address these issues, in 2008, researchers from the City University of New York and London Metropolitan University created the Municipal Responses to Childhood Obesity Collaborative. The Collaborative examined three questions: What role has city government played in responding to childhood obesity in each jurisdiction? How have municipal governance structures in each city influenced its capacity to respond effectively? How can policy and programmatic interventions to reduce childhood obesity also reduce the growing socioeconomic and racial/ethnic inequities in its prevalence? Based on a review of existing initiatives in London and New York City, the Collaborative recommended 11 broad strategies by which each city could reduce childhood obesity. These recommendations were selected because they can be enacted at the municipal level; will reduce socioeconomic and racial/ethnic inequalities in obesity; are either well supported by research or are already being implemented in one city, demonstrating their feasibility; build on existing city assets; and are both green and healthy. PMID:20811951
Low temperature specific heat of frustrated antiferromagnet HoInCu4
NASA Astrophysics Data System (ADS)
Weickert, Franziska; Fritsch, Veronika; Bambaugh, Ryan; Sarrao, John; Thompson, Joe D.; Movshovich, Roman
2014-03-01
We present low temperature specific heat measurements of single crystal HoInCu4, down to 35 mK and in magnetic field up to 12 Tesla. Ho atoms are arranged in an FCC lattice of the edge-sharing tetrahedra, and undergo an antiferromagnetic ordering at TN = 0.76 K, with the frustration parameter f = -ΘCW /TN of 14.3. Magnetic AF order is suppressed in field H0 ~ 4 T. The low temperature Schottky anomaly due to Ho evolves smoothly as a function of field through H0 and TN. The peak value of the anomaly remains roughly constant from 0 T to 12 T. The temperature of the anomaly's peak remains constant at TSch ~ 170 mK for H
Walter Laing Macdonald Perry KT OBE, Barron Perry of Walton, 21 June 1921 - 17 July 2003.
Kelly, John S; Horlock, John H
2004-01-01
Lord Perry of Walton died suddenly on 17 July 2003, at the age of 82 years. Walter Laing Macdonald Perry was a native of Dundee, educated at Morgan Academy Dundee, Ayr Academy, Dundee High School and St Andrews University (MB ChB, MD and DSc), winning the Rutherford Silver Medal for his MD thesis and the Sykes Gold Medal for his DSc thesis. After Casaulty Officer and House Surgeon posts in 1943-44, he served as a Medical Officer in the Colonial Medical Service in Nigeria in 1944-46, then briefly as a Medical Officer in the RAF, 1946-47, before embarking on a scientific career on the staff of the Medical Research COuncil at the National Institute for Medical Research from 1947 to 1958, serving as Director of the Department of Biological Standards from 1952 to 1958. Professionally, he achieved MRCP (ED) in 1963 and was elected FRCPE in 1967, FRCP in 1978, FRSE in 1960 and FRS in 1985. In 1958 he came to Edinburgh as Professor of Pharmacology, holding the Chair from 1958 to 1968. During this time he also served as Dean of the Faculty of Medicine (1965-67) and Vice-Principal of the University (1967-68) before leaving to become the inaugural Vice-Chancellor of the Open University in 1968, a post he held until 1980. During this period at the Open University he developed a second distinguish career as a university administrator and a promotor and facilitator of open and distance learning, in which fields he later performed extensive work on behalf of the United Nations. A third career, in politics and public life, began with his ennoblement to a life peerage in 1979, taking the title of Walton in the County of Buckinghamshire, the initial base of the Open University. Latterly Walter sat as a Liberal Democrat, having twice been Social Democratic Party deputy leader in the Lords in the 1980s. He took an active role in the Lords' Select Committee on Science and Technology and held interests in and spoke on many areas of public policy, including fisheries policy. Recognition of his distinguished careers came with a succession of honours; OBE in 1957, Knight Bachelor in 1974 and Baron in 1979; 10 honorary degrees from UK, North American, College London; the Wellcome Gold Medal in 1993 and Inaugural Royal Medal of the Royal Society of Edinburgh in 2000. He was Chairman, President or member of numerous commercial, educational, public interest and scientific bodies. Lord Perry's publications included sole or part authorship of approximately 90 books, research papers and abstracts. Shining through of Walter Perry's careers are strengths of commitment and sheer hard work, rigorous analysis of scientific, educational and organizational problems, experimentation and pursuit of clear objectives. Against scepticism, elitism and ill-informed criticism he drove through the establishment of the Open University. It is today respected internationally, is by some orders of magnitude our largest university in terms of student enrollment and is demonstrably successful outcome from an experiment initiated 40 years ago. It represents a fine monument to Walter Perry.
NASA Astrophysics Data System (ADS)
Hussain, I. S.; Azlee Hamid, Fazrena
2017-08-01
Technical skills are one of the attributes, an engineering student must attain by the time of graduation, as per recommended by Engineering Accreditation Council (EAC). This paper describes the development of technical skills, Programme Outcome (PO) number 5, in students taking the Bachelor of Electrical Power Engineering (BEPE) programme in Universiti Tenaga Nasional (UNITEN). Seven courses are identified to address the technical skills development. The course outcomes (CO) of the courses are designed to instill the relevant technical skills with suitable laboratory activities. Formative and summative assessments are carried out to gauge students’ acquisition of the skills. Finally, to measure the attainment of the technical skills, key course concept is used. The concept has been implemented since 2013, focusing on improvement of the programme instead of the cohort. From the PO attainment analysis method, three different levels of PO attainment can be calculated: from the programme level, down to the course and student levels. In this paper, the attainment of the courses mapped to PO5 is measured. It is shown that Power Electronics course, which is the key course for PO5, has a strong attainment at above 90%. PO5 of other six courses are also achieved. As a conclusion, by embracing outcome-based education (OBE), the BEPE programme has a sound method to develop technical psychomotor skills in the degree students.
Near-death experiences in cardiac arrest survivors.
French, Christopher C
2005-01-01
Near-death experiences (NDEs) have become the focus of much interest in the last 30 years or so. Such experiences can occur both when individuals are objectively near to death and also when they simply believe themselves to be. The experience typically involves a number of different components including a feeling of peace and well-being, out-of-body experiences (OBEs), entering a region of darkness, seeing a brilliant light, and entering another realm. NDEs are known to have long-lasting transformational effects upon those who experience them. An overview is presented of the various theoretical approaches that have been adopted in attempts to account for the NDE. Spiritual theories assume that consciousness can become detached from the neural substrate of the brain and that the NDE may provide a glimpse of an afterlife. Psychological theories include the proposal that the NDE is a dissociative defense mechanism that occurs in times of extreme danger or, less plausibly, that the NDE reflects memories of being born. Finally, a wide range of organic theories of the NDE has been put forward including those based upon cerebral hypoxia, anoxia, and hypercarbia; endorphins and other neurotransmitters; and abnormal activity in the temporal lobes. Finally, the results of studies of NDEs in cardiac arrest survivors are reviewed and the implications of these results for our understanding of mind-brain relationships are discussed.
Laser Induced Fluorescence Spectroscopy of Jet-Cooled CaOCa
NASA Astrophysics Data System (ADS)
Sullivan, Michael N.; Frohman, Daniel J.; Heaven, Michael; Fawzy, Wafaa M.
2016-06-01
The group IIA metals have stable hypermetallic oxides of the general form MOM. Theoretical interest in these species is associated with the multi-reference character of the ground states. It is now established that the ground states can be formally assigned to the M+O^{2-M+} configuration, which leaves two electrons in orbitals that are primarily metal-centered ns orbitals. Hence the MOM species are diradicals with very small energy spacings between the lowest energy singlet and triplet states. Previously, we have characterized the lowest energy singlet transition (1Σ^{+u← X1Σ+g}) of BeOBe. In this study we obtained the first electronic spectrum of CaOCa. Jet-cooled laser induced fluorescence spectra were recorded for multiple bands that occured within the 14,800 - 15,900 cm-1 region. Most of the bands exhibited simple P/R branch rotational line patterns that were blue-shaded. Only even rotational levels were observed, consistent with the expected X 1Σ^{+g} symmetry of the ground state (40Ca has zero nuclear spin). A progression of excited bending modes was evident in the spectrum, indicating that the transition is to an upper state that has a bent equilibrium geometry. Molecular constants were extracted from the rovibronic bands using PGOPHER. The experimental results and interpretation of the spectrum, which was guided by the predictions of electronic structure calculation, will be presented.
Laser Induced Fluorescence Spectroscopy of Jet-Cooled MgOMg
NASA Astrophysics Data System (ADS)
Sullivan, Michael N.; Frohman, Daniel J.; Heaven, Michael; Fawzy, Wafaa M.
2017-06-01
The group IIA metals have stable hypermetallic oxides of the general form MOM. Theoretical interest in these species is associated with the multi-reference character of the ground states. It is now established that the ground states can be formally assigned to the M^{+O^{2-}M^{+}} configuration, which leaves two electrons in orbitals that are primarily metal-centered ns orbitals. Hence the MOM species are diradicals with very small energy spacings between the lowest energy singlet and triplet states. Previously, we have characterized the lowest energy singlet transition (^{1Σ^{+}_{u}← ^{1}Σ^{+}_{g}}) of BeOBe. Preliminary data for the first electronic transition of the isovalent species, CaOCa, was presented previously (71^{st} ISMS, talk RI10). We now report the first electronic spectrum of MgOMg. Jet-cooled laser induced fluorescence spectra were recorded for multiple bands that occurred within the 21,000 - 24,000 cm^{-1} range. Most of the bands exhibited simple P/R branch rotational line patterns that were blue-shaded. Only even rotational levels were observed, consistent with the expected X ^{1Σ^{+}_{g}} symmetry of the ground state (^{24Mg} has zero nuclear spin). Molecular constants were extracted from the rovibronic bands using PGOPHER. The experimental results and interpretation of the spectrum, which was guided by the predictions of electronic structure calculation, will be presented.
Involvement of Fathers in Pediatric Obesity Treatment and Prevention Trials: A Systematic Review.
Morgan, Philip J; Young, Myles D; Lloyd, Adam B; Wang, Monica L; Eather, Narelle; Miller, Andrew; Murtagh, Elaine M; Barnes, Alyce T; Pagoto, Sherry L
2017-02-01
Despite their important influence on child health, it is assumed that fathers are less likely than mothers to participate in pediatric obesity treatment and prevention research. This review investigated the involvement of fathers in obesity treatment and prevention programs targeting children and adolescents (0-18 years). A systematic review of English, peer-reviewed articles across 7 databases. Retrieved records included at least 1 search term from 2 groups: "participants" (eg, child*, parent*) and "outcomes": (eg, obes*, diet*). Randomized controlled trials (RCTs) assessing behavioral interventions to prevent or treat obesity in pediatric samples were eligible. Parents must have "actively participated" in the study. Two authors independently extracted data using a predefined template. The search retrieved 213 eligible RCTs. Of the RCTs that limited participation to 1 parent only (n = 80), fathers represented only 6% of parents. In RCTs in which participation was open to both parents (n = 133), 92% did not report objective data on father involvement. No study characteristics moderated the level of father involvement, with fathers underrepresented across all study types. Only 4 studies (2%) suggested that a lack of fathers was a possible limitation. Two studies (1%) reported explicit attempts to increase father involvement. The review was limited to RCTs published in English peer-reviewed journals over a 10-year period. Existing pediatric obesity treatment or prevention programs with parent involvement have not engaged fathers. Innovative strategies are needed to make participation more accessible and engaging for fathers. Copyright © 2017 by the American Academy of Pediatrics.
Beyond body experiences: phantom limbs, pain and the locus of sensation.
Wade, Nicholas J
2009-02-01
Reports of perceptual experiences are found throughout history. However, the phenomena considered worthy of note have not been those that nurture our survival (the veridical features of perception) but the oddities or departures from the common and commonplace accuracies of perception. Some oddities (like afterimages) could be experienced by everyone, whereas others were idiosyncratic. Such phenomena were often given a paranormal interpretation before they were absorbed into the normal science of the day. This sequence is examined historically in the context of beyond body experiences or phantom limbs. The experience of sensations in lost body parts provides an example of the ways in which novel phenomena can be interpreted. The first phase of description probably occurred in medieval texts and was often associated with accounts of miraculous reconnection. Ambroise Paré (1510-1590) initiated medical interest in this intriguing aspect of perception, partly because more of his patients survived the trauma of surgery. Description is followed by attempts to incorporate the phenomenon into the body of extant theory. René Descartes (1596-1650) integrated sensations in amputated limbs into his dualist theory of mind, and used the phenomenon to support the unity of the mind in comparison to the fragmented nature of bodily sensations. Others, like William Porterfield (ca. 1696-1771), did not consider the phenomenon as illusory and interpreted it in terms of other projective features of perception. Finally, the phenomenon is accepted and utilized to gain more insights into the functioning of the senses and the brain. The principal features of phantom limbs were well known before they were given that name in the 19th century. Despite the puzzles they still pose, these phantoms continue to provide perception with some potent concepts: the association with theories of pain has loosened the link with peripheral stimulation and emphasis on the phenomenal dimension has slackened the grip of stimulus-based theories of perception. The pattern of development in theories of phantom limbs might provide a model for examining out-of-body experiences (OBEs).
Molecular mechanisms of hydrogen loaded B-hydroquinone clathrate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daschbach, John L.; Chang, Tsun-Mei; Corrales, Louis R.
2006-09-07
Molecular dynamics simulations are used to investigate the molecular interactions of hydrogen loaded beta-hydroquinone clathrate. It is found that at lower temperatures, higher loadings are more stable, whereas, at higher temperatures, lower loadings are more stable. This trend can be understood based on the interactions in the system. For loadings greater than one, the repulsive forces between the guest molecules shove each other towards the attractive forces between the guest and host molecules leading to a stabilized minimum energy configuration at low temperatures. At higher temperatures greater displacements take the system away from the shallow energy minimum and the trendmore » reverses. The asymmetries of the clathrate cage structure are due to the presence of the attractive forces at loadings greater than one that lead to confined states. The nature of the cavity structure is nearly spherical for a loading of one, leads to preferential occupation near the hydroxyl ring crowns of the cavity with a loading of two, and at higher loadings, leads to occupation of the interstitial sites (the hydroxyl rings) between cages by a single H2 molecule with the remaining molecules occupying the equatorial plane of the cavity. At higher temperatures, the cavity is more uniformly occupied for all loadings, where the occupation of the interstitial positions of the cavities leads to facile diffusion. ACKNOWLEDGEMENT This work was partially supported by NIDO (Japan), LDRD (PNNL), EERE U.S. Department of Energy, and by OBES, U.S. DOE. The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy« less
Food addiction prevalence and concurrent validity in African American adolescents with obesity.
Schulte, Erica M; Jacques-Tiura, Angela J; Gearhardt, Ashley N; Naar, Sylvie
2018-03-01
Food addiction, measured by the Yale Food Addiction Scale (YFAS), has been associated with obesity, eating-related problems (e.g., bingeing), and problematic consumption of highly processed foods. Studies on this topic have primarily examined adult samples with an overrepresentation of White individuals, and little is known about addictive-like eating in adolescents, particularly African American adolescents who exhibit high rates of obesity and eating pathology. The current study examined the prevalence of food addiction and its convergent validity with percent overweight, eating-related problems, and self-reported dietary intake in a sample of 181 African American adolescents with obesity. Approximately 10% of participants met for food addiction, measured by the YFAS for children (YFAS-C). YFAS-C scores were most strongly associated with objective binge episodes (OBE), though significant relationships were also observed with objective overeating episodes (OOE), percent overweight relative to age- and sex-adjusted body mass index (BMI), and, more modestly, subjective binge episodes (SBE). YFAS-C scores were also related to greater consumption of all nutrient characteristics of interest (calories, fat, saturated fat, trans fat, carbohydrates, sugar, added sugar), though most strongly with trans fat, a type of fat found most frequently in highly processed foods. These findings suggest that the combination of exhibiting a loss of control while consuming an objectively large amount of food seems to be most implicated in food addiction for African American adolescents with obesity. The present work also provides evidence that individuals with food addiction may consume elevated quantities of highly processed foods, relative to those without addictive-like eating. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Hugh Alistair Reid OBE MD: investigation and treatment of snake bite.
Hawgood, B J
1998-03-01
Alistair Reid was an outstanding clinician, epidemiologist and scientist. At the Penang General Hospital, Malaya, his careful observation of sea snake poisoning revealed that sea snake venoms were myotoxic in man leading to generalized rhabdomyolysis, and were not neurotoxic as observed in animals. In 1961, Reid founded and became the first Honorary Director of the Penang Institute of Snake and Venom Research. Effective treatment of sea snake poisoning required specific antivenom which was produced at the Commonwealth Serum Laboratories in Melbourne from Enhydrina schistosa venom supplied by the Institute. From the low frequency of envenoming following bites, Reid concluded that snakes on the defensive when biting man seldom injected much venom. He provided clinical guidelines to assess the degree of envenoming, and the correct dose of specific antivenom to be used in the treatment of snake bite in Malaya. Reid demonstrated that the non-clotting blood of patients bitten by the pit viper, Calloselasma rhodostoma [Ancistrodon rhodostoma] was due to venom-induced defibrination. From his clinical experience of these patients, Reid suggested that a defibrinating derivative of C. rhodostoma venom might have a useful role in the treatment of deep vein thrombosis. This led to Arvin (ancrod) being used clinically from 1968. After leaving Malaya in 1964, Alistair Reid joined the staff of the Liverpool School of Tropical Medicine, as Senior Lecturer. Enzyme-linked immunosorbent assay (ELISA) for detecting and quantifying snake venom and venom-antibody was developed at the Liverpool Venom Research Unit: this proved useful in the diagnosis of snake bite, in epidemiological studies of envenoming patterns, and in screening of antivenom potency. In 1977, Dr H. Alistair Reid became Head of the WHO Collaborative Centre for the Control of Antivenoms based at Liverpool.
NASA Astrophysics Data System (ADS)
Lavielle, B.; Nishiizumi, K.; Marti, K.; Jeannot, J.-P.; Caffee, M. W.; Finkel, R. C.
1995-09-01
We report measurements of 1OBe7 26AI, 36CI, and of light noble gases in 6 samples of the type IIB Old Woman iron meteorite. The aim of this work is to study the depth dependence of the production rates of cosmogenic nuclides in iron meteorites. Old Woman is a large single mass of 2753 kg. Five samples have been taken from a slice of about 100 cm x 50 cm. One other sample was located roughly 40 cm above the center of the slice in a perpendicular direction. The distances between any two samples vary from 36.5 cm to 57.5 cm. Studies of cosmogenic nuclides in samples of known locations are very useful for the validation of models describing the production of cosmogenic nuclides in meteorites. Cosmogenic radionuclides were measured by accelerator mass spectrometry at Lawrence Livermore National Laboratory. Partial results have been reported earlier [1]. Concentrations of 4He, 21Ne and 38Ar in aliquots of the samples were determined by conventional mass spectrometry using an isotopic dilution method. The ratio 3He/4He appears to be almost constant with a value of 0.12 - ().13. This is about half the value generally observed in iron meteorites. Similar low ratios have been previously observed in some irons and in chondritic metal and reflect diffusion losses of 3H 12,31. The ratios 4He/38Ar, 4He/21Ne and 36Ar/38Ar are similar to those observed in iron meteorites indicating no significant losses of 4He. The measured ratio S = 4He/21Ne which represents one of the best indicators of shielding depth in iron meteorites, varies from 310 to 375 in samples from the slice. By using this as a shielding parameter, profiles were obtained for the different nuclides investigated in this work. Systematic decreases from the surface to the center of the meteorite are observed and the center of the meteoroid can be determined. As expected from nuclear systematics, the ratio 36Cl/36Ar is almost constant. The ratio 36Cl/10Be is relatively constant with a mean value of 4.7 indicating that the terrestrial age of Old Woman is probably less than 50,000 years. References: [1] Nishiizumi K. et al (1991) Meteoritics, 26, 379-380. [2] Schultz L. (1967) EPSL, 2, 87-89. [3] Graf T. et al., this volume.
NASA Astrophysics Data System (ADS)
Hamid, Nasri A.; Mujaini, Madihah; Mohamed, Abdul Aziz
2017-01-01
The Center for Nuclear Energy (CNE), College of Engineering, Universiti Tenaga Nasional (UNITEN) has a great responsibility to undertake educational activities that promote developing human capital in the area of nuclear engineering and technology. Developing human capital in nuclear through education programs is necessary to support the implementation of nuclear power projects in Malaysia in the near future. In addition, the educational program must also meet the nuclear power industry needs and requirements. In developing a certain curriculum, the contents must comply with the university's Outcomes Based Education (OBE) philosophy. One of the important courses in the nuclear curriculum is in the area of nuclear security. Basically the nuclear security course covers the current issues of law, politics, military strategy, and technology with regard to weapons of mass destruction and related topics in international security, and review legal regulations and political relationship that determine the state of nuclear security at the moment. In addition, the course looks into all aspects of the nuclear safeguards, builds basic knowledge and understanding of nuclear non-proliferation, nuclear forensics and nuclear safeguards in general. The course also discusses tools used to combat nuclear proliferation such as treaties, institutions, multilateral arrangements and technology controls. In this paper, we elaborate the development of undergraduate nuclear security course at the College of Engineering, Universiti Tenaga Nasional. Since the course is categorized as mechanical engineering subject, it must be developed in tandem with the program educational objectives (PEO) of the Bachelor of Mechanical Engineering program. The course outcomes (CO) and transferrable skills are also identified. Furthermore, in aligning the CO with program outcomes (PO), the PO elements need to be emphasized through the CO-PO mapping. As such, all assessments and distribution of Bloom Taxonomy levels are assigned in accordance with the CO-PO mapping. Finally, the course has to fulfill the International Engineering Alliance (IEA) Graduate Attributes of the Washington Accord.
Risstad, Hilde; Kristinsson, Jon A; Fagerland, Morten W; le Roux, Carel W; Birkeland, Kåre I; Gulseth, Hanne L; Thorsby, Per M; Vincent, Royce P; Engström, My; Olbers, Torsten; Mala, Tom
2017-09-01
Bile acids have been proposed as key mediators of the metabolic effects after bariatric surgery. Currently no reports on bile acid profiles after duodenal switch exist, and long-term data after gastric bypass are lacking. To investigate bile acid profiles up to 5 years after Roux-en-Y gastric bypass and biliopancreatic diversion with duodenal switch and to explore the relationship among bile acids and weight loss, lipid profile, and glucose metabolism. Two Scandinavian University Hospitals. We present data from a randomized clinical trial of 60 patients with body mass index 50-60 kg/m 2 operated with gastric bypass or duodenal switch. Repeated measurements of total and individual bile acids from fasting serum during 5 years after surgery were performed. Mean concentrations of total bile acids increased from 2.3 µmol/L (95% confidence interval [CI], -.1 to 4.7) at baseline to 5.9 µmol/L (3.5-8.3) 5 years after gastric bypass and from 1.0 µmol/L (95% CI, -1.4 to 3.5) to 9.5 µmol/L (95% CI, 7.1-11.9) after duodenal switch; mean between-group difference was -4.8 µmol/L (95% CI, -9.3 to -.3), P = .036. Mean concentrations of primary bile acids increased more after duodenal switch, whereas secondary bile acids increased proportionally across the groups. Higher levels of total bile acids at 5 years were associated with lower body mass index, greater weight loss, and lower total cholesterol. Total bile acid concentrations increased substantially over 5 years after both gastric bypass and duodenal switch, with greater increases in total and primary bile acids after duodenal switch. (Surg Obes Relat Dis 2017;0:000-000.) © 2017 American Society for Metabolic and Bariatric Surgery. All rights reserved. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Tokman, Hrisi Bahar; İskeleli, Güzin; Dalar, Zeynep Güngördü; Kangaba, Achille Aime; Demirci, Mehmet; Akay, Hatice K; Borsa, Bariş Ata; Algingil, Reyhan Çalişkan; Kocazeybek, Bekir S; Torun, Müzeyyen Mamal; Kiraz, Nuri
2014-01-01
Anaerobic bacteria play an important role in eye infections; however, there is limited epidemiologic data based on the the role of these bacteria in the etiology of keratitis and endophthalmitis. The aim of this re- search is to determine the prevalence of anaerobic bacteria in perforated corneal ulcers of patients with keratitis and endophthalmitis and to evaluate their antimicrobial susceptibilities. Corneal scrapings were taken by the ophthalmologist using sterile needles. For the isolation of anaerobic bacteria, samples were inoculated on specific media and were incubated under anaerobic conditions obtained with Anaero-Gen (Oxoid & Mitsubishi Gas Company) in anaerobic jars (Oxoid USA, Inc. Columbia, MD, USA). The molecular identification of anaerobic bacteria was performed by multiplex PCR and the susceptibilities of an- aerobic bacteria to penicillin, chloramphenicol, and clindamycin were determined with the E test (bioMerieux). 51 strains of anaerobic bacteria belonging to four different genuses were detected by multiplex PCR and only 46 strains were isolated by culture. All of them were found susceptible to chloramphenicol whereas penicillin resistance was found in 13.3% of P.anaerobius strains, clindamycin resistance was found in 34.8% of P.acnes and 13.3% of P. anaerobius strains. Additionnaly, one strain of P. granulosum was found resistant to clindamycin, one strain of B. fragilis and one strain of P.melaninogenica were found resistant to penicillin and clindamycin. Routine analyses of anaerobes in perforated corneal ulcers is inevitable and usage of appropriate molecular methods, for the detection of bacteria responsible from severe infections which might not be deter- mined by cultivation, may serve for the early decision of the appropriate treatment. Taking into account the in- creasing antimicrobial resistance of anaerobic bacteria, alternative eye specific antibiotics effective against anaer- obes are needed to achieve a successful treatment.
Design requirements for innovative homogeneous reactor, lesson learned from Fukushima accident
NASA Astrophysics Data System (ADS)
Arbie, Bakri; Pinem, Suryan; Sembiring, Tagor; Subki, Iyos
2012-06-01
The Fukushima disaster is the largest nuclear accident since the 1986 Chernobyl disaster, but it is more complex as multiple reactors and spent fuel pools are involved. The severity of the nuclear accident is rated 7 in the International Nuclear Events Scale. Expert said that "Fukushima is the biggest industrial catastrophe in the history of mankind". According to Mitsuru Obe, in The Wall Street Journal, May 16th of 2011, TEPCO estimates the nuclear fuel was exposed to the air less than five hours after the earthquake struck. Fuel rods melted away rapidly as the temperatures inside the core reached 2800 C within six hours. In less than 16 hours, the reactor core melted and dropped to the bottom of the pressure vessel. The information should be evaluated in detail. In Germany several nuclear power plant were shutdown, Italy postponed it's nuclear power program and China reviewed their nuclear power program. Different news come from Britain, in October 11, 2011, the Safety Committee said all clear for nuclear power in Britain, because there are no risk of strong earthquake and tsunami in the region. Due to this severe fact, many nuclear scientists and engineer from all over the world are looking for a new approach, such as homogeneous reactor which was developed in Oak Ridge National Laboratory in 1960-ies, during Dr. Alvin Weinberg tenure as the Director of ORNL. The paper will describe the design requirement that will be used as the basis for innovative homogeneous reactor. Innovative Homogeneous Reactor is expected to reduce core melt by two decades (4), since the fuel is intermix homogeneously with coolant and secondly we eliminate the used fuel rod which need to be cooled for a long period of time. In order to be successful for its implementation of the innovative system, testing and validation, three phases of development will be introduced. The first phase is Low Level Goals is really the proof of concept;the Medium Level Goal is Technical Goalsand the High Level Goals which is Business Goals.
Chouillard, Elie; Younan, Antoine; Alkandari, Mubarak; Daher, Ronald; Dejonghe, Bernard; Alsabah, Salman; Biagini, Jean
2016-10-01
Sleeve gastrectomy (SG) is currently the most commonly performed bariatric procedure in France. It achieves both adequate excess weight loss and significant reduction in comorbidities. However, fistula is still the most common complication after SG, occurring in more than 3 % of cases, even in specialized centers (Gagner and Buchwald in Surg Obes Relat Dis 10:713-723. doi: 10.1016/j.soard.2014.01.016 , 2014). Its management is not standardized, long, and challenging. We have already reported the short-term results of Roux-en-Y fistulo-jejunostomy (RYFJ) as a salvage procedure in patients with post-SG fistula (Chouillard et al. in Surg Endosc 28:1954-1960 doi: 10.1007/s00464-014-3424-y , 2014). In this study, we analyzed the mid-term results of the RYFJ emphasizing its endoscopic, radiologic, and safety outcome. Between January 2007 and December 2013, we treated 75 patients with post-SG fistula, mainly referred from other centers. Immediate management principles included computerized tomography (CT) scan-guided drainage of collections or surgical peritoneal lavage, nutritional support, and endoscopic stenting. Ultimately, this approach achieved fistula control in nearly two-thirds of the patients. In the remaining third, RYFJ was proposed, eventually leading to fistula control in all cases. The mid-term results (i.e., more than 1 year after surgery) were assessed using anamnesis, clinical evaluation, biology tests, upper digestive tract endoscopy, and IV-enhanced CT scan with contrast upper series. Thirty patients (22 women and 8 men) had RYFJ for post-SG fistula. Mean age was 40 years (range 22-59). Procedures were performed laparoscopically in all but 3 cases (90 %). Three patients (10 %) were lost to follow-up. Mean follow-up period was 22 months (18-90). Mean body mass index (BMI) was 27.4 kg/m(2) (22-41). Endoscopic and radiologic assessment revealed no persistent fistula and no residual collections. Despite the lack of long-term follow-up, RYFJ could be a safe and feasible salvage option for the treatment of patients with post-SG fistula, especially those who failed conservative management. Mid-term outcome analysis confirms that fistula control is durable. Weight loss panel is satisfactory.
Queensland 2010-2011: A Summer of Extremes
NASA Astrophysics Data System (ADS)
Maroulis, J.
2012-04-01
"I love a sunburnt country, A land of sweeping plains, Of ragged mountain ranges, Of droughts and flooding rains. I love her far horizons, I love her jewel-sea, Her beauty and her terror, The wide brown land for me." (Dorothea Mackellar OBE, 1885-1968). This second stanza from Mackellar's famous poem "My Country", beautifully sums up the Australian environment. In late 2010-early 2011, the "droughts and flooding rains" were the perfect terms to describe the climatic variability and the resulting flooding impacts experienced in many parts of Queensland under an enhanced La Niña as part of the ENSO (El Niño-Southern Oscillation) climate pattern, with over 75% of Queensland being declared a disaster zone. This contrasts with the severe drought that had gripped many parts of Australia over the previous 8 years which saw water storage levels plummet, and resulted in 35% of Queensland being 'drought declared' as at April 2010. On the Darling Downs in southern Queensland, over 100,000 ha of land was inundated by the Condamine River due to flooding in early 2011. The river which is generally <100 m wide was seven kilometres wide at the widest point during the floods. However, the erosive impacts of the floods on largely tilled floodplains was relatively low with most erosion confined to the bed and banks of the river. In Grantham, where 13 lives were lost, flooding was especially hazardous because of the combined depth and velocity of floodwaters and the rapid rise of floodwaters across the floodplain. Floodwaters were ~2.0-2.5 m deep across the northern parts of the floodplain with a maximum velocity of ~2-3 m/s. The rate of rise was estimated at ~12 m/hour, indicating that it would have taken only 10-15 minutes to rise to full depth. However, despite detailed river and flood gauging in the more urbanised catchments such as the Brisbane River valley, this is by far the exception rather than the rule throughout mainland Australia. The Queensland floods highlight the pressing and urgent need for an accurate and more intensive network of river gauging and sediment monitoring. In a country of "droughts and flooding rains" and in the face of climate change, this need is now imperative.
Iranzo, Olga; Chakraborty, Saumen; Hemmingsen, Lars; Pecoraro, Vincent L
2011-01-19
Herein we report how de novo designed peptides can be used to investigate whether the position of a metal site along a linear sequence that folds into a three-stranded α-helical coiled coil defines the physical properties of Cd(II) ions in either CdS(3) or CdS(3)O (O-being an exogenous water molecule) coordination environments. Peptides are presented that bind Cd(II) into two identical coordination sites that are located at different topological positions at the interior of these constructs. The peptide GRANDL16PenL19IL23PenL26I binds two Cd(II) as trigonal planar 3-coordinate CdS(3) structures whereas GRANDL12AL16CL26AL30C sequesters two Cd(II) as pseudotetrahedral 4-coordinate CdS(3)O structures. We demonstrate how for the first peptide, having a more rigid structure, the location of the identical binding sites along the linear sequence does not affect the physical properties of the two bound Cd(II). However, the sites are not completely independent as Cd(II) bound to one of the sites ((113)Cd NMR chemical shift of 681 ppm) is perturbed by the metalation state (apo or [Cd(pep)(Hpep)(2)](+) or [Cd(pep)(3)](-)) of the second center ((113)Cd NMR chemical shift of 686 ppm). GRANDL12AL16CL26AL30C shows a completely different behavior. The physical properties of the two bound Cd(II) ions indeed depend on the position of the metal center, having pK(a2) values for the equilibrium [Cd(pep)(Hpep)(2)](+) → [Cd(pep)(3)](-) + 2H(+) (corresponding to deprotonation and coordination of cysteine thiols) that range from 9.9 to 13.9. In addition, the L26AL30C site shows dynamic behavior, which is not observed for the L12AL16C site. These results indicate that for these systems one cannot simply assign a "4-coordinate structure" and assume certain physical properties for that site since important factors such as packing of the adjacent Leu, size of the intended cavity (endo vs exo) and location of the metal site play crucial roles in determining the final properties of the bound Cd(II).
Etude de la variabilite des etoiles massives a l'aide de la photometrie et la spectroscopie
NASA Astrophysics Data System (ADS)
Lefevre, Laure
Les étoiles Wolf-Rayet (WR) de population I sont les descendants évolués des étoiles massives de type O. Elles présentent de larges raies en émission produites par des atomes ionisés qui forment le vent stellaire chaud, den se et rapide. Il y a deux classes principales d'étoiles WR: les WN, ou les raies de l'azote dominent, et les WC (WO) ou les raies du carbone (de l'oxygène) dominent. Des observations récentes ont révelé que les vents sont "fragmentés" à petite et grande échelle, ce qui pourrait être relié en partie à des petites surdensités situées dans le vent en expansion. Cette thèse de doctorat présente la détection, l'analyse et l'interprétation, avec des outils statistiques avancés, de la variabilité dans les courbes de lumière, les vitesses radiales, et les spectres de deux étoiles WR caractéristiques (WR137 et WR123) et dans les étoiles OB observées par le satellite HIPPARCOS. Une campagne de spectroscopie intensive a été réalisée en 1999-2000 pour améliorer notre connaissance des composantes orbitales de WR137 (WC7pd+O9) et étudier les vents des WR et les conditions de formation des poussières dans de tels milieux. Le premier volet de cette thèse a permis de déduire de ces observations une orbite spectroscopique d'environ 13 ans qui confirme et précise les précédents résultats. Grace à l'analyse des spectres de cette campagne, ce travail a mis à jour une deuxième période d'oscillation de très faible amplitude de 0.83 j dans les spectres de WR137. Celle-ci pourrait être reliée aux pulsations ou à de grandes structures qui tourneraient dans le vent comme dans l'étoile EZ CMa (WR6). Une analyse en ondelettes a également permis d'isoler et de suivre pendant plusieurs heures des structures en haut des raies de CIII et CIV dans le spectre de WR137. De plus, la corrélation croisée a permis de voir que les raies formées à différentes distances de l'étoile sont probablement reliées entre elles. Enfin l'analyse des surdentsités a permis de déduire un [beta] ~ 5 nettement supérieur à la valeur de [beta] [Asymptotically to] 1 que l'on trouve dans le vent des étoiles O ( loi -[beta] de vitesse dans le vent) . Le second volet de cette thèse concerne les étoiles WN, et plus particulièrement une WN8 du nom de WR123. Les WN8 se distinguent de leur congénères WR par plusieurs caractéristiques, dont le niveau de variabilité le plus élevé de leur classe. Du fait de ces particularités, les étoiles WN8, WR123 parmi elles, ont fait l'objet de nombreuses études photométriques et spectroscopiques. Cependant l'extrême complexité des variations, combinée avec une couverture temporelle souvent inadaptée a conduit à une longue série de résultats ambigus. Avec les données exceptionnelles collectées par le premier satellite astronomique canadien MOST, ce mémoire de thèse est maintenant en mesure de répondre a une grande partie des interrogations posées. L'étoile WR123 a été observée avec MOST en mode direct toutes les 30s pendant 38 jours en juin-juillet 2004. L'analyse de Fourier montre qu'aucun signal n'est stable pour plus de quelques jours dans le domaine des basses fréquences et qu'a ucune variabilité significative n'est présente dans le domaine des hautes fréquences jusqu'à un niveau de 0.2 mmag, un ordre de magnitude plus bas que les prédictions pour les pulsations à modes étranges. Par contre, i1 semble y avoir une période de 9.8 heures présente pendant toute la durée des observations. Cette période, probablement reliée à des pulsations, pourrait permettre de mieux comprendre ce qui se passe dans les étoiles WR et leurs vents. Le troisième volet de cette thèse consiste en une analyse des différents types de variabilité des étoiles OB du catalogue HIPPARCOS. Un échantillon non biaisé de 2497 étoiles a donc été sélectionné et analysé. Il apparait que le seuil de variabilité à 99.9% établi par le consortium HIPPARCOS n'est pas représentatif de cet échantillon. Il a donc été recalculé et les étoiles (classées dans 4 catégories principales de variabilité intrinsèque ou extrinsèque) "variables" ont été réanalysées lors de cette thèse. Ce travail a permis de confirmer des résultats obtenus à partir d'échantillons trop restreints sur les super-géantes OB, de confirmer que les étoiles OBe sont très variables ([approximate] 80%) et de soulever plusieurs questions intéressantes sur les étoiles OB de la séquence principale qui sont moins variables en moyenne. On note également plus de systèmes en contact que détachés parmi les OBMS et OBe et un nombre ~ identique parmi les OBSC.
NASA Astrophysics Data System (ADS)
Paul, Jagannath
Advent of ultrashort lasers made it possible to probe various scattering phenomena in materials that occur in a time scale on the order of few femtoseconds to several tens of picoseconds. Nonlinear optical spectroscopy techniques, such as pump-probe, transient four wave mixing (TFWM), etc., are very common to study the carrier dynamics in various material systems. In time domain, the transient FWM uses several ultrashort pulses separated by time delays to obtain the information of dephasing and population relaxation times, which are very important parameters that govern the carrier dynamics of materials. A recently developed multidimensional nonlinear optical spectroscopy is an enhanced version of TFWM which keeps track of two time delays simultaneously and correlate them in the frequency domain with the aid of Fourier transform in a two dimensional map. Using this technique, the nonlinear complex signal field is characterized both in amplitude and phase. Furthermore, this technique allows us to identify the coupling between resonances which are rather difficult to interpret from time domain measurements. This work focuses on the study of the coherent response of a two dimensional electron gas formed in a modulation doped GaAs/AlGaAs quantum well both at zero and at high magnetic fields. In modulation doped quantum wells, the excitons are formed as a result of the inter- actions of the charged holes with the electrons at the Fermi edge in the conduction band, leading to the formation of Mahan excitons, which is also referred to as Fermi edge singularity (FES). Polarization and temperature dependent rephasing 2DFT spectra in combination with TI-FWM measurements, provides insight into the dephasing mechanism of the heavy hole (HH) Mahan exciton. In addition to that strong quantum coherence between the HH and LH Mahan excitons is observed, which is rather surprising at this high doping concentration. The binding energy of Mahan excitons is expected to be greatly reduced and any quantum coherence be destroyed as a result of the screening and electron-electron interactions. Such correlations are revealed by the dominating cross-diagonal peaks in both one-quantum and two-quantum 2DFT spectra. Theoretical simulations based on the optical Bloch Equations (OBE) where many-body effects are included phenomenologically, corroborate the experimental results. Time-dependent density functional theory (TD-DFT) calculations provide insight into the underlying physics and attribute the observed strong quantum coherence to a significantly reduced screening length and collective excitations of the many-electron system. Furthermore, in semiconductors under the application of magnetic field, the energy states in conduction and valence bands become quantized and Landau levels are formed. We observe optical excitation originating from different Landau levels in the absorption spectra in an undoped and a modulation doped quantum wells. 2DFT measurements in magnetic field up to 25 Tesla have been performed and the spectra reveal distinct difference in the line shapes in the two samples. In addition, strong coherent coupling between landau levels is observed in the undoped sample. In order to gain deeper understanding of the observations, the experimental results are further supported with TD-DFT calculation.
Septotomy and Balloon Dilation to Treat Chronic Leak After Sleeve Gastrectomy: Technical Principles.
Campos, Josemberg Marins; Ferreira, Flávio Coelho; Teixeira, André F; Lima, Jones Silva; Moon, Rena C; D'Assunção, Marco Aurélio; Neto, Manoel Galvão
2016-08-01
Chronic leaks after laparoscopic sleeve gastrectomy (LSG) are often difficult to treat by endoscopy metallic stent. Septotomy has been indicated as an effective procedure, but the technical aspects have not been detailed in previous publications (Campos JM, Siqueira LT, Ferraz AA, et al., J Am Coll Surg 204(4):711, 2007; Baretta G, Campos J, Correia S, et al., Surg Endosc 29(7):1714-20, 2015; Campos JM, Pereira EF, Evangelista LF, et al., Obes Surg 21(10):1520-9, 2011). We herein present a video (6 min) demonstrating the maneuver principles of this technique, showing it as a safe and feasible approach. A 32-year-old male, with BMI 43.4 kg/m(2), underwent LSG. On the tenth POD, he presented with a leak and initially was managed with the following approach: laparoscopic exploration, drainage, endoclips, and 20-mm balloon dilation. However, the leak remained for a period of 6 months. On the endoscopy, a septum was identified between the leak site and gastric pouch, so it was decided to "reshape" this area by septotomy. Septotomy procedure: Sequential incisions were performed using argon plasma coagulation (APC) with 2.5 flow and 50 W (WEM, SP, Brazil) over the septum in order to allow communication between the perigastric cavity (leak site) and the gastric lumen. The principles below must be followed: (1) Scope position: the endoscopist's left hand holds the control body of the gastroscope while the right hand holds the insertion tube; the APC catheter has no need to be fixed. This avoids movements and unprogrammed maneuvers. (2) Before cutting, the septum is placed in the six o'clock position on the endoscopic view, by rotating the gastroscope. (3) The septum is sectioned until the bottom of the perigastric cavity (leak site). (4) That section is made towards the staple line. (5) Just after the septotomy, a Savory-Gilliard guidewire (Cook Medical, Indiana, USA) through the scope must be inserted until the duodenum, followed by 30-mm balloon (Rigiflex®, Boston Scientific, MA, USA) insertion. The balloon catheter must be firmly held during gradual inflation (maximum 10 psi) to avoid slippage and laceration. This allows increasing the gastric lumen. (6) Septotomy by electrocautery with a needle knife (Boston Scientific, MA, USA) can be made when an intensive fibrotic septum is present; bleeding is rare in this case. In this case, the endoclip previously used was removed from the septum with forceps to avoid heat transmission. Small staples visualized in the fistula orifice were not completely removed due to technical difficulties and friable tissue. Two sessions were performed in 15 days, resulting in leak closure. The patient was submitted to radiological control 1 week after the second session, which revealed fistula healing, without gastric stenosis. The nasoduodenal feeding tube remained for 7 days, when the patient started oral diet. This patient was followed for 18 months without recurrence. Septotomy and balloon dilation were initially performed on a difficult-to-treat chronic fistula after gastric bypass and named before as stricturotomy (Campos JM, Siqueira LT, Ferraz AA, et al., J Am Coll Surg 204(4):711, 2007). This procedure allows internal drainage of the fistula and deviates oral intake to the pouch. In addition, achalasia balloon dilation treats strictures and axis deviation of the gastric chamber, promoting reduction of the intragastric pressure. Septotomy and balloon dilation are technically feasible and might be useful in selected cases for closure of chronic leaks after LSG.
PREFACE: 27th Summer School and International Symposium on the Physics of Ionized Gases (SPIG 2014)
NASA Astrophysics Data System (ADS)
Marić, Dragana; Milosavljević, Aleksandar R.; Mijatović, Zoran
2014-12-01
This volume of Journal of Physics: Conference Series contains a selection of papers presented at the 27th Summer School and International Symposium on the Physics of Ionized Gases - SPIG 2014, as General Invited Lectures, Topical Invited Lectures, Progress Reports and associated Workshop Lectures. The conference was held in Belgrade, Serbia, from 26-29 August 2014 at the Serbian Academy of Sciences and Arts. It was organized by the Institute of Physics Belgrade, University of Belgrade and Serbian Academy of Sciences and Arts, under the auspices of the Ministry of Education, Science and Technological Development, Republic of Serbia. A rare virtue of a SPIG conference is that it covers a wide range of topics, bringing together leading scientists worldwide to present and discuss state-of-the art research and the most recent applications, thus stimulating a modern approach of interdisciplinary science. The Invited lectures and Contributed papers are related to the following research fields: 1. Atomic Collision Processes (Electron and Photon Interactions with Atomic Particles, Heavy Particle Collisions, Swarms and Transport Phenomena) 2. Particle and Laser Beam Interactions with Solids (Atomic Collisions in Solids, Sputtering and Deposition, Laser and Plasma Interaction with Surfaces) 3. Low Temperature Plasmas (Plasma Spectroscopy and other Diagnostic Methods, Gas Discharges, Plasma Applications and Devices) 4. General Plasmas (Fusion Plasmas, Astrophysical Plasmas and Collective Phenomena) Additionally, the 27th SPIG encompassed three workshops that are closely related to the scope of the conference: • The Workshop on Dissociative Electron Attachment (DEA) - Chaired by Prof. Nigel J Mason, OBE, The Open University, United Kingdom • The Workshop on X-ray Interaction with Biomolecules in Gas Phase (XiBiGP), Chaired by Dr. Christophe Nicolas, Synchrotron SOLEIL, France • The 3rd International Workshop on Non-Equilibrium Processes (NonEqProc) - Chaired by Prof. Zoran Lj. Petrović, Institute of Physics Belgrade, University of Belgrade, Serbia The Editors would like to thank the members of the Scientific and Advisory Committees of SPIG conference for their efforts in proposing the program of the conference and to the referees that have reviewed submitted papers, as well as the chairmen of the associated workshops for their efforts and help in organizing them and a selection of excellent invited talks. We particularly acknowledge the efforts of all the members of the Local Organizing Committee in the organization of the Conference. We are grateful to all sponsors of the conference: SOLEIL synchrotron, RoentDek Handels GmbH, Klett Publishing House Ltd, Springer (EPJD and EPJ TI), IOP Publishing (IOP Conference Series), DEA club, Austrian Cultural Forum Belgrade, Institut français de Serbie and Collegium Hungaricum Belgrade. Holding on to a long tradition is never easy and the only way to achieve that is to have a large number of people who appreciate the conference, so we would like to thank all the invited speakers and participants for taking part in the 27th SPIG conference. Editors of the issue: Dr Dragana Marić (Instutute of Physics Belgrade, University of Belgrade Dr Aleksandar R. Milosavljević (Instutute of Physics Belgrade, University of Belgrade) Prof Zoran Mijatović (Faculty of Sciences, University of Novi Sad)
Pliatskidou, S; Samakouri, M; Kalamara, E; Papageorgiou, E; Koutrouvi, K; Goulemtzakis, C; Nikolaou, E; Livaditis, M
2015-01-01
The aim of this study is to examine the validity of the Greek version of the Eating Disorder Examination Questionnaire 6.0 (EDE-Q-6.0) in a sample of adolescent pupils. EDE-Q is a self- report instrument that assesses attitudes and behaviors related to Eating Disorders (EDs). A two-stage identification protocol has been applied to the 16 schools that agreed to participate in the present study. Initially, 2058 adolescents, in class under the supervision of one research assistant and one teacher, completed a Questionnaire on socio-demographic data, the Greek EDE-Q-6.0 and the Greek Eating Attitudes Test (EAT-26) while their weight and height were measured. Six-hundred and twenty six participants, who had scores on EAT-26≥20 and/or were underweight or overweight, were considered as "possible-cases" while the remaining 1432 pupils of the sample were thought as "non-possible cases". At the second stage, parents of 66 of the participants identified as possible-cases as well as parents of 72 participants from 358 controls randomly selected from the sample of "non-possible cases" agreed that their children would be examined by means of Best Estimate Diagnostic Procedure. Participants meeting DSM-IV-TR Eating Disorders criteria were identified. Receiver Operating Characteristics (ROC) analysis was applied to reveal EDE-Q's criterion validity. The kappa statistic test was used as measure of agreement between categorical variables at EDE-Q and at interview (the presence of objective binge eating episode, of self-induced vomiting, the use of laxatives and of excessive exercise). The Discriminant and Convergent validity were assessed using the non-parametric Mann-Whitney U test and by means of the Spearman's correlation coefficient, respectively. Nineteen cases of EDs were identified [one case of Anorexia Nervosa (AN), 13 cases of Eating Disorder Not Otherwise Specified (EDNOS), 5 cases of Binge Eating Disorder (BED)]. At the cut off point of 2.6125 on the EDE-Q's global scale the instrument screens with a sensitivity (Se) of 89.5% and a specificity (Sp) of 73.1%, a Positive Predictive Value (PPV) of 34.7% and a Negative Predictive Value (NPV) of 97.8% The same analyses for both sexes revealed a cut-off point of 2.612 for females and of 3.125 for males on the global EDE-Q-6.0 score (Se=84.62%, Sp=73.33% for females and Se=83.33%, Sp= 84.09% for males), yielding a PPV and a NPV of 35.5% and of 96.5% for females and 41.7% and 97.4% for males, respectively. A very low agreement level, between EDE-Q and interview, was observed regarding the presence of objective bulimic episodes (OBEs) [k=0.191 (SE=0.057)] and the unhealthy weight control behaviors [k=0.295 (SE=0.073)]. Positive correlations were found between EAT-26 and EDE-Q-6.0 for both global scale and subscales (rho=0.50-0.57). The results suggest that EDE-Q-6.0, when using its global score, appears to be a proper screening tool for assessing the core psychopathology of eating disorders in community samples in two-stage screening studies since it distinguishes very well the cases from the non-cases. However, the assessment of the presence and frequency of pathological behaviours which characterize EDs appears to be problematic since adolescents, especially the younger ones, misunderstood terms like large amount of food and loss of control or misinterpret the motivation for excessive exercise. Therefore, marked discrepancies were observed between pathological behaviors self-reported at questionnaire and those detected at interview. We may assume that giving participants more information regarding the definition of these concepts may increase the accuracy with which the participants report these behaviors.
NASA Astrophysics Data System (ADS)
de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús
2013-04-01
Interfacing science and policy raises challenging issues when large spatial-scale (regional, continental, global) environmental problems need transdisciplinary integration within a context of modelling complexity and multiple sources of uncertainty [1]. This is characteristic of science-based support for environmental policy at European scale [1], and key aspects have also long been investigated by European Commission transnational research [2-5]. Parameters ofthe neededdata- transformations ? = {?1????m} (a.5) Wide-scale transdisciplinary modelling for environment. Approaches (either of computational science or of policy-making) suitable at a given domain-specific scale may not be appropriate for wide-scale transdisciplinary modelling for environment (WSTMe) and corresponding policy-making [6-10]. In WSTMe, the characteristic heterogeneity of available spatial information (a) and complexity of the required data-transformation modelling (D- TM) appeal for a paradigm shift in how computational science supports such peculiarly extensive integration processes. In particular, emerging wide-scale integration requirements of typical currently available domain-specific modelling strategies may include increased robustness and scalability along with enhanced transparency and reproducibility [11-15]. This challenging shift toward open data [16] and reproducible research [11] (open science) is also strongly suggested by the potential - sometimes neglected - huge impact of cascading effects of errors [1,14,17-19] within the impressively growing interconnection among domain-specific computational models and frameworks. From a computational science perspective, transdisciplinary approaches to integrated natural resources modelling and management (INRMM) [20] can exploit advanced geospatial modelling techniques with an awesome battery of free scientific software [21,22] for generating new information and knowledge from the plethora of composite data [23-26]. From the perspective of the science-policy interface, INRMM should be able to provide citizens and policy-makers with a clear, accurate understanding of the implications of the technical apparatus on collective environmental decision-making [1]. Complexity of course should not be intended as an excuse for obscurity [27-29]. Geospatial Semantic Array Programming. Concise array-based mathematical formulation and implementation (with array programming tools, see (b) ) have proved helpful in supporting and mitigating the complexity of WSTMe [40-47] when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP) [35,36] where semantic transparency also implies free software use (although black-boxes [12] - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools (c) - called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP (d) exploits the joint semantics provided by SemAP and geospatial tools to split a complex D- TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks, each of them structured as (d). GeoSemAP allows intermediate data and information layers to be more easily an formally semantically described so as to increase fault-tolerance [17], transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, often difficult to transfer from technical WSTMe to the science-policy interface [1,15]. References de Rigo, D., 2013. Behind the horizon of reproducible integrated environmental modelling at European scale: ethics and practice of scientific knowledge freedom. F1000 Research. To appear as discussion paper. Funtowicz, S. O., Ravetz, J. R., 1994. Uncertainty, complexity and post-normal science. Environmental Toxicology and Chemistry 13 (12), 1881-1885. http://dx.doi.org/10.1002/etc.5620131203 Funtowicz, S. O., Ravetz, J. R., 1994. The worth of a songbird: ecological economics as a post-normal science. Ecological Economics 10 (3), 197-207. http://dx.doi.org/10.1016/0921-8009(94)90108-2 Funtowicz, S. O., Ravetz, J. R., 2003. Funtowicz, S., Ravetz, J. (2003). Post-normal science. International Society for Ecological Economics, Internet Encyclopaedia of Ecological Economics Ravetz, J., 2004. The post-normal science of precaution. Futures 36 (3), 347-357. http://dx.doi.org/10.1016/S0016-3287(03)00160-5 van der Sluijs, J. P., 2012. Uncertainty and dissent in climate risk assessment: A Post-Normal perspective. Nature and Culture 7 (2), 174-195. http://dx.doi.org/10.3167/nc.2012.070204 Ulieru, M., Doursat, R., 2011. Emergent engineering: a radical paradigm shift. International Journal of Autonomous and Adaptive Communications Systems 4 (1), 39-60. http://dx.doi.org/10.1504/IJAACS.2011.037748 Turner, M. G., Dale, V. H., Gardner, R. H., Dec. 1989. Predicting across scales: Theory development and testing. Landscape Ecology 3 (3), 245-252. http://dx.doi.org/10.1007/BF00131542 Zhang, X., Drake, N. A., Wainwright, J., 2004. Scaling issues in environmental modelling. In: Wainwright, J., Mulligan, M. (Eds.), Environmental modelling : finding simplicity in complexity. Wiley. ISBN: 9780471496182 Bankes, S. C., 2002. Tools and techniques for developing policies for complex and uncertain systems. Proceedings of the National Academy of Sciences of the United States of America 99 (Suppl 3), 7263-7266. http://dx.doi.org/10.1073/pnas.092081399 Peng, R. D., 2011. Reproducible research in computational science. Science 334 (6060), 1226-1227. http://dx.doi.org/10.1126/science.1213847 Morin, A., Urban, J., Adams, P. D., Foster, I., Sali, A., Baker, D., Sliz, P., 2012. Shining light into black boxes. Science 336 (6078), 159-160. http://dx.doi.org/10.1126/science.1218263 Nature, 2011. Devil in the details. Nature 470 (7334), 305-306. http://dx.doi.org/10.1038/470305b Stodden, V., 2012. Reproducible research: Tools and strategies for scientific computing. Computing in Science and Engineering 14, 11-12. http://dx.doi.org/10.1109/MCSE.2012.82 de Rigo, D., Corti, P., Caudullo, G., McInerney, D., Di Leo, M., San-Miguel-Ayanz, J., (exp. 2013). Supporting Environmental Modelling and Science-Policy Interface at European Scale with Geospatial Semantic Array Programming. In prep. Molloy, J. C., 2011. The open knowledge foundation: Open data means better science. PLoS Biology 9 (12), e1001195+. http://dx.doi.org/10.1371/journal.pbio.1001195 de Rigo, D., 2013. Software Uncertainty in Integrated Environmental Modelling: the role of Semantics and Open Science. Geophysical Research Abstracts 15, EGU General Assembly 2013. Cerf, V. G., 2012. Where is the science in computer science? Commun. ACM 55 (10), 5. http://dx.doi.org/10.1145/2347736.2347737 Wilson, G., 2006. Where's the real bottleneck in scientific computing? American Scientist 94 (1), 5+. http://dx.doi.org/10.1511/2006.1.5 de Rigo, D. 2012. Integrated Natural Resources Modelling and Management: minimal redefinition of a known challenge for environmental modelling. Excerpt from the Call for a shared research agenda toward scientific knowledge freedom, Maieutike Research Initiative. http://www.citeulike.org/groupfunc/15400/home Stallman, R. M., 2005. Free community science and the free development of science. PLoS Med 2 (2), e47+. http://dx.doi.org/10.1371/journal.pmed.0020047 Stallman, R. M., 2009. Viewpoint: Why "open source" misses the point of free software. Communications of the ACM 52 (6), 31-33. http://dx.doi.org/10.1145/1516046.1516058 (free access version: http://www.gnu.org/philosophy/open-source-misses-the-point.html ) Rodriguez Aseretto, D., Di Leo, M., de Rigo, D., Corti, P., McInerney, D., Camia, A., San Miguel-Ayanz, J., 2013. Free and Open Source Software underpinning the European Forest Data Centre. Geophysical Research Abstracts 15, EGU General Assembly 2013. Giovando, C., Whitmore, C., Camia, A., San-Miguel-Ayanz, J., 2010. Enhancing the European Forest Fire Information System (EFFIS) with open source software. In: FOSS4G 2010. http://2010.foss4g.org/presentations_show.php?id=3693 Corti, P., San-Miguel-Ayanz, J., Camia, A., McInerney, D., Boca, R., Di Leo, M., 2012. Fire news management in the context of the European Forest Fire Information System (EFFIS). In: proceedings of "Quinta conferenza italiana sul software geografico e sui dati geografici liberi" (GFOSS DAY 2012). http://files.figshare.com/229492/Fire_news_management_in_the_context_of_EFFIS.pdf McInerney, D., Bastin, L., Diaz, L., Figueiredo, C., Barredo, J. I., San-Miguel-Ayanz, J., 2012. Developing a forest data portal to support Multi-Scale decision making. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 5 (6), 1-8. http://dx.doi.org/10.1109/JSTARS.2012.2194136 Morin, A., Urban, J., Adams, P. D., Foster, I., Sali, A., Baker, D., Sliz, P., (2012). Shining light into black boxes. Science 336 (6078), 159-160. http://dx.doi.org/10.1126/science.1218263 Stodden, V., 2011. Trust your science? Open your data and code. Amstat News July 2011, 21-22. http://www.stanford.edu/ vcs/papers/TrustYourScience-STODDEN.pdf van der Sluijs, J., 2005. Uncertainty as a monster in the science-policy interface: four coping strategies. Water Science & Technology 52 (6), 87-92. http://www.iwaponline.com/wst/05206/wst052060087.htm Iverson, K. E., 1980. Notation as a tool of thought. Communications of the ACM 23 (8), 444-465. http://awards.acm.org/images/awards/140/articles/9147499.pdf Eaton, J. W., Bateman, D., Hauberg, S., 2008. GNU Octave: a high-level interactive language for numerical computations. Network Theory. ISBN: 9780954612061 Eaton, J. W., 2012. GNU octave and reproducible research. Journal of Process Control 22 (8), 1433-1438. http://dx.doi.org/10.1016/j.jprocont.2012.04.006 R Development Core Team, 2011. The R reference manual. Network Theory Ltd. Vol. 1, ISBN: 978-1-906966-09-6. Vol. 2, ISBN: 978-1-906966-10-2. Vol. 3, ISBN: 978-1-906966-11-9. Vol. 4, ISBN: 978-1-906966-12-6. Ramey, C., Fox, B., 2006. Bash reference manual : reference documentation for Bash edition 2.5b, for Bash version 2.05b. Network Theory Limited. ISBN: 978-0-9541617-7-4. de Rigo, D., 2012. Semantic array programming for environmental modelling: Application of the mastrave library. In: Seppelt, R., Voinov, A. A., Lange, S., Bankamp, D. (Eds.), International Environmental Modelling and Software Society (iEMSs) 2012 International Congress on Environmental Modelling and Software. Managing Resources of a Limited Planet: Pathways and Visions under Uncertainty, Sixth Biennial Meeting. pp. 1167-1176. http://www.iemss.org/iemss2012/proceedings/D3_1_0715_deRigo.pdf de Rigo, D., 2012. Semantic Array Programming with Mastrave - Introduction to Semantic Computational Modelling. http://mastrave.org/doc/MTV-1.012-1.htm Van Rossum, G., Drake, F.J., 2011. Python Language Ref. Manual, Network Theory Ltd. ISBN: 0954161785. http://www.network-theory.co.uk/docs/pylang/ The Scipy community, 2012. NumPy Reference Guide. SciPy.org. http://docs.scipy.org/doc/numpy/reference/ The Scipy community, 2012. SciPy Reference Guide. SciPy.org. http://docs.scipy.org/doc/scipy/reference/ de Rigo, D., Castelletti, A., Rizzoli, A. E., Soncini-Sessa, R., Weber, E., Jul. 2005. A selective improvement technique for fastening neuro-dynamic programming in water resources network management. In: Zítek, P. (Ed.), Proceedings of the 16th IFAC World Congress. Vol. 16. International Federation of Automatic Control (IFAC), pp. 7-12. http://dx.doi.org/10.3182/20050703-6-CZ-1902.02172 de Rigo, D., Bosco, C., 2011. Architecture of a Pan-European Framework for Integrated Soil Water Erosion Assessment. Vol. 359 of IFIP Advances in Information and Communication Technology. Springer Boston, Berlin, Heidelberg, Ch. 34, pp. 310-318. http://dx.doi.org/10.1007/978-3-642-22285-6_34 San-Miguel-Ayanz, J., Schulte, E., Schmuck, G., Camia, A., Strobl, P., Liberta, G., Giovando, C., Boca, R., Sedano, F., Kempeneers, P., McInerney, D., Withmore, C., de Oliveira, S. S., Rodrigues, M., Durrant, T., Corti, P., Oehler, F., Vilar, L., Amatulli, G., Mar. 2012. Comprehensive monitoring of wildfires in Europe: The European Forest Fire Information System (EFFIS). In: Tiefenbacher, J. (Ed.), Approaches to Managing Disaster - Assessing Hazards, Emergencies and Disaster Impacts. InTech, Ch. 5. http://dx.doi.org/10.5772/28441 de Rigo, D., Caudullo, G., San-Miguel-Ayanz, J., Stancanelli, G., 2012. Mapping European forest tree species distribution to support pest risk assessment. In: Baker, R., Koch, F., Kriticos, D., Rafoss, T., Venette, R., van der Werf, W. (Eds.), Advancing risk assessment models for invasive alien species in the food chain: contending with climate change, economics and uncertainty. Bioforsk FOKUS 7. OECD Co-operative Research Programme on Biological Resource Management for Sustainable Agricultural Systems; Bioforsk - Norwegian Institute for Agricultural and Environmental Research. http://www.pestrisk.org/2012/BioforskFOKUS7-10_IPRMW-VI.pdf Estreguil, C., Caudullo, G., de Rigo, D., Whitmore, C., San-Miguel-Ayanz, J., 2012. Reporting on European forest fragmentation: Standardized indices and web map services. IEEE Earthzine. http://www.earthzine.org/2012/07/05/reporting-on-european-forest-fragmentation-standardized-indices-and-web-map-services/ Estreguil, C., de Rigo, D. and Caudullo, G. (exp. 2013). Towards an integrated and reproducible characterisation of habitat pattern. Submitted to Environmental Modelling & Software Amatulli, G., Camia, A., San-Miguel-Ayanz, J., 2009. Projecting future burnt area in the EU-mediterranean countries under IPCC SRES A2/B2 climate change scenarios (JRC55149), 33-38 de Rigo, D., Caudullo, G., Amatulli, G., Strobl, P., San-Miguel-Ayanz, J. (exp. 2013). Modelling tree species distribution in Europe with constrained spatial multi-frequency analysis. In prep. GRASS Development Team, 2012. Geographic Resources Analysis Support System (GRASS) Software. Open Source Geospatial Foundation. http://grass.osgeo.org http://www.spatial-ecology.net/dokuwiki/doku.php?id=wiki:firemod Neteler, M., Bowman, M. H., Landa, M., Metz, M., 2012. GRASS GIS: A multi-purpose open source GIS. Environmental Modelling & Software 31, 124-130. http://dx.doi.org/10.1016/j.envsoft.2011.11.014 Neteler, M., Mitasova, H., 2008. Open source GIS a GRASS GIS approach. ISBN: 978-0-387-35767-6 Warmerdam, F., 2008. The geospatial data abstraction library. In: Hall, G. B., Leahy, M. G. (Eds.), Open Source Approaches in Spatial Data Handling. Vol. 2 of Advances in Geographic Information Science. Springer Berlin Heidelberg, pp. 87-104. http://dx.doi.org/10.1007/978-3-540-74831-15 Open Geospatial Consortium, 2007. OpenGIS Web Processing Service version 1.0.0. No. OGC 05-007r7 in OpenGIS Standard. Open Geospatial Consortium (OGC). http://portal.opengeospatial.org/files/?artifact_id=24151 Hazzard, E., 2011. Openlayers 2.10 beginner's guide. Packt Publishing. ISBN: 1849514127 Obe, R., Hsu, L., 2011. PostGIS in Action. Manning Publications. http://dl.acm.org/citation.cfm?id=2018871 Sutton, T., 2009. Clipping data from postgis. linfiniti.com Open Source Geospatial Solutions. http://linfiniti.com/2009/09/clipping-data-from-postgis/
Free and Open Source Software underpinning the European Forest Data Centre
NASA Astrophysics Data System (ADS)
Rodriguez Aseretto, Dario; Di Leo, Margherita; de Rigo, Daniele; Corti, Paolo; McInerney, Daniel; Camia, Andrea; San-Miguel-Ayanz, Jesús
2013-04-01
Worldwide, governments are growingly focusing [1] on free and open source software (FOSS) as a move toward transparency and the freedom to run, copy, study, change and improve the software [2]. The European Commission (EC) is also supporting the development of FOSS (see e.g., [3]). In addition to the financial savings, FOSS contributes to scientific knowledge freedom in computational science (CS) [4] and is increasingly rewarded in the science-policy interface within the emerging paradigm of open science [5-8]. Since complex computational science applications may be affected by software uncertainty [4,9-11], FOSS may help to mitigate part of the impact of software errors by CS community-driven open review, correction and evolution of scientific code [10,12-15]. The continental scale of EC science-based policy support implies wide networks of scientific collaboration. Thematic information systems also may benefit from this approach within reproducible [16] integrated modelling [4]. This is supported by the EC strategy on FOSS: "for the development of new information systems, where deployment is foreseen by parties outside of the EC infrastructure, [F]OSS will be the preferred choice and in any case used whenever possible" [17]. The aim of this contribution is to highlight how a continental scale information system may exploit and integrate FOSS technologies within the transdisciplinary research underpinning such a complex system. A European example is discussed where FOSS innervates both the structure of the information system itself and the inherent transdisciplinary research for modelling the data and information which constitute the system content. The information system. The European Forest Data Centre (EFDAC, http://forest.jrc.ec.europa.eu/efdac/) has been established at the EC Joint Research Centre (JRC) as the focal point for forest data and information in Europe to supply European decision-makers with processed, quality checked and timely policy relevant forest data and information (see also [18]). A set of web-based tools allow accessing the information located in EFDAC. The following applications - running on GNU/Linux platforms - are the core elements of EFDAC: In (a.1) a metadata client allows users to search for EFDAC related spatial datasets while (a.2) is a customized web map service that allows the user to visualize, navigate and query available maps and derived geo-datasets on several forest-related topics. The database system currently relies on ORACLE and PostgreSQL [24] with PostGIS [25]. EFFIS (a.3) [26-33] is a comprehensive system covering the full cycle of forest-fire management. The system supports forest-fire prevention and fighting in Europe, North Africa and Middle East countries through the provision of timely and reliable information on forest-fires [29,30,32]. Within EFFIS, UMN Mapserver [34] is used for the management and publication of the fire behavior forecast and the other fire-related layers in a wide range of formats including INSPIRE and Open Geospatial Consortium (OGC) standards such as: Transdisciplinary modelling research. The EFDAC portal [39] provides data and information which rely on coordinated research [40-50] on wide-scale transdisciplinary modelling for environment (WSTMe) [51]. This contributed to advanced computational modelling approaches such as morphological spatial pattern analysis (MSPA) [52-54] and geospatial semantic array programming (GeoSemAP) [51,55]. FOSS is here essential. For example, GeoSemAP is based on a semantically-enhanced [56,57] joint use of geospatial and array programming [58] tools (c) where semantic transparency also implies FOSS use. References Hahn, R. W., Bessen, J., Evans, D. S., Lessig, L., Smith, B. L., 2009. Government Policy Toward Open Source Software. Hahn, R. W. (Ed.). ISBN: 0-8157-3393-3 http://dx.doi.org/10.2139/ssrn Free Software Foundation, 2012. What is free software? http://www.gnu.org/philosophy/free-sw.html (revision 1.118 archived at http://www.webcitation.org/6DXqCFAN3 ) Kroes, N., 2010. How to get more interoperability in Europe. In: Open Forum Europe 2010 Summit - Openness at the heart of the EU Digital Agenda. No. SPEECH/10/300. European Commission press release. http://europa.eu/rapid/press-release_SPEECH-10-300_en.pdf de Rigo, D., 2013. Behind the horizon of reproducible integrated environmental modelling at European scale: ethics and practice of scientific knowledge freedom. F1000 Research. To appear as discussion paper Stallman, R. M., 2005. Free community science and the free development of science. PLoS Med 2 (2), e47+. http://dx.doi.org/10.1371/journal.pmed.0020047 Cai, Y., Judd, K. L., Lontzek, T. S., 2012. Open science is necessary. Nature Climate Change 2 (5), 299. http://dx.doi.org/10.1038/nclimate1509 Morin, A., Urban, J., Adams, P. D., Foster, I., Sali, A., Baker, D., Sliz, P., 2012. Shining light into black boxes. Science 336 (6078), 159-160. http://dx.doi.org/10.1126/science.1218263 Ince, D. C., Hatton, L., Graham-Cumming, J., 2012. The case for open computer programs. Nature 482 (7386), 485-488. http://dx.doi.org/10.1038/nature10836 Lehman, M. M., Ramil, J. F., 2002. Software uncertainty. In: Bustard, D., Liu, W., Sterritt, R. (Eds.), Soft-Ware 2002: Computing in an Imperfect World. Vol. 2311 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, Berlin, Heidelberg, Ch. 14, pp. 477-514. http://dx.doi.org/10.1007/3-540-46019-5_14 Hatton, L., 2012. Defects, scientific computation and the scientific method uncertainty quantification in scientific computing. Vol. 377 of IFIP Advances in Information and Communication Technology. Springer Boston, Berlin, Heidelberg, Ch. 8, pp. 123-138. http://dx.doi.org/10.1007/978-3-642-32677-68 de Rigo, D., 2013. Software Uncertainty in Integrated Environmental Modelling: the role of Semantics and Open Science. Geophysical Research Abstracts 15, EGU General Assembly 2013 Hatton, L., 2007. The chimera of software quality. Computer 40 (8), 104-103. http://dx.doi.org/10.1109/MC.2007.292 Cai, Y., Judd, K. L., Lontzek, T. S., May 2012. Open science is necessary. Nature Climate Change 2 (5), 299. http://dx.doi.org/10.1038/nclimate1509 Sonnenburg, S., Braun, M. L., Ong, C. S., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Müller, K. R., Pereira, F., Rasmussen, C. E., Rätsch, G., Schölkopf, B., Smola, A., Vincent, P., Weston, J., Williamson, R., Dec. 2007. The need for open source software in machine learning. J. Mach. Learn. Res. 8, 2443-2466. http://jmlr.csail.mit.edu/papers/v8/sonnenburg07a.html de Vos, M. G., Janssen, S. J. C., van Bussel, L. G. J., Kromdijk, J., van Vliet, J., Top, J. L., Dec. 2011. Are environmental models transparent and reproducible enough? In: Chan, F., Marinova, D., Anderssen, R. S. (Eds.), MODSIM2011, 19th International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand, pp. 2954-2961. http://www.mssanz.org.au/modsim2011/G7/devos.pdf Peng, R. D., 2011. Reproducible research in computational science. Science 334 (6060), 1226-1227. http://dx.doi.org/10.1126/science.1213847 European Commission, 2011. Strategy for internal use of OSS at the EC. European Commission, Directorate-General for Informatics (DIGIT). http://ec.europa.eu/dgs/informatics/oss_tech/index_en.htm (archived at: http://www.webcitation.org/6DXuBeTAU ) European Commission, 2006. European Union forest action plan. No. COM(2006) 302 final. Communication from the Commission. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2006:0302:FIN:EN:HTML Ticheler, J., Hielkema, J. U., 2007. GeoNetwork opensource. OSGeo Journal 2, 1-5. http://journal.osgeo.org/index.php/journal/article/viewFile/86/69 European Parliament, 2007. Directive 2007/2/EC of the European Parliament and of the Council of 14 march 2007 establishing an infrastructure for spatial information in the european community (INSPIRE). Official Journal of the European Union 50 (L 108), 1-14. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2007:108:0001:0014:EN:PDF European Commission, 2008. Commission regulation (EC) no 1205/2008 of 3 december 2008 implementing directive 2007/2/EC of the european parliament and of the council as regards metadata. Official Journal of the European Union 51 (L 326), 12-30. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2008:326:0012:0030:EN:PDF Hazzard, E., 2011. Openlayers 2.10 beginner's guide. Packt Publishing. ISBN: 9781849514125 Holovaty, A., Kaplan-Moss, J., 2009. The definitive guide to Django: Web development done right. Apress (distributed by Springer-Verlag). ISBN: 9781590597255. http://dl.acm.org/citation.cfm?id=1572516 Worsley, J. C., Drake, J. D., 2002. Practical PostgreSQL: a hardened, robust, open source database. O'Reilly. ISBN: 1565928466. http://dl.acm.org/citation.cfm?id=580258 Obe, R., Hsu, L., 2011. PostGIS in Action. Manning Publications. ISBN:1935182269. http://dl.acm.org/citation.cfm?id=2018871 San-Miguel-Ayanz, J., 2010. Wildfires in Europe: The analysis of past and future trends within the European Forest Fire Information System. In: EGU General Assembly Conference Abstracts. Vol. 12 of EGU General Assembly Conference Abstracts. pp. 15401+. http://meetingorganizer.copernicus.org/EGU2010/EGU2010-15401.pdf Camia, A., Durrant Houston, T., San-Miguel-Ayanz, J., 2010. The European fire database: development, structure and implementation. In: 6th International Conference on Forest Fire Research. No. A20. Coimbra, Portugal Whitmore, C., Camia, A., San-Miguel-Ayanz, J., 2010. Enhancing the European Forest Fire Information System (EFFIS) with open source software. In: FOSS4G 2010. Barcelona, Spain. http://2010.foss4g.org/presentations_show.php?id=3693 San-Miguel-Ayanz, J., Schulte, E., Schmuck, G., Camia, A., 2012. The European Forest Fire Information System in the context of environmental policies of the European Union. Forest Policy and Economics. http://dx.doi.org/10.1016/j.forpol.2011.08.012 San-Miguel-Ayanz, J., Schulte, E., Schmuck, G., Camia, A., Strobl, P., Liberta, G., Giovando, C., Boca, R., Sedano, F., Kempeneers, P., McInerney, D., Withmore, C., de Oliveira, S. S., Rodrigues, M., Durrant, T., Corti, P., Oehler, F., Vilar, L., Amatulli, G., 2012. Comprehensive monitoring of wildfires in Europe: The European Forest Fire Information System (EFFIS). In: Tiefenbacher, J. (Ed.), Approaches to Managing Disaster - Assessing Hazards, Emergencies and Disaster Impacts. InTech, Ch. 5. http://dx.doi.org/10.5772/28441 Giovando, C., Whitmore, C., Camia, A., San-Miguel-Ayanz, J., 2010. Enhancing the European forest fire information system (EFFIS) with open source software. In: FOSS4G 2010. http://2010.foss4g.org/presentations_show.php?id=3693 Corti, P., San-Miguel-Ayanz, J., Camia, A., McInerney, D., Boca, R., Di Leo, M., 2012. Fire news management in the context of the European Forest Fire Information System (EFFIS). In: proceedings of "Quinta conferenza italiana sul software geografico e sui dati geografici liberi" (GFOSS DAY 2012). http://dx.doi.org/10.6084/m9.figshare.101918 Amatulli, G., Camia, A., 2007. Exploring the relationships of fire occurrence variables by jeans of CART and MARS models. In: Proceedings of the 4th International Wildland Fire Conference, Sevilla, Spain, 13-18 May 2007. http://www.fire.uni-freiburg.de/sevilla-2007/contributions/doc/cd/SESIONES_TEMATICAS/ST4/Amatulli_Camia_ITALY.pdf MapServer, http://mapserver.org/ Open Geospatial Consortium, 2006. OpenGIS Web Map Service version 1.3.0. No. OGC 06-042 in OpenGIS Standard. Open Geospatial Consortium (OGC). http://portal.opengeospatial.org/files/?artifact_id=14416 http://www.opengeospatial.org/standards/wms
A short history of the Australian Society of Soil Science
NASA Astrophysics Data System (ADS)
Bennison, Linda
2013-04-01
In 1955 a resolution, "that the Australian Society of Soil Science be inaugurated as from this meeting" was recorded in Melbourne Australia. The following year in Queensland, the first official meeting of the Society took place with a Federal Executive and Presidents from the Australian Capital Territory, New South Wales, Queensland, South Australian and Victorian branches forming the Federal Council. In later years the executive expanded with the addition of the Western Australia branch in 1957, the Riverina Branch in 1962 and most recently the Tasmania Branch in 2008. The objects of the Society were 1) the advancement of soil science and studies therein with particular reference to Australia and 2) to provide a link between soil scientists and kindred bodies within Australia and between them and other similar organisations in other countries. Membership was restricted to persons engaged in the scientific study of the soil and has grown steadily from to 147 members in 1957 to 875 members in 2012. The first issue of the Society newsletter, Soils News, was published in January 1957 and continued to be published twice yearly until 1996. A name change to Profile and an increase to quarterly publication occurred in 1997; circulation remained restricted to members. The Publications Committee in 1968 determined the Publication Series would be the medium for occasional technical papers, reviews and reports but not research papers and in 1962 the Australian Journal of Soil Research was established by CSIRO in response to continued representations from the Society. By 1960 a draft constitution was circulated to, and adopted by members. The first honorary life membership of the Society was awarded to Dr. J A Prescott. Honorary memberships are still awarded for service to the Society and to soil science and are capped at 25. In 1964 the ISSS awarded honorary membership to Dr. Prescott. Now known as IUSS Honorary members other Australians recognised have been EG Hallsworth (1990,) J Quirk (1998) and RE White (2012). In 1989 a motion was narrowly defeated to introduce a Fellows category of membership to the Society. In 2012 members attending the annual general meeting of the Society discussed the introduction of a Fellow category as an Award and members voted to continue discussion on this initiative. Federal Council initiated a Student Award in 1969 and over ensuing years a range of awards were initiated; JA Prescott Medal of Soil Science (1972), Australian Society of Soil Science Inc Publication Medal (1979), JK Taylor OBE Gold Medal in Soil Science (1984), CG Stephens PhD Award in Soil Science (2003), LJH Teakle Award (2010) along with the Society's conference presentation awards. Branches were busy during this time and hosted many activities including seminars, field trips and conferences for both members and those interested in soil science. By the early seventies several branches had conducted refresher courses and in 1974 the Society became incorporated. The Society hosted its first world congress, the 9th International Society of Soil Science Congress, in Adelaide in 1968 with 310 papers printed, 239 papers presented and 720 delegates. In contrast, 42 years later the 19th World Congress of Soil Science returned to Australia, where in Brisbane 1914 delegates from 68 countries were treated to 343 presentations, 1227 research posters, 8 keynote and 65 invited lead speakers. A commemorative stamp was produced for the first Congress and another stamp was created in 2007 to celebrate the 50th anniversary of the Society. Originally Society conferences were held every four years however this was reduced to, and still remains, at two year intervals. An inaugural joint conference of the New Zealand Society of Soil Science and the Australian Society of Soil Science Inc. was held in Rotorua in November 1986. This paved the way for the series of joint national conferences between the two societies, the first one of which was held in Melbourne in 1996, and which have been held subsequently every four years. A society logo was introduced for the national soil conference in 1984 and a competition was subsequently held to design a logo for the Society. The winning design was launched in 1986, replaced in 2006 and the rebranding of the Society continued into 2011 when the business name Soil Science Australia was adopted by the Society as the 'public name' of the organisation. Over the years the Society was approached to support a range of organisations. It was a founding member of the Australian GeoScience Council in 1982. In general the Society has maintained its focus on soil and limited its associations to kindred organisations. Technology has driven many of the recent changes in the Society. In 1996 the first web site was developed, housed on the University of Melbourne domain. The Society newsletter ceased to be printed on paper in 2002 and delivery to members was via email. Subscription notices are no longer issued and online collection of subscriptions due is via the internet. The administration of the Society was moved to a centralized office run by the Australian Institute of Agricultural Science in 1996 and whilst the Federal Council Executive continues to rotate across the branches of Australia the administration found a permanent home for the first time. In 1998 the first Executive Officer was appointed, whose role includes the administration of the Society. In 2010 the Governor of Queensland, Her Excellency Ms Penelope Wensley AC Governor of Queensland accepted the invitation to become the first Patron of the Society. A significant decision taken in 1996 to introduce the Certified Professional Soil Scientist (CPSS) accreditation program has seen the program burgeon primarily due to the increasing demand by Government authorities for certified professionals in soil and land management. Accreditation is only available to members with requirements for accreditation listed in the Standards for Professionals in Soil Science. Finally, in recognition of the declining number of soil science graduates the decision was made in 2005 to allow anyone interested in soil science to apply for membership of the Society. This has been a key contributor to the continued growth of the Society along with efforts by the Society to engage the general public via initiatives such as the Australian soils calendar and Soil Science in Australia magazine.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiatio Environment; 2) Effects of Space Environments on Systems; 3) Space Radiatio Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developmemts: Solar Proton Model; 18) Comparison of ESP, JPL91, KIng/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of model of near infrared spectra qualitative analysis were studied. Method of separate modeling can significantly improve the stability and adaptability of model; but its ability of improving adaptability of model is limited. Method of joint modeling can not only improve the adaptability of the model, but also the stability of model, at the same time, compared to separate modeling, the method can shorten the modeling time, reduce the modeling workload; extend the term of validity of model, and improve the modeling efficiency. The experiment of model adaptability shows that, the correct recognition rate of separate modeling method is relatively low, which can not meet the requirements of application, and joint modeling method can reach the correct recognition rate of 90%, and significantly enhances the recognition effect. The experiment of model stability shows that, the identification results of model by joint modeling are better than the model by separate modeling, and has good application value.
1992-12-01
suspect :mat, -n2 extent predict:.on cas jas ccsiziveiv crrei:=e amonc e v:arious models, :he fandom *.;aik, learn ha r ur e, i;<ea- variable and Bemis...Functions, Production Rate Adjustment Model, Learning Curve Model. Random Walk Model. Bemis Model. Evaluating Model Bias, Cost Prediction Bias. Cost...of four cost progress models--a random walk model, the tradiuonai learning curve model, a production rate model Ifixed-variable model). and a model
Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.
NASA Technical Reports Server (NTRS)
Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.
1995-01-01
This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.
ERIC Educational Resources Information Center
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects
ERIC Educational Resources Information Center
Easley, J. A., Jr.
1977-01-01
The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper mainly studies the multi-person parallel modeling method based on the integrated model persistence storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. And multi-person parallel modeling refers to the need for multi-person collaboration, the role of separation, and even real-time remote synchronization modeling.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particularly understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
ERIC Educational Resources Information Center
Thelen, Mark H.; And Others
1977-01-01
Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide as state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These different models vary in complexity with software structure. Also, we will describe how these complexities were overcome through using this approach and results of model benchmarks within LIS.
Literature review of models on tire-pavement interaction noise
NASA Astrophysics Data System (ADS)
Li, Tan; Burdisso, Ricardo; Sandu, Corina
2018-04-01
Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniquesmore » affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.« less
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper Illustrating a Model- Game -Model Paradigm for Using Human Wargames in Analysis Paul K. Davis RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model- game -model or what might be better called model-exercise-model in some cases) for...to involve stakehold- ers in model development from the outset. The model- game -model paradigm was illustrated in an application to crisis planning
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the top most levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of database solution. For this reason, value oriented business process modeling which utilizes ER diagram to express the modeling entities and relationships between them are used. However, ER diagrams form the logical level of database schema. To extend possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER-diagrams, and derives conceptual model utilizing ORM modeling approach. Conceptual model extends possibilities for value modeling to other business modeling approaches.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Drager, Andreas; ...
2015-10-17
In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scalemore » metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.« less
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions the language includes also formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either the historical observations, e.g. the persistence model, or the instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply the least-square regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model to propose a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the accuracy of the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this linkmore » makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is a an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.« less
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict themore » reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.« less
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Airborne Wireless Communication Modeling and Analysis with MATLAB
2014-03-27
research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to...predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...7 2.7. Propagation Modeling : Statistical Models ............................................................8 2.8. Antenna Modeling
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.
Jenness, Samuel M; Goodreau, Steven M; Morris, Martina
2018-04-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel , designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel , designed to facilitate the exploration of novel research questions for advanced modelers.
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks
Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina
2018-01-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, the Sparlart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon models are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-w/k-epsilon and algorithmic improvements to reduce CPU time of the two-equation model.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
NASA Astrophysics Data System (ADS)
Justi, Rosária S.; Gilbert, John K.
2002-04-01
In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.
Aspinall, Richard
2004-08-01
This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
2006-03-01
models, the thesis applies a biological model, the Lotka - Volterra predator- prey model, to a highly suggestive case study, that of the Irish Republican...Model, Irish Republican Army, Sinn Féin, Lotka - Volterra Predator Prey Model, Recruitment, British Army 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka - Volterra predator-prey model, to a highly suggestive
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared wery well with the experimental data, and performed better than the Thomas model near the walls.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of the dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT and 3-D models were formed after the calculation and reconstruction of software. Then, computerized composite models (RP models) were produced by rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with SPSS17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences(P<0.05) among the 3 models, but the dental arch widths were on the contrary. Measurements on 3-D models were significantly smaller than those on other two models(P<0.05). Compared with 3-D models, RP models had more numbers which were not significantly different from those on plaster models(P>0.05). The regression coefficient among three models were significantly different(P<0.01), ranging from 0.8 to 0.9. But between RP and plaster models was bigger than that between 3-D and plaster models. There is high consistency within 3 models, while some differences were accepted in clinic. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA) a new model construction environment. As the user constructs a model using SIGMA the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a. demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Evolution of computational models in BioModels Database and the Physiome Model Repository.
Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
2018-04-12
A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for a distributed hydrological model in the application. There are some challenges including the model's spatial resolution effect, model performance and accuracy and so on. To cope with the challenge of the model's spatial resolution effect, different model resolution including 1000m*1000m, 600m*600m, 500m*500m, 400m*400m, 200m*200m were used to build the distributed hydrological model—Liuxihe model respectively. The purpose is to find which one is the best resolution for Liuxihe model in Large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data digital elevation model (DEM), soil type and land use type are downloaded from the website freely. The model parameters are optimized by using an improved Particle Swarm Optimization(PSO) algorithm; And parameter optimization could reduce the parameter uncertainty that exists for physically deriving model parameters. The different model resolution (200m*200m—1000m*1000m ) are proposed for modeling the Liujiang River basin flood with the Liuxihe model in this study. The best model's spatial resolution effect for flood simulation and forecasting is 200m*200m.And with the model's spatial resolution reduction, the model performance and accuracy also become worse and worse. When the model resolution is 1000m*1000m, the flood simulation and forecasting result is the worst, also the river channel divided based on this resolution is differs from the actual one. To keep the model with an acceptable performance, minimum model spatial resolution is needed. The suggested threshold model spatial resolution for modeling the Liujiang River basin flood is a 500m*500m grid cell, but the model spatial resolution with a 200m*200m grid cell is recommended in this study to keep the model at a best performance.
Computational Models for Calcium-Mediated Astrocyte Functions.
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Computational Models for Calcium-Mediated Astrocyte Functions
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes. PMID:29670517
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the ground for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of models. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. ?? 2008 Elsevier Ltd. All rights reserved.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.
Energy modeling. Volume 2: Inventory and details of state energy models
NASA Astrophysics Data System (ADS)
Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.
1981-05-01
An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
[A review on research of land surface water and heat fluxes].
Sun, Rui; Liu, Changming
2003-03-01
Many field experiments were done, and soil-vegetation-atmosphere transfer(SVAT) models were stablished to estimate land surface heat fluxes. In this paper, the processes of experimental research on land surface water and heat fluxes are reviewed, and three kinds of SVAT model(single layer model, two layer model and multi-layer model) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and energy balance equation, different models such as simplified model, single layer model, extra resistance model, crop water stress index model and two source resistance model are developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972 is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970more » produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF mode, such models will not always provide a reliable basis for transportation planning and investment decisions.« less
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Understanding and Predicting Urban Propagation Losses
2009-09-01
6. Extended Hata Model ..........................22 7. Modified Hata Model ..........................22 8. Walfisch – Ikegami Model...39 4. COST (Extended) Hata Model ...................40 5. Modified Hata Model ..........................41 6. Walfisch- Ikegami Model...47 1. Scenario One – Walfisch- Ikegami Model ........51 2. Scenario Two – Modified Hata Model ...........52 3. Scenario Three – Urban Hata
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
A framework for sharing and integrating remote sensing and GIS models based on Web service.
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e,more » MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.« less
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Comparison of dark energy models after Planck 2015
NASA Astrophysics Data System (ADS)
Xu, Yue-Yao; Zhang, Xin
2016-11-01
We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevalliear-Polarski-Linder model can still fit the current observations well, but from an economically feasible perspective, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
Weibull regression model is one of the most popular forms of parametric regression model that it provides estimate of baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, Weibull regression model is seldom used in medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with Weibull regression model, this article introduces some basic knowledge on Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinical relevant statistics such as hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variable. The eha package provides an alternative method to model Weibull regression model. The check.dist() function helps to assess goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of Weibull regression model after model development is interesting that it provides another way to report your findings. PMID:28149846
Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.
2013-12-01
Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu hat was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during summer maize growing season in 2013-2015 were monitored with eddy covariance system, and the applicability of two operational models (FAO-PM model and KP-PM model) based on the Penman-Monteith model were analyzed. Firstly, the key parameters in the two models were calibrated with the measured data in 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and KP-PM model was compared to the observed ET, respectively. Finally, the coefficients in the KP-PM model were further revised with the coefficients calculated according to the different growth stages, and the performance of the revised KP-PM model was also evaluated. These statistical parameters indicated that the calculated daily ET for 2015 by the FAO-PM model was closer to the observed ET than that by the KP-PM model. The daily ET calculated from the revised KP-PM model for daily ET was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so the calibration was necessary before using the models to predict the ET. The above results could provide some guidelines on predicting ET with the two models.
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is a movement of air on small scale in the atmosphere that caused by instabilities of pressure and temperature distribution. Turbulence model is integrated into flight mechanical model as an atmospheric disturbance. Common turbulence model used in flight mechanical model are Dryden and Von Karman model. In this minor research, only Dryden continuous turbulence model were made. Dryden continuous turbulence model has been implemented, it refers to the military specification MIL-HDBK-1797. The model was implemented into Matlab Simulink. The model will be integrated with flight mechanical model to observe response of the aircraft when it is flight through turbulence field. The turbulence model is characterized by multiplying the filter which are generated from power spectral density with band-limited Gaussian white noise input. In order to ensure that the model provide a good result, model verification has been done by comparing the implemented model with the similar model that is provided in aerospace blockset. The result shows that there are some difference for 2 linear velocities (vg and wg), and 3 angular rate (pg, qg and rg). The difference is instantly caused by different determination of turbulence scale length which is used in aerospace blockset. With the adjustment of turbulence length in the implemented model, both model result the similar output.
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.
Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R
2016-07-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
NASA Technical Reports Server (NTRS)
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.;
2016-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users.The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theurich, Gerhard; DeLuca, C.; Campbell, T.
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the Area Under the Receiver Operating Characteristic (AUROC), which showed the highest value, belonged to boosted regression trees (0.975) and the lowest value was recorded for generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, nevertheless, variability among the prediction of individual models was considerable. Therefore, to reduce uncertainty, creating more generalizable, more stable, and less sensitive models, ensemble forecasting approaches and in particular the EMmedian is recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model for the aim of reducing computation burden, is the key of corresponding researches. However, previous researches are generally based on a stand-alone surrogate model, and rarely make efforts to improve the approximation accuracy of the surrogate model to the simulation model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build ensemble surrogate (ES) model, and conducted a comparative research to select a better ES modeling pattern for the SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model is assembling RBFANN model, SVR model, and Kriging model using set pair weights according their performance, and the other is assembling several Kriging (the best surrogate modeling method of three) models built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed the residuals of the outputs between the best ES model and simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of simulation-optimization process and maintaining high computation accuracy simultaneously. Copyright © 2017 Elsevier B.V. All rights reserved.
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of its usages and users. In response to a growing need by the user community, NSSDC began to develop web-interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the Modelweb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 to 10,000 accesses for either system; in February 2002 13,872 accesses were recorded to the Modelsweb and 7092 accesses to the models archive.
NASA Astrophysics Data System (ADS)
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a certain arrangement of spatial and temporal dynamics consisting of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio‐economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user‐friendliness, and excellent user‐support. Forty‐one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user‐support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user‐friendly forms, increasing user‐support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Hedenstierna, Sofia; Halldin, Peter
2008-04-15
A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...
2018-04-06
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
Replicating Health Economic Models: Firm Foundations or a House of Cards?
Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee
2017-11-01
Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
NASA Astrophysics Data System (ADS)
Oursland, Mark David
This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use equation to solve the problem. The researcher recommends the following practice for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties about squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling. The uses of computer microworlds such as Interactive Physics in conjunction with cooperative groups are a viable method of modeling instruction.
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Models in Science Education: Applications of Models in Learning and Teaching Science
ERIC Educational Resources Information Center
Ornek, Funda
2008-01-01
In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct
approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons-summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a combination report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian-trapped radiation models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP. LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Analysis of terahertz dielectric properties of pork tissue
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Xie, Qiaoling; Sun, Ping
2017-10-01
Seeing that about 70% component of fresh biological tissues is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are Debye model, Double Debye model and Cole-Cole model. This work aims to determine a suitable model by comparing three models above with experimental data. These models are applied to fresh pork tissue. By means of least square method, the parameters of different models are fitted with the experimental data. Comparing different models on both dielectric function, the Cole-Cole model is verified the best to describe the experiments of pork tissue. The correction factor α of the Cole-Cole model is an important modification for biological tissues. So Cole-Cole model is supposed to be a priority selection to describe the dielectric properties for biological tissues in the terahertz range.
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model into mathematical model. Voice of customer data in QFD was collected using questionnaire and the questionnaire was developed based on Kano’s model. Then the operational research methodology was applied to build the objective function and constraints in the mathematical model. The relationship between voice of customer and engineering characteristics was modelled using linier regression model. Output of the mathematical model would be detail of engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. Result of this model is 62% .The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of shoe cabinet.
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-06-01
The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers
NASA Astrophysics Data System (ADS)
Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.
2017-12-01
Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Harrington
2004-10-25
The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through themore » Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada and Number of Waste Packages Hit by Igneous Intrusion''. This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.« less
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity for estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, in a Bayesian joint probability with error models are presented to investigate the seasonal dependency of prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and account for no seasonal variations. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies some distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posterior (MAP) estimation of a hierarchical error model. Three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the result of the hierarchical error model shows that most of model parameters are not seasonal variant except for error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu
2016-11-18
The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on para-llel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by Leuning model and Medlyn model, while Jarvis model was the last in the performance rating. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m -2 ·s -1 , the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R 2 ) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, Jarvis model performed slightly better than Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m -2 ·s -1 , NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R 2 was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. The path analysis was conducted to identify effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by vapour pressure saturation deficit for both potato and oil sunflower. The model evaluation suggested that the stomatal conductance models for oil sunflower are to be improved in further research.
Evaluation of chiller modeling approaches and their usability for fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models, and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first-principles and the third is empirical in nature. The first-principles models are themore » Gordon and Ng Universal Chiller model (2nd generation), and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors become obstructions to scientists wanting to couple, extend - or simply run - existing models. For scientists whose main focus may not be computer science these barriers become even larger and become significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Modeling Interface (BMI). It contains: Tools necessary for coupling models of disparate time and space scales (including grid mappers) Time-steppers that coordinate the sequencing of coupled models Exchange of data between BMI-enabled models Wrappers that automatically load BMI-enabled models into the PyMT framework Utilities that support open-source interfaces (UGRID, SGRID,CSDMS Standard Names, etc.) A collection of community-submitted models, written in a variety of programminglanguages, from a variety of process domains - but all usable from within the Python programming language A plug-in framework for adding additional BMI-enabled models to the framework In this presentation we intoduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will be first tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1) :161-177.
Downscaling GISS ModelE Boreal Summer Climate over Africa
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew
2015-01-01
The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June- September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, it eliminates the ModelE double ITCZ over the Atlantic and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advance SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to user’s input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side by side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions Model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners. And, for more advanced purposes, users will able to access and employ models of the BioModels Database as well. PMID:24006914
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. Actually, a possible middle ground between detailed and simplified models may be parsimonious models that represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on data available for correct model application. When there is inadequate data, it is mandatory to focus on a simple river water quality model rather than detailed ones. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH(4), and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input or parameters was done based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be over determined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
The number of risk prediction models has been increasingly developed, for estimating about breast cancer in individual women. However, those model performances are questionable. We therefore have conducted a study with the aim to systematically review previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance were extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. Gail and Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observe ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition the concordance statistic itself is insensitive to measure the improvement of discrimination. Therefore, the new method such as net reclassification index should be considered to evaluate the improvement of the performance of a new develop model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the referencemore » biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).« less
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the heavy precipitation processes will be presented.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle
2011-06-01
Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); IBD outpatient office for physician consultations (Model B); general outpatient office for nurse consultations (Model C); both, Model B and Model C (Model D); and IBD Unit (Model E) when the hospital has a Comprehensive Care Unit for IBD with telephone helpline, computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is the Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly- Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of the both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhoods, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models inmore » which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.« less
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which ismore » empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modelling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principal.
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA - type model. We produced one-stepahead forecasts from the fitted models and again provided comparisons of the model's performance.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can signicantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averagingmore » methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that converting precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.« less
Comparison of childbirth care models in public hospitals, Brazil.
Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos
2014-04-01
To compare collaborative and traditional childbirth care models. Cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers neither with the use of spinal or epidural analgesia. The results suggest that collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both watermore » tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.« less
Modeling approaches in avian conservation and the role of field biologists
Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.
2006-01-01
This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
NASA Astrophysics Data System (ADS)
Rossman, Nathan R.; Zlotnik, Vitaly A.
2013-09-01
Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39 %), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55 %), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7 %), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative concentration pathway (RCP). In this study, we analysed model uncertainty for the Regional Hydrological Models, Global Hydrological Models, Malaria models and Crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty are originating from the GCM and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes on low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for Malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use ensemble modeling approach for climate change impact studies throughout the whole modelling chain.
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling for transistor are also discussed. The proposed DC model is based on the Fager model since it compensates between the number of model's parameters and accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages. Also, the objective is to extend the improved DC model to account for soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used in order to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is modelled using multi-bias s-parameter measurements. The way that the complete model is performed is by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and local minimum search (multivariable Newton's method) for parasitic elements extraction. Finally, the results of DC modelling and multi-bias s-parameters modelling are presented, and three-device modelling recommendations are discussed.
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
ERIC Educational Resources Information Center
Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.
2011-01-01
The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, and existing knowledge-based scientific software design tool.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace-Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone.
A decision support model for investment on P2P lending platform
Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace—Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone. PMID:28877234
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem, Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models. variable-convergence models. or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zones (TAZs) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects) are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not consider the observed spatial effects. Across the models with spatial spillover effects, the dual-state models especially zero-inflated negative binomial model offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod
CSR Model Implementation from School Stakeholder Perspectives
ERIC Educational Resources Information Center
Herrmann, Suzannah
2006-01-01
Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10 3 to 10 6 K/s) at finalmore » temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).« less
[Bone remodeling and modeling/mini-modeling.
Hasegawa, Tomoka; Amizuka, Norio
Modeling, adapting structures to loading by changing bone size and shapes, often takes place in bone of the fetal and developmental stages, while bone remodeling-replacement of old bone into new bone-is predominant in the adult stage. Modeling can be divided into macro-modeling(macroscopic modeling)and mini-modeling(microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces will become active form of osteoblasts, and then, deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol(a vitamin D3 analog)and teriparatide(human PTH[1-34])could show mini-modeling based bone formation. Histologically, mature, active form of osteoblasts are localized on the new bone induced by mini-modeling, however, only a few cell layer of preosteoblasts are formed over the newly-formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, histological characteristics of bone remodeling and modeling including mini-modeling will be introduced.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Kharkov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Kharkov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Kharkov modeling: what are the capabilities and limitations of Kharkov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Kharkov modeling techniques? These questions and others will be addressed in this tutorial.
The cerebro-cerebellum: Could it be loci of forward models?
Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji
2016-03-01
It is widely accepted that the cerebellum acquires and maintain internal models for motor control. An internal model simulates mapping between a set of causes and effects. There are two candidates of cerebellar internal models, forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence that suggests the existence in the cerebro-cerebellum of a forward model for limb movement. We will also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Mechanical model development of rolling bearing-rotor systems: A review
NASA Astrophysics Data System (ADS)
Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng
2018-03-01
The rolling bearing rotor (RBR) system is the kernel of many rotating machines, which affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review on RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development of RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model are summarized. Then, the coupled modelling between bearing models and various rotor models including De Laval/Jeffcott rotor, rigid rotor, transfer matrix method (TMM) models and FE models are presented. Finally, the paper discusses the key challenges of previous works and provides new insights into understanding of RBR systems for their advanced future engineering applications.
NASA Astrophysics Data System (ADS)
Gouvea, Julia; Passmore, Cynthia
2017-03-01
The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Modeling of near-wall turbulence
NASA Technical Reports Server (NTRS)
Shih, T. H.; Mansour, N. N.
1990-01-01
An improved k-epsilon model and a second order closure model is presented for low Reynolds number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having correct asymptotic near wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second order closure model, the existing models are modified for the Reynolds stress equations to have proper near wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second order closure model, perform well in predicting the behavior of the near wall turbulence. Significant improvements over previous models are obtained.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes besides verifying internal validity comparison with other models (external validity) and ideally validation of its predictive properties. The existing uncertainty with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than without this additional information.
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating it with linear cues retrieved from single imagery. A cue integration between two datasets is facilitated by creating new topological features connecting between the initial model and image lines, with which new model hypotheses (variances to the initial model) are produced. We adopt Minimum Description Length (MDL) principle for competing the model candidates and selecting the optimal model by considering the balanced trade-off between the model closeness and the model complexity. Our preliminary results, combined with the Vaihingen data provided by ISPRS WGIII/4 demonstrate the image-driven modelling cues can compensate the limitations posed by LiDAR data in rooftop modelling.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
An Immuno-epidemiological Model of Paratuberculosis
NASA Astrophysics Data System (ADS)
Martcheva, M.
2011-11-01
The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the mo'del: a FORTRAN-Callable library, a stand-alone model, and a Web-based model.
The NASA Marshall engineering thermosphere model
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MFSC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MFSC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MFSC/J70 model are discussed.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is evaluated, consisting of: wind speed model, mechanical and electrical model of generator and tower oscillation model. Model of the whole system is linearized around of a nominal point. By using the linear model with uncertainties is synthesized a uncertain model. By using the uncertain model is developed a H∞ controller, which provide mode of stabilizing the rotor frequency and damping the tower oscillations. Finally is simulated work of nonlinear system and H∞ controller.
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) Students create models and use simulations ; and 2) Researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data is crucial to create effective model. Pittsburgh Science of Learning Center: Resources for modeling, authoring, experimentation. Repository of data and theory. Examples of advanced modeling efforts: SimStudent learns rule-based model. Help-seeking model: Tutors metacognition. Scooter uses machine learning detectors of student engagement.
Modeling for Battery Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in last few years as the understanding of underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomist. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness, due to the approximations used in developed model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and thus not suited well for online prognostic applications. In addition both multi-physics and atomist models are computationally expensive hence are even less suited to online application An electrochemistry-based model of Li-ion batteries has been developed, that captures crucial electrochemical processes, captures effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are done that allow to retain computational efficiency for online implementation of the model. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development, providing model validation results and EOD prediction results is being presented.
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with regression forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance ( 50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conforms well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated distributions and non-normal distributions such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types, performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
A toy terrestrial carbon flow model
NASA Technical Reports Server (NTRS)
Parton, William J.; Running, Steven W.; Walker, Brian
1992-01-01
A generalized carbon flow model for the major terrestrial ecosystems of the world is reported. The model is a simplification of the Century model and the Forest-Biogeochemical model. Topics covered include plant production, decomposition and nutrient cycling, biomes, the utility of the carbon flow model for predicting carbon dynamics under global change, and possible applications to state-and-transition models and environmentally driven global vegetation models.
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Reportmore » is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.« less
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S.
2016-01-01
Need to Assess the Skill of Ecosystem Models Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. Northeast US Atlantis Marine Ecosystem Model We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. Skill Assessment Is Both Possible and Advisable We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment). PMID:26731540
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
NASA Astrophysics Data System (ADS)
Duane, G. S.; Selten, F.
2016-12-01
Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to in-situ detect the oil yield of oil shale, based on portable near infrared spectroscopy analytical technology, with 66 rock core samples from No. 2 well drilling of Fuyu oil shale base in Jilin, the modeling and analyzing methods for in-situ detection were researched. By the developed portable spectrometer, 3 data formats (reflectance, absorbance and K-M function) spectra were acquired. With 4 different modeling data optimization methods: principal component-mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection and their combina- tions: PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods: partial least square (PLS) and back propagation artificial neural network (BPANN), and the same data pre-processing, the modeling and analyzing experiment were performed to determine the optimum analysis model and method. The results show that the data format, modeling data optimization method and modeling method all affect the analysis precision of model. Results show that whether or not using the optimization method, reflectance or K-M function is the proper spectrum format of the modeling database for two modeling methods. Using two different modeling methods and four different data optimization methods, the model precisions of the same modeling database are different. For PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of database using K-M function spectrum data format. For BPANN modeling method, UVE, UVE + PCA-MD and PCA- MD + UVE data optimization methods can improve the modeling precision of database using any of the 3 spectrum data formats. In addition to using the reflectance spectra and PCA-MD data optimization method, modeling precision by BPANN method is better than that by PLS method. And modeling with reflectance spectra, UVE optimization method and BPANN modeling method, the model gets the highest analysis precision, its correlation coefficient (Rp) is 0.92, and its standard error of prediction (SEP) is 0.69%.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate for different sources of uncertainty, multimodal analysis such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate theoretical and practical challenges of multimodal analysis, we use an example about microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomena, which is the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by the following three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid Savannah that is characterized by pulsed precipitation. These five model evolve step-wise such that the first model includes none of these three mechanism, while the fifth model includes the three mechanisms. The basic component of Bayesian multimodal analysis is the estimation of marginal likelihood to rank the candidate models based on their overall likelihood with respect to observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between these five candidate models. The second part discusses some theoretical and practical challenges, which are mainly the effect of likelihood function selection and the marginal likelihood estimation methods on both model ranking and Bayesian model averaging. The study shows that making valid inference from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S
2016-01-01
Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment).
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another weight relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding in which the maximum information entropy probability model of prediction error variances plays an important role and it is Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness in the updating of the structural model especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seem in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results especially when the number of measurement is small.
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
National Centers for Environmental Prediction
Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post / VISION | About EMC EMC > Mesoscale Modeling > MODELS Home Mission Models R & D Collaborators Cyclone Tracks & Verification Implementation Info FAQ Disclaimer More Info MESOSCALE MODELING SREF
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
ERIC Educational Resources Information Center
King, Gillian; Currie, Melissa; Smith, Linda; Servais, Michelle; McDougall, Janette
2008-01-01
A framework of operating models for interdisciplinary research programs in clinical service organizations is presented, consisting of a "clinician-researcher" skill development model, a program evaluation model, a researcher-led knowledge generation model, and a knowledge conduit model. Together, these models comprise a tailored, collaborative…
Modelling Students' Visualisation of Chemical Reaction
ERIC Educational Resources Information Center
Cheng, Maurice M. W.; Gilbert, John K.
2017-01-01
This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Planning Major Curricular Change.
ERIC Educational Resources Information Center
Kirkland, Travis P.
Decision-making and change models can take many forms. One researcher (Nordvall, 1982) has suggested five conceptual models for introducing change: a political model; a rational decision-making model; a social interaction decision model; the problem-solving method; and an adaptive/linkage model which is an amalgam of each of the other models.…
UNITED STATES METEOROLOGICAL DATA - DAILY AND HOURLY FILES TO SUPPORT PREDICTIVE EXPOSURE MODELING
ORD numerical models for pesticide exposure include a model of spray drift (AgDisp), a cropland pesticide persistence model (PRZM), a surface water exposure model (EXAMS), and a model of fish bioaccumulation (BASS). A unified climatological database for these models has been asse...
2009-12-01
Business Process Modeling BPMN Business Process Modeling Notation SoA Service-oriented Architecture UML Unified Modeling Language CSP...system developers. Supporting technologies include Business Process Modeling Notation ( BPMN ), Unified Modeling Language (UML), model-driven architecture
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxelsmore » on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.« less
NASA Astrophysics Data System (ADS)
Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.
2014-12-01
The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and characterization of forcing to enable feedback studies An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) An assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models 2) Characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models 3) Characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model 4) Assessment of climate model response to precisely-characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircrafts in broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so the ASE models obtained through Kriging interpolation match the model in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues causes by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) through the use of scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data of the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and beginning of flowering for two tree species-silver birch Betula pendula and bird cherry Padus racemosa-in Latvia. Model stability is estimated performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using sigmoidal development rate and temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of the flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
A toolbox and record for scientific models
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.
Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A
2017-08-01
Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests to reconsider the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used modified area under the curve (AUC) statistic and akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
Documenting Models for Interoperability and Reusability (proceedings)
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Documenting Models for Interoperability and Reusability
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexitymore » (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.« less
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is for emergency response which requires route planning inside and outside of a building. Route planning requires detailed information related to indoor and outdoor environment. Indoor navigation network models including Geometric Network Model (GNM), Navigable Space Model, sub-division model and regular-grid model lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of indoor navigation network is based on surveying control points and it is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for the buildings which do not have proper 2D/3D geometrical models or they lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantical space.
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of memristor as the newest fundamental circuit elements, studies on memristor modeling have been evolved. To date, the developed models were based on the linear model, linear ionic drift model using different window functions, tunnelling barrier model and hyperbolic-sine function based model. Although using hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of window function cannot provide an improved fitting. By multiplying the Yakopcic’s state variable model to Chang’s model on the other hand resulted in the closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
Resident Role Modeling: "It Just Happens".
Sternszus, Robert; Macdonald, Mary Ellen; Steinert, Yvonne
2016-03-01
Role modeling by staff physicians is a significant component of the clinical teaching of students and residents. However, the importance of resident role modeling has only recently emerged, and residents' understanding of themselves as role models has yet to be explored. This study sought to understand residents' perceptions of themselves as role models, describe how residents learn about role modeling, and identify ways to improve resident role modeling. Fourteen semistructured interviews were conducted with residents in internal medicine, general surgery, and pediatrics at the McGill University Faculty of Medicine between April and September 2013. Interviews were audio-recorded and subsequently transcribed for analysis; iterative analysis followed principles of qualitative description. Four primary themes were identified through data analysis: residents perceived role modeling as the demonstration of "good" behaviors in the clinical context; residents believed that learning from their role modeling "just happens" as long as learners are "watching"; residents did not equate role modeling with being a role model; and residents learned about role modeling from watching their positive and negative role models. While residents were aware that students and junior colleagues learned from their modeling, they were often not aware of role modeling as it was occurring; they also believed that learning from role modeling "just happens" and did not always see themselves as role models. Helping residents view effective role modeling as a deliberate process rather than something that "just happens" may improve clinical teaching across the continuum of medical education.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with linear AR model. Unlike previous studies that typically consider the threshold model specifications by using internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables specification produce more accurate forecasts, indicating that specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models were considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar
2016-02-15
Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
Experiments in concept modeling for radiographic image reports.
Bell, D S; Pattison-Gordon, E; Greenes, R A
1994-01-01
OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agents-based and field-based modelling environments use different data models to represent the state of information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models in which class instances grouping properties are stored in collections are less efficient (execute slower) than models in which collections of properties are grouped. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier for writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves less model parameters, than the conventional characterized source model, which itself is a simplified expression of actual earthquake source. In the proposed model, the spacio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and it is assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. Finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent is included in the model, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an example of characterized source models. Although the pseudo point-source model involves much less model parameters than the super-asperity model, the errors associated with the former model were comparable to those for the latter model for velocity waveforms and envelopes. Furthermore, the errors associated with the former model were much smaller than those for the latter model for Fourier spectra. These evidences indicate the usefulness of the pseudo point-source model. Comparison of the observed (black) and synthetic (red) Fourier spectra. The spectra are the composition of two horizontal components and smoothed with a Parzen window with a band width of 0.05 Hz.
Model and Interoperability using Meta Data Annotations
NASA Astrophysics Data System (ADS)
David, O.
2011-12-01
Software frameworks and architectures are in need for meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format, consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings, however the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach for meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programing language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interfaces (API) calls is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To support the benefit of annotations for a modeler, studies were conducted to evaluate the effectiveness of an annotation based framework approach with other modeling frameworks and libraries, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor effect in modeling the large density BRDF measured data with five-parameter semi-empirical model, a refined statistical model of BRDF which is suitable for multi-class space target material modeling were proposed. The refined model improved the Torrance-Sparrow model while having the modeling advantages of five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, can approximate the intensity of the Fresnel reflectance phenomenon and the attenuation of the reflected light's brightness with the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. The genetic algorithm was used to invert the parameters of 11 different samples in the space target commonly used materials, and the fitting errors of all materials were below 6%, which were much lower than those of five-parameter model. The effect of the refined model is verified by comparing the fitting results of the three samples at different incident zenith angles in 0° azimuth angle. Finally, the three-dimensional modeling visualizations of these samples in the upper hemisphere space was given, in which the strength of the optical scattering of different materials could be clearly shown. It proved the good describing ability of the refined model at the material characterization as well.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to mispecification of that model when there is substantial missingness in the outcome variable.
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop MME, exploring criteria for evaluating the degree of relatedness of models in a MME, studying the effect of number of models in the ensemble, development of a statistical model of model sampling, creation of a repository for MME results, studies of possible differential weighting of models in an ensemble, creation of single model ensembles based on sampling from the uncertainty distribution of parameter values or inputs specifically oriented toward uncertainty estimation, the creation of super ensembles that sample more than one source of uncertainty, the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty and finally further investigation of the use of the multi-model mean or median as a predictor.
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
Alcan, Toros; Ceylanoğlu, Cenk; Baysal, Bekir
2009-01-01
To investigate the effects of different storage periods of alginate impressions on digital model accuracy. A total of 105 impressions were taken from a master model with three different brands of alginates and were poured into stone models in five different storage periods. In all, 21 stone models were poured and immediately were scanned, and 21 digital models were prepared. The remaining 84 impressions were poured after 1, 2, 3, and 4 days, respectively. Five linear measurements were made by three researchers on the master model, the stone models, and the digital models. Time-dependent deformation of alginate impressions at different storage periods and the accuracy of traditional stone models and digital models were evaluated separately. Both the stone models and the digital models were highly correlated with the master model. Significant deformities in the alginate impressions were noted at different storage periods of 1 to 4 days. Alginate impressions of different brands also showed significant differences between each other on the first, third, and fourth days. Digital orthodontic models are as reliable as traditional stone models and probably will become the standard for orthodontic clinical use. Storing alginate impressions in sealed plastic bags for up to 4 days caused statistically significant deformation of alginate impressions, but the magnitude of these deformations did not appear to be clinically relevant and had no adverse effect on digital modeling.
Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu
2011-04-01
A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established by inducing UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Smica-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2) and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). Validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, average relative error of model 2 (4.25%) was lower than model 1 (5.00%), which indicated that the predictability of model 2 was better than that of model 1. Chemometrics-assisted spectrophotometry method did not need chemical reagents and digestion which were required in the conventional methods, and the testing time of the new method was significantly shorter than the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Lauder model for free shear layers. The second version of the model is called Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linear polarized radiation; and in similarity with e.g. the facet model depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy and also on metallic bead blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3
Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram
2017-01-01
Summary Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with element of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can chose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so chose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on an personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personnal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) Simulation and (b) lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the level. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
Comment on ``Glassy Potts model: A disordered Potts model without a ferromagnetic phase''
NASA Astrophysics Data System (ADS)
Carlucci, Domenico M.
1999-10-01
We report the equivalence of the ``glassy Potts model,'' recently introduced by Marinari et al. and the ``chiral Potts model'' investigated by Nishimori and Stephen. Both models do not exhibit any spontaneous magnetization at low temperature, differently from the ordinary glass Potts model. The phase transition of the glassy Potts model is easily interpreted as the spin-glass transition of the ordinary random Potts model.
NASA Astrophysics Data System (ADS)
Cannizzo, John K.
2017-01-01
We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.
A novel microfluidic model can mimic organ-specific metastasis of circulating tumor cells.
Kong, Jing; Luo, Yong; Jin, Dong; An, Fan; Zhang, Wenyuan; Liu, Lilu; Li, Jiao; Fang, Shimeng; Li, Xiaojie; Yang, Xuesong; Lin, Bingcheng; Liu, Tingjiao
2016-11-29
A biomimetic microsystem might compensate costly and time-consuming animal metastatic models. Herein we developed a biomimetic microfluidic model to study cancer metastasis. Primary cells isolated from different organs were cultured on the microlfuidic model to represent individual organs. Breast and salivary gland cancer cells were driven to flow over primary cell culture chambers, mimicking dynamic adhesion of circulating tumor cells (CTCs) to endothelium in vivo. These flowing artificial CTCs showed different metastatic potentials to lung on the microfluidic model. The traditional nude mouse model of lung metastasis was performed to investigate the physiological similarity of the microfluidic model to animal models. It was found that the metastatic potential of different cancer cells assessed by the microfluidic model was in agreement with that assessed by the nude mouse model. Furthermore, it was demonstrated that the metastatic inhibitor AMD3100 inhibited lung metastasis effectively in both the microfluidic model and the nude mouse model. Then the microfluidic model was used to mimick liver and bone metastasis of CTCs and confirm the potential for research of multiple-organ metastasis. Thus, the metastasis of CTCs to different organs was reconstituted on the microfluidic model. It may expand the capabilities of traditional cell culture models, providing a low-cost, time-saving, and rapid alternative to animal models.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by the complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, i.e., Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., SHIP (high) model, SHIP (middle) model, SHIP (low) model, Philip model and Parlange model) were compared based on numerical experiments and soil column experiments. In numerical experiments, SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash and Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, infiltration rate fluctuated in a range because of non-uniform initial water content. SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. According to the robustness of solutions and the coverage of fluctuation range of infiltration rate, SHIP model can be integrated into hydrologic models to simulate SHR infiltration process and benefit the flood forecast.
Mutant mice: experimental organisms as materialised models in biomedicine.
Huber, Lara; Keuck, Lara K
2013-09-01
Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been neglected pretty much to date. However, whole plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on particle settling velocity distribution is proposed which is then applied for the development of an improved primary settler model as well as for its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), showing to be better than the first one and statistically comparable to the second one, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet induced by primary settling is investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole plant modelling.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions are changed continuously and these phenomenons generate errors on flood forecasting models and will lead to get unrealistic results. Therefore, to overcome these difficulties, a concept called model updating is proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of flood forecasting process, errors propagated from the rainfall-runoff model are enumerated as the main source of uncertainty in the forecasting model. Hence, to dominate the exciting errors, several methods have been proposed by researchers to update the rainfall-runoff models such as parameter updating, model state updating, and correction on input data. The current study focuses on investigations about the ability of rainfall-runoff model parameters to cope with three types of existing errors, timing, shape and volume as the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate its parameters for its use in model updating to cope with the stated errors. Investigation about ten events proves that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered if only in hypothetical sense in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as predictive, as opposed to evaluative, modeling approach.
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees responses but as all interviewees were represented in the analysis no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
ASTP ranging system mathematical model
NASA Technical Reports Server (NTRS)
Ellis, M. R.; Robinson, L. H.
1973-01-01
A mathematical model is presented of the VHF ranging system to analyze the performance of the Apollo-Soyuz test project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early late gate model development, unlock probability development, system error model development, probability of acquisition and model development, and math model validation testing.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-01-01
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara dessert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-02-08
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Keating; W.Statham
2004-02-12
The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the groundmore » surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.« less
Model-Based Reasoning in Upper-division Lab Courses
NASA Astrophysics Data System (ADS)
Lewandowski, Heather
2015-05-01
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such the modeling of the measurement tools, modeling ``black boxes,'' and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
2013-01-01
Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models incorporate now sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemics models rely on parameters from previous modelling studies, models are rarely validated using observed data and are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison. PMID:23651557
Model Selection in Systems Biology Depends on Experimental Design
Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.
2014-01-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
Model selection in systems biology depends on experimental design.
Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H
2014-06-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian with error model method gives significantly more accurate prediction along with reasonable credible intervals.
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country which is rich in natural resources and one of it is a gold. Gold has already become an important national commodity. This study is conducted to determine a model that can be well fitted with the gold production in Malaysia from the year 1995-2010. Five nonlinear models are presented in this study which are Logistic model, Gompertz, Richard, Weibull and Chapman-Richard model. These model are used to fit the cumulative gold production in Malaysia. The best model is then selected based on the model performance. The performance of the fitted model is measured by sum squares error, root mean squares error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study has found that a Weibull model is shown to have significantly outperform compare to the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, Weibull model gives the lowest readings at all types of measurement error. We can concluded that the future gold production in Malaysia can be predicted according to the Weibull model and this could be important findings for Malaysia to plan their economic activities.
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome.
O'Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances - in terms of model complexity, model evaluation, and model structure - can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from 'yet another model' to doing better science with models.
A Two-Zone Multigrid Model for SI Engine Combustion Simulation Using Detailed Chemistry
Ge, Hai-Wen; Juneja, Harmit; Shi, Yu; ...
2010-01-01
An efficient multigrid (MG) model was implemented for spark-ignited (SI) engine combustion modeling using detailed chemistry. The model is designed to be coupled with a level-set-G-equation model for flame propagation (GAMUT combustion model) for highly efficient engine simulation. The model was explored for a gasoline direct-injection SI engine with knocking combustion. The numerical results using the MG model were compared with the results of the original GAMUT combustion model. A simpler one-zone MG model was found to be unable to reproduce the results of the original GAMUT model. However, a two-zone MG model, which treats the burned and unburned regionsmore » separately, was found to provide much better accuracy and efficiency than the one-zone MG model. Without loss in accuracy, an order of magnitude speedup was achieved in terms of CPU and wall times. To reproduce the results of the original GAMUT combustion model, either a low searching level or a procedure to exclude high-temperature computational cells from the grouping should be applied to the unburned region, which was found to be more sensitive to the combustion model details.« less
Statistical considerations on prognostic models for glioma
Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.
2016-01-01
Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Nguyen, Daniel; Dao, Tung; Trinh, Khanh
2013-01-01
This paper presents a coupled vortex-lattice flight dynamic model with an aeroelastic finite-element model to predict dynamic characteristics of a flexible wing transport aircraft. The aircraft model is based on NASA Generic Transport Model (GTM) with representative mass and stiffness properties to achieve a wing tip deflection about twice that of a conventional transport aircraft (10% versus 5%). This flexible wing transport aircraft is referred to as an Elastically Shaped Aircraft Concept (ESAC) which is equipped with a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for active wing shaping control for drag reduction. A vortex-lattice aerodynamic model of the ESAC is developed and is coupled with an aeroelastic finite-element model via an automated geometry modeler. This coupled model is used to compute static and dynamic aeroelastic solutions. The deflection information from the finite-element model and the vortex-lattice model is used to compute unsteady contributions to the aerodynamic force and moment coefficients. A coupled aeroelastic-longitudinal flight dynamic model is developed by coupling the finite-element model with the rigid-body flight dynamic model of the GTM.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two- dimensional zonal-mean models while seventeen were three-dimensional models. This was an international effort as seven were from outside the United States. Six transport experiments and five chemistry experiments were designed for various models. Models participating in the transport experiments performed simulations of chemically inert tracers providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace cases including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences in the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results as well as between results from different models, A sizable effort went into preparation of the database of the observations. This included a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind perturbations of the stratosphere.
A Logical Account of Diagnosis with Multiple Theories
NASA Technical Reports Server (NTRS)
Pandurang, P.; Lum, Henry Jr. (Technical Monitor)
1994-01-01
Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define models relations such as choices which result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. The success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and move then to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true and erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model versus analytical and asymptotic solutions, simple 2D and 3D model examples. Develop benchmark analysis of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore the testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as less as possible tuning model variables. Already two tuning variables give enough possibilities to constrain your model well enough with respect to observations. The data fitting sometimes is quite attractive and can take you far from a principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables are greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never put the aim to reach a great accuracy: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should be a numerical model? A model which images any detail of the reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists, who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics such a way to simulate basic geophysical processes and phenomena. Does a particular model have a predictive power? Each numerical model has a predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in believe that they describe dynamic processes of the Earth. Hence a numerical model predicts dynamics of the Earth as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. The inverse modeling can allow to test geodynamic models forward in time using restored (from present-day observations) initial conditions instead of unknown conditions.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features one–loop Zee model with Z{sub 4} symmetry. The second model is the two–loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.
USDA-ARS?s Scientific Manuscript database
To improve climate change impact estimates, multi-model ensembles (MMEs) have been suggested. MMEs enable quantifying model uncertainty, and their medians are more accurate than that of any single model when compared with observations. However, multi-model ensembles are costly to execute, so model i...
A Comparative Analysis on Models of Higher Education Massification
ERIC Educational Resources Information Center
Pan, Maoyuan; Luo, Dan
2008-01-01
Four financial models of massification of higher education are discussed in this essay. They are American model, Western European model, Southeast Asian and Latin American model and the transition countries model. The comparison of the four models comes to the conclusion that taking advantage of nongovernmental funding is fundamental to dealing…
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
Frank R., III Thompson
2009-01-01
Habitat models are widely used in bird conservation planning to assess current habitat or populations and to evaluate management alternatives. These models include species-habitat matrix or database models, habitat suitability models, and statistical models that predict abundance. While extremely useful, these approaches have some limitations.
ERIC Educational Resources Information Center
Cheng, Meng-Fei; Lin, Jang-Long
2015-01-01
Understanding the nature of models and engaging in modeling practice have been emphasized in science education. However, few studies discuss the relationships between students' views of scientific models and their ability to develop those models. Hence, this study explores the relationship between students' views of scientific models and their…
Integrated research in constitutive modelling at elevated temperatures, part 2
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Allen, D. H.
1986-01-01
Four current viscoplastic models are compared experimentally with Inconel 718 at 1100 F. A series of tests were performed to create a sufficient data base from which to evaluate material constants. The models used include Bodner's anisotropic model; Krieg, Swearengen, and Rhode's model; Schmidt and Miller's model; and Walker's exponential model.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
NASA Astrophysics Data System (ADS)
Jang, S.; Moon, Y.; Na, H.
2012-12-01
We have made a comparison of CME-associated shock arrival times at the earth based on the WSA-ENLIL model with three cone models using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as their associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine CME cone parameters (radial velocity, angular width and source location), which are used for input parameters of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possibilities on relatively large errors of the WSA-ENLIL cone model, which may be caused by CME-CME interaction, background solar wind speed, and/or CME density enhancement.
Modeling of the radiation belt megnetosphere in decisional timeframes
Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.
2013-04-23
Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as with a physics based model at many times the speed by developing a surrogate trained to be a surrogate for the physics-based model. The trained model can then beneficially process input data falling within the training range of the surrogate model. The surrogate model can be a feedforward neural network and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles. Accordingly, a surrogate model having a closed drift shell can be used from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden layer units and at least one output unit for the value of L*.
Cowell, Rosemary A; Bussey, Timothy J; Saksida, Lisa M
2012-11-01
We describe how computational models can be useful to cognitive and behavioral neuroscience, and discuss some guidelines for deciding whether a model is useful. We emphasize that because instantiating a cognitive theory as a computational model requires specification of an explicit mechanism for the function in question, it often produces clear and novel behavioral predictions to guide empirical research. However, computational modeling in cognitive and behavioral neuroscience remains somewhat rare, perhaps because of misconceptions concerning the use of computational models (in particular, connectionist models) in these fields. We highlight some common misconceptions, each of which relates to an aspect of computational models: the problem space of the model, the level of biological organization at which the model is formulated, and the importance (or not) of biological plausibility, parsimony, and model parameters. Careful consideration of these aspects of a model by empiricists, along with careful delineation of them by modelers, may facilitate communication between the two disciplines and promote the use of computational models for guiding cognitive and behavioral experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ting, Eric; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents a static aeroelastic model and longitudinal trim model for the analysis of a flexible wing transport aircraft. The static aeroelastic model is built using a structural model based on finite-element modeling and coupled to an aerodynamic model that uses vortex-lattice solution. An automatic geometry generation tool is used to close the loop between the structural and aerodynamic models. The aeroelastic model is extended for the development of a three degree-of-freedom longitudinal trim model for an aircraft with flexible wings. The resulting flexible aircraft longitudinal trim model is used to simultaneously compute the static aeroelastic shape for the aircraft model and the longitudinal state inputs to maintain an aircraft trim state. The framework is applied to an aircraft model based on the NASA Generic Transport Model (GTM) with wing structures allowed to flexibly deformed referred to as the Elastically Shaped Aircraft Concept (ESAC). The ESAC wing mass and stiffness properties are based on a baseline "stiff" values representative of current generation transport aircraft.
Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara
2016-05-09
A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time tRZ′ and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear model. The accuracy of the QE model was approximately 2–6 times better than the DE model and 18–540 times better than the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
Modelling daily water temperature from air temperature for the Missouri River.
Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana
2018-01-01
The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. It is a complex problem to accurately estimate water temperature. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to machine learning models. Analyzing the three standard models, the stochastic model clearly outperforms the standard linear model and nonlinear model. All the three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm(2) in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine load model's effectiveness and accuracy. Statistical analysis, instead of visual check, quantifies the load model's accuracy, and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as a guidance to systematically examine load models for utility engineers and researchers. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
NASA Astrophysics Data System (ADS)
Benettin, G.; Pasquali, S.; Ponno, A.
2018-05-01
FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β -model, perturbations of Toda include the usual α +β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ . More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ɛ (the specific energy E / N), and find, for all models, asymptotic power laws χ ˜eq Cɛ ^a, C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β -model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
An improved gray lattice Boltzmann model for simulating fluid flow in multi-scale porous media
NASA Astrophysics Data System (ADS)
Zhu, Jiujiang; Ma, Jingsheng
2013-06-01
A lattice Boltzmann (LB) model is proposed for simulating fluid flow in porous media by allowing the aggregates of finer-scale pores and solids to be treated as 'equivalent media'. This model employs a partially bouncing-back scheme to mimic the resistance of each aggregate, represented as a gray node in the model, to the fluid flow. Like several other lattice Boltzmann models that take the same approach, which are collectively referred to as gray lattice Boltzmann (GLB) models in this paper, it introduces an extra model parameter, ns, which represents a volume fraction of fluid particles to be bounced back by the solid phase rather than the volume fraction of the solid phase at each gray node. The proposed model is shown to conserve the mass even for heterogeneous media, while this model and that model of Walsh et al. (2009) [1], referred to the WBS model thereafter, are shown analytically to recover Darcy-Brinkman's equations for homogenous and isotropic porous media where the effective viscosity and the permeability are related to ns and the relaxation parameter of LB model. The key differences between these two models along with others are analyzed while their implications are highlighted. An attempt is made to rectify the misconception about the model parameter ns being the volume fraction of the solid phase. Both models are then numerically verified against the analytical solutions for a set of homogenous porous models and compared each other for another two sets of heterogeneous porous models of practical importance. It is shown that the proposed model allows true no-slip boundary conditions to be incorporated with a significant effect on reducing errors that would otherwise heavily skew flow fields near solid walls. The proposed model is shown to be numerically more stable than the WBS model at solid walls and interfaces between two porous media. The causes to the instability in the latter case are examined. The link between these two GLB models and a generalized Navier-Stokes model [2] for heterogeneous but isotropic porous media are explored qualitatively. A procedure for estimating model parameter ns is proposed.
NASA Astrophysics Data System (ADS)
Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.
2014-12-01
In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a modeling framework's native component interface. (3) Create semantic mappings between modeling frameworks that support semantic mediation. This third goal involves creating a crosswalk between the CF Standard Names and the CSDMS Standard Names (a set of naming conventions). This talk will summarize progress towards these goals.