Mapping Pluto's Temperature Distribution Through Twenty Years of Stellar Occultations
NASA Astrophysics Data System (ADS)
Zangari, Amanda; Binzel, R. P.; Person, M. J.
2012-10-01
Multi-chord, high signal-to-noise Pluto occultations have been observed several times over the past two decades, including events in 1988, 2002, 2006, 2007, 2010 and 2011 (Elliot et al. 1989, 2003, 2007; Person et al. 2008, 2010, 2011). We fit separate immersion and emersion occultation light-curve models to each of the individual light curves obtained from these efforts. Asymmetries in the light curves result in the half-light temperatures for opposite sides of a single chord to differ by up to 20 Kelvin in the largest case. The temperature difference for each chord is consistent using both isothermal (b=0) and non-isothermal (e.g. b=-2.2) models based on the methodology described by Elliot & Young (1992). We examine the relationship between the location of immersion and emersion points on Pluto and these temperatures at the half-light radius and will present results for correlations between these location/temperature data and surface composition maps, Pluto geometry, and accumulated insolation patterns. This work was supported by NASA Planetary Astronomy Grant to MIT (NNX10AB27G), and NSF Astronomy and Astrophysics Grant to MIT (0707609). The authors would like to acknowledge the late Professor James L. Elliot for his efforts in beginning this work. References: Elliot, J. L., Dunham, E. W., Bosh, A. S., et al. 1989, Icarus, 77,148 Elliot, J. L., Ates, A., Babcock, B. A., et al. 2003, Nature, 424,165 Elliot, J. L., Person, M. J., Gulbis, A. A. S., et al. 2007, AJ, 134, 1 Elliot, J. L., & Young, L. A. 1992, AJ, 103, 991. Person, M. J., Elliot, J. L., Gulbis, A. A. S., et al. 2008, AJ, 136, 1510 Person, M. J., Elliot, J. L., Bosh, A. S., et al. 2010, Bulletin of the American Astronomical Society, 42, 983 Person, M. J., Dunham, E. W., Bida, T., et al. 2011, EPSC-DPS Joint Meeting 2011, 1374.
Achievement goals, self-handicapping, and performance: a 2 x 2 achievement goal perspective.
Ntoumanis, Nikos; Thøgersen-Ntoumani, Cecilie; Smith, Alison L
2009-11-01
Elliot and colleagues (2006) examined the effects of experimentally induced achievement goals, proposed by the trichotomous model, on self-handicapping and performance in physical education. Our study replicated and extended the work of Elliot et al. by experimentally promoting all four goals proposed by the 2 x 2 model (Elliot & McGregor, 2001), measuring the participants' own situational achievement goals, using a relatively novel task, and testing the participants in a group setting. We used a randomized experimental design with four conditions that aimed to induce one of the four goals advanced by the 2 x 2 model. The participants (n = 138) were undergraduates who engaged in a dart-throwing task. The results pertaining to self-handicapping partly replicated Elliot and colleagues' findings by showing that experimentally promoted performance-avoidance goals resulted in less practice. In contrast, the promotion of mastery-avoidance goals did not result in less practice compared with either of the approach goals. Dart-throwing performance did not differ among the four goal conditions. Personal achievement goals did not moderate the effects of experimentally induced goals on self-handicapping and performance. The extent to which mastery-avoidance goals are maladaptive is discussed, as well as the interplay between personal and experimentally induced goals.
2. WILLIAM ELLIOT CABIN AND OUTBULIDING, CABIN WEST REAR AND ...
2. WILLIAM ELLIOT CABIN AND OUTBULIDING, CABIN WEST REAR AND NORTH SIDES, OUTBULIDING WEST FRONT AND NORTH SIDE - Liberty Historic District, William Elliot Cabin, Route 2, Cle Elum, Liberty, Kittitas County, WA
1. WILLIAM ELLIOT CABIN AND OUTBUILDING, CABIN EAST FRONT AND ...
1. WILLIAM ELLIOT CABIN AND OUTBUILDING, CABIN EAST FRONT AND SOUTH SIDE, OUTBUILDING EAST REAR AND SOUTH SIDES - Liberty Historic District, William Elliot Cabin, Route 2, Cle Elum, Liberty, Kittitas County, WA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
...; Oregon; Howard Elliot Johnson Fuels and Vegetation Management Project EIS AGENCY: Forest Service, USDA... Prineville, Oregon. The project area includes National Forest and Bureau of Land Management System lands in... effects will take place. The Howard Elliot Johnson Fuels and Vegetation Management Project decision and...
75 FR 71638 - Safety Zone; Fleet Week Maritime Festival, Pier 66, Elliot Bay, Seattle, WA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
...-AA00 Safety Zone; Fleet Week Maritime Festival, Pier 66, Elliot Bay, Seattle, WA AGENCY: Coast Guard...) entitled ``Safety Zone; Fleet Week Maritime Festival, Pier 66, Elliot Bay, Seattle, WA'' (Docket number...; Fleet Week Maritime Festival, Pier 66, Elliott Bay, Seattle, Washington. (a) Location. The following...
Competition and Performance: More Facts, More Understanding? Comment on Murayama and Elliot (2012)
ERIC Educational Resources Information Center
Johnson, David W.; Johnson, Roger T.; Roseth, Cary J.
2012-01-01
Murayama and Elliot (2012) made a significant contribution to the literature on competition by presenting the results of 2 meta-analyses and 3 primary studies on the relation between competition and performance. Murayama and Elliot established that in general, there is no relationship between competition and performance. They then made the case…
Pluto's Atmospheric Figure from the P131.1 Stellar Occultation
NASA Astrophysics Data System (ADS)
Person, M. J.; Elliot, J. L.; Clancy, K. B.; Kern, S. D.; Salyk, C. V.; Tholen, D. J.; Pasachoff, J. M.; Babcock, B. A.; Souza, S. P.; Ticehurst, D. R.; Hall, D.; Roberts, L. C., Jr.; Bosh, A. S.; Buie, M. W.; Dunham, E. W.; Olkin, C. B.; Taylor, B.; Levine, S. E.; Eikenberry, S. S.; Moon, D.-S.; Osip, D. J.
2003-05-01
The stellar occultation by Pluto of the 15th magnitude star designated P131.1 (McDonald and Elliot, AJ, 119, 1999) on 2002 August 21 (UT) provided the first significant chance to compare Pluto's atmospheric structure to that determined from the 1988 occultation of P8 (Millis, et al., Icarus, 105, 282). The P131.1 occultation was observed from several stations in Hawaii and the western United States (Elliot et al., Nature, in press, 2003). Numerous occultation chords were obtained enabling us to examine Pluto's atmospheric figure. The light curves from the observations were analyzed together in the occultation coordinate system of Elliot et al., (AJ, 106, 2544). The Mauna Kea and Lick datasets straddle the center of Pluto's figure, providing strong constraints on model fits to cross sections of the atmospheric shape. In 1988, Millis (et al., Icarus, 105, 282) did not report any deviation from sphericity in Pluto's atmospheric figure. From the 2002 data, Pluto;s isobars at the radii probed by the occultation ( 1250 km) appear to be distorted from a circular cross-section. Least-squares fits to this cross-section by elliptical models reveal ellipticities in the range 0.05-0.08 although the shape may be more complex than ellipsoidal. The orientation of the distortion appears uncorrelated with Pluto;s rotational axis. Taken at face value, this ellipticity could imply wind speeds of up to twice the sonic speed ( 200 m/s), which would be difficult to explain. Similar distortions have been reported for Triton's atmosphere (Elliot, J. L., et al., Icarus 148, 347). This work has been supported in part by Research Corporation, the Air Force Research Laboratory, NSF, and NASA.
Constraints on Pluto's Hazes from 2-Color Occultation Lightcurves
NASA Astrophysics Data System (ADS)
Hartig, Kara; Barry, T.; Carriazo, C. Y.; Cole, A.; Gault, D.; Giles, B.; Giles, D.; Hill, K. M.; Howell, R. R.; Hudson, G.; Loader, B.; Mackie, J. A.; Olkin, C. B.; Rannou, P.; Regester, J.; Resnick, A.; Rodgers, T.; Sicardy, B.; Skrutskie, M. F.; Verbiscer, A. J.; Wasserman, L. H.; Watson, C. R.; Young, E. F.; Young, L. A.; Buie, M. W.; Nelson, M.
2015-11-01
The controversial question of aerosols in Pluto's atmosphere first arose in 1988, when features in a Pluto occultation lightcurve were alternately attributed to haze opacity (Elliot et al. 1989) or a thermal inversion (Eshleman 1989). A stellar occultation by Pluto in 2002 was observed from several telescopes on Mauna Kea in wavelengths ranging from R- to K-bands (Elliot et al. 2003). This event provided compelling evidence for haze on Pluto, since the mid-event baseline levels were systematically higher at longer wavelengths (as expected if there were an opacity source that scattered more effectively at shorter wavelengths). However, subsequent occultations in 2007 and 2011 showed no significant differences between visible and IR lightcurves (Young et al. 2011).The question of haze on Pluto was definitively answered by direct imaging of forward-scattering aerosols by the New Horizons spacecraft on 14-JUL-2015. We report on results of a bright stellar occultation which we observed on 29-JUN-2015 in B- and H-bands from both grazing and central sites. As in 2007 and 2011, we see no evidence for wavelength-dependent extinction. We will present an analysis of haze parameters (particle sizes, number density profiles, and fractal aggregations), constraining models of haze distribution to those consistent with and to those ruled out by the occultation lightcurves and the New Horizons imaging.References:Elliot, J.L., et al., "Pluto's Atmosphere." Icarus 77, 148-170 (1989)Eshleman, V.R., "Pluto's Atmosphere: Models based on refraction, inversion, and vapor pressure equilibrium." Icarus 80 439-443 (1989)Elliot, J.L., et al., "The recent expansion of Pluto's atmosphere." Nature 424 165-168 (2003)Young, E.F., et al., "Search for Pluto's aerosols: simultaneous IR and visible stellar occultation observations." EPSC-DPS Joint Meeting 2011, held 2-7 October 2011 in Nantes, France (2011)
ERIC Educational Resources Information Center
Chen, Lung Hung; Wu, Chia-Huei; Kee, Ying Hwa; Lin, Meng-Shyan; Shui, Shang-Hsueh
2009-01-01
In this study, the hierarchical model of achievement motivation [Elliot, A. J. (1997). Integrating the "classic" and "contemporary" approaches to achievement motivation: A hierarchical model of approach and avoidance achievement motivation. In P. Pintrich & M. Maehr (Eds.), "Advances in motivation and achievement"…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-30
... DEPARTMENT OF AGRICULTURE Forest Service Ochoco National Forest, Lookout Mountain Ranger District; Oregon; Howard Elliot Johnson Fuels and Vegetation Management Project EIS Correction In notice document 2010-17803 beginning on page 43138 in the issue of Friday, July 23, 2010 make the following correction...
DOT National Transportation Integrated Search
1979-01-01
The Giles and Elliot discriminant functions diagnosing sex and race from cranial measurements were tested on a series of forensically examined crania of known sex and race. Of 52 crania of known sex, 46 (88%) were correctly diagnosed. Racial diagnose...
Occultation Lightcurves for Selected Pluto Volatile Transport Models
NASA Astrophysics Data System (ADS)
Young, L. A.
2004-11-01
The stellar occultations by Pluto in 1988 and 2002 are demonstrably sensitive to changes in Pluto's atmosphere near one microbar (Elliot and Young 1992, AJ 103, 991; Elliot et al. 2003, Nature 424, 165; Sicardy 2003, Nature 424, 168). However, Pluto volatile-transport models focus on the changes in the atmospheric pressure at the surface (e.g., Hansen and Paige 1996, Icarus 20, 247; Stansberry and Yelle 1999, Icarus 141, 299). What's lacking is a connection between predictions about the surface properties and either temperature and pressure profiles measurable from stellar occultations, or the occultation light curve morphology itself. Radiative-conductive models can illuminate this connection. I will illustrate how Pluto's changing surface pressure, temperature, and heliocentric distance may affect occultation light curves for a selection of existing volatile transport models. Changes in the light curve include the presence or absence of an observable ``kink'' (or departure from an isothermal light curve), the appearance of non-zero minimum flux levels, and the detectability of the solid surface. These light curves can serve as examples of what we may anticipate during the upcoming Pluto occultation season, as Pluto crosses the galactic plane.
NASA Astrophysics Data System (ADS)
Rahman, M. S.; Hoover, F. A.; Bowling, L. C.
2017-12-01
Elliot Ditch is an urban/urbanizing watershed located in the city of Lafayette, IN, USA. The city continues to struggle with stormwater management and combined sewer overflow (CSO) events. Several best-management practices (BMP) such as rain gardens, green roofs, and bioswales have been implemented in the watershed, but the level of adoption needed to achieve meaningful impact is currently unknown. This study's goal is to determine what level of BMP coverage is needed to impact water quality, whether meaningful impact is determined by achieving water quality targets or statistical significance. A power analysis was performed using water quality data for total suspended solids (TSS), E.coli, total phosphorus (TP) and nitrate (NO3-N) from Elliot Ditch from 2011 to 2015. The minimum detectable difference (MDD) was calculated as the percent reduction in load needed to detect a significant change in the watershed. The water quality targets were proposed by stakeholders as part of a watershed management planning process. The water quality targets and the MDD percentages were then compared to simulated load reductions due to BMP implementation using the Long-term Hydrologic Impact Assessment-Low Impact Development (LTHIA-LID) model. Seven baseline model scenarios were simulated by implementing the maximum number of each of six types of BMPs (rain barrels, permeable patios, green roofs, grassed swale/bioswales, bioretention/rain gardens, and porous pavement), as well as all the practices combined in the watershed. These provide the baseline for targeted implementation scenarios designed to determine if statistically and physically meaningful load reductions can be achieved through BMP implementation alone.
Validity and Reliability of Trichotomous Achievement Goal Scale
ERIC Educational Resources Information Center
Ilker, Gokce Erturan; Arslan, Yunus; Demirhan, Giyasettin
2011-01-01
The Trichotomous Achievement Goal Scale was developed by Agbuga and Xiang (2008) by including selected items from the scales of Duda and Nicholls (1992), Elliot (1999), and Elliot and Church (1997) and adapting them into Turkish. The scale consists of 18 items, and students rated each item on a 7-point Likert scale. To ascertain the validity and…
78 FR 38582 - Safety Zones; Multiple Firework Displays in Captain of the Port, Puget Sound Zone
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-27
... Holmes Harbor, Elliot Bay Pier 90, and Southeast of Alki Point Light (approx. 1500 yds.) for various... from coming too close to the fireworks display and the associated hazards. C. Discussion of the Final... Elliot Bay, Pier 90; and Tuxedo and Tennis Shoes Event on July 20, 2013, near Alki Point Light. All...
Publication Bias in "Red, Rank, and Romance in Women Viewing Men," by Elliot et al. (2010)
ERIC Educational Resources Information Center
Francis, Gregory
2013-01-01
Elliot et al. (2010) reported multiple experimental findings that the color red modified women's ratings of attractiveness, sexual desirability, and status of a photographed man. An analysis of the reported statistics of these studies indicates that the experiments lack sufficient power to support these claims. Given the power of the experiments,…
ERIC Educational Resources Information Center
Nolan, Lucinda A.
2009-01-01
An impetus of the Religious Education Association (REA) toward becoming an actively intercultural and interreligious agency emerged in the third decade of its existence. This article explores this period through an examination of the involvement of the REA members, Father John Elliot Ross and others (1884-1946) in a series of seminars conducted by…
ERIC Educational Resources Information Center
Barkoukis, Vassilis; Ntoumanis, Nikos; Nikitaras, Nikitas
2007-01-01
Background: It is commonly assumed that there is conceptual equivalence between the task and ego achievement goals proposed by Nicholl's (1989) dichotomous achievement goal theory (Nicholls, 1989), and the mastery and performance approach goals advanced by Elliot's (1997) trichotomous hierarchical model of approach and avoidance achievement…
Comparing Three Models of Achievement Goals: Goal Orientations, Goal Standards, and Goal Complexes
ERIC Educational Resources Information Center
Senko, Corwin; Tropiano, Katie L.
2016-01-01
Achievement goal theory (Dweck, 1986) initially characterized mastery goals and performance goals as opposites in a good-bad dualism of student motivation. A later revision (Harackiewicz, Barron, & Elliot, 1998) contended that both goals can provide benefits and be pursued together. Perhaps both frameworks are correct: Their contrasting views…
ERIC Educational Resources Information Center
Teixeira dos Santos, Regina Antunes; Hentschke, Liane
2010-01-01
In academic education, undergraduate students develop musical knowledge through the preparation of a repertoire within the western classical music tradition during a certain period of formal music practice. During the practice, the student makes choices and deals with personal strategies that assume forms of thinking and, therefore, differentiated…
Motivational Influences of Using Peer Evaluation in Problem-Based Learning in Medical Education
ERIC Educational Resources Information Center
Abercrombie, Sara; Parkes, Jay; McCarty, Teresita
2015-01-01
This study investigates the ways in which medical students' achievement goal orientations (AGO) affect their perceptions of learning and actual learning from an online problem-based learning environment, Calibrated Peer Review™. First, the tenability of a four-factor model (Elliot & McGregor, 2001) of AGO was tested with data collected from…
Sustaining the US Air Force’s Force Support Career Field through Officer Workforce Planning
2012-07-01
1983, cited in Barney, 1991, p. 101. 33 Afiouni, 2007, p. 125. 34 Barney, 1991; Collis & Montgomery, 1995, cited in Elliot ,p. 48. 35 Kaplan & Norton... Elliot , Hamish G.H., “SHRM Best-Practices & Sustainable Competitive Advantage: A Resource-Based View,” The Graduate Management Review, pp. 43-57...and Social Sciences, 2007. Hudson, W., Intellectual Capital: How to Build It, Enhance It, Use It, New York: John Wiley, 1993. Kaplan , R.S
Achievement goals as mediators of the relationship between competence beliefs and test anxiety.
Putwain, David W; Symes, Wendy
2012-06-01
Previous work suggests that the expectation of failure is related to higher test anxiety and achievement goals grounded in a fear of failure. To test the hypothesis, based on the work of Elliot and Pekrun (2007), that the relationship between perceived competence and test anxiety is mediated by achievement goal orientations. Self-report data were collected from 275 students in post-compulsory education following courses in A Level Psychology. Competence beliefs were inversely related to the worry and tension components of test anxiety, both directly and indirectly through a performance-avoidance goal orientation. A mastery-avoidance goal orientation offered an indirect route from competence beliefs to worry only. These findings provide partial support for Elliot and Pekrun's (2007) model. Although significant mediating effects were found for mastery-avoidance and performance-avoidance goals, they were small and there may be other mechanisms to account for the relations between competence beliefs and test anxiety. ©2011 The British Psychological Society.
Alonso-Tapia, Jesús; Huertas, Juan A; Ruiz, Miguel A
2010-05-01
In a historical revision of the achievement goal construct, Elliot (2005) recognized that there is little consensus on whether the term "goal" in "achievement goal orientations" (GO) is best represented as an "aim", as an overarching orientation encompassing several "aims", or as a combination of aims and other processes -self-regulation, etc.-. Elliot pointed also that goal theory research provides evidence for different models of GO. As there were no consensus on these issues, we decided to get evidence about the nature and structure of GO, about the role of gender differences in the configuration of such structure, and about relations between GO, expectancies, volitional processes and achievement. A total of 382 university students from different faculties of two public universities of Madrid (Spain) that voluntarily accepted to fill in a questionnaire that assessed different goals, expectancies and self-regulatory processes participated in the study. Scales reliability, confirmatory factor analyses, multiple-group analyses, and correlation and regression analyses were carried out. Results support the trichotomous model of GO, the consideration of GO as a combination of aims and other psychological processes, showed some gender differences and favour the adoption of a multiple goal perspective for explaining students' motivation.
ERIC Educational Resources Information Center
Calisto, George W.
2013-01-01
This study sought to integrate Dweck and Leggett's (1988) self-theories of intelligence model (i.e., the view that intelligence is either fixed and unalterable or changeable through hard work and effort) with Elliot and Dweck's (1988) achievement goal theory, which explains why some people are oriented towards learning and others toward…
The effect of workshop groups on achievement goals and performance in biology: An outcome evaluation
NASA Astrophysics Data System (ADS)
Born, Wendi Kay
This two-year quasi-experiment evaluated the effect of peer-led workshop groups on performance of minority and majority undergraduate biology students in a three-course series and investigated motivational explanations for performance differences. The workshop intervention used was modeled after a program pioneered by Treisman (1992) at the University of California. Majority volunteers randomly assigned to workshops (n = 61) performed between 1/2 and 1 standard deviation better than those assigned to the control group (n = 60; p < .05) in each quarter without spending more time studying. During Quarter 1, workshop minority students (n = 25) showed a pattern of increasing exam performance in comparison to historic control minority students (n = 21), who showed a decreasing pattern (p < .05). Although sex differences in biology performance were a focus of investigation, none were detected. Motivational predictions derived from the hierarchical model of approach and avoidance achievement motivation (Elliot & Church, 1997) were partially supported. Self-report survey measures of achievement goals, modeled after those used by Elliot and colleagues, were requested from all enrolled students. Volunteers (n = 121) reported higher average levels of approach and avoidance goals than nonvolunteers (n = 439; p < .05) and the relationship of goals to performance was moderated by volunteer status. Performance of volunteers was negatively related to avoidance of failure goals (r = .41, p < .01) and unrelated to performance approach goals. Performance of nonvolunteers was unrelated to avoidance of failure goals and positively related to performance approach goals (r = .28, p < .01). Mastery goals were unrelated to performance for all students. Results were inconsistent with Dweck and Leggett's (1988) theory of mastery vs. performance orientation, but were similar to results found by Elliot and colleagues. Contrary to hypotheses, motivational goals did not mediate performance for any group of students. Results suggest that challenge interventions can be highly beneficial for both majority and minority participants and that institutions can promote excellence by incorporating workshop programs like the one described here. These interventions have been shown to be more effective and cost less than remedial interventions.
2013-06-01
of civilizations. However, with the 1969 triumphal success of the Apollo 11 mission, the dynamics at the core of the Exploration Model shifted...AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S) 11 . SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION/AVAILABILITY STATEMENT...Valentin Bondarenko Training (31 Oct 64) Theodore Freeman Training (28 Feb 66) Elliot See, Charles Bassett Apollo 1 (27 Jan 67) Gus Grissom
The Software Maintenance Spectrum: Using More than Just New Toys
2000-04-01
Deitel & Deitel, How to Program Java, Prentice Hall, Upper Saddle River, NJ, 1998. Bjarne Stroustrup, The C++ Programming Language, ATT Bell Labs, New... to Program Java, Prentice Hall, Upper Saddle River, NJ, 1998. Dershem, Herbert L and Michael J. Jipping, Programming Languages: Structures and Models...Chikofsky, Elliot and James Cross. Reverse Engineering and Design Recovery: A Taxonomy. IEEE Software, 7(1):13-17 (Jan 1990). Deitel & Deitel, How
ERIC Educational Resources Information Center
Kuroda, Yuji; Sakurai, Shigeo
2011-01-01
This longitudinal study investigated whether depression among early adolescents (aged 12-14 years, N = 116; 65 girls) can be predicted by interactions between social goal orientations and interpersonal stress. Based on Kuroda and Sakurai (2001), this study applied Elliot and Harackiewicz's (1996) trichotomous framework of achievement goals to…
The first megatheropod tracks from the Lower Jurassic upper Elliot Formation, Karoo Basin, Lesotho
Bordy, E. M.; Abrahams, M.; Knoll, F.; McPhee, B. W.
2017-01-01
A palaeosurface with one megatheropod trackway and several theropod tracks and trackways from the Lower Jurassic upper Elliot Formation (Stormberg Group, Karoo Supergroup) in western Lesotho is described. The majority of the theropod tracks are referable to either Eubrontes or Kayentapus based on their morphological characteristics. The larger megatheropod tracks are 57 cm long and have no Southern Hemisphere equivalent. Morphologically, they are more similar to the Early Jurassic Kayentapus, as well as the much younger Upper Cretaceous ichnogenus Irenesauripus, than to other contemporaneous ichnogenera in southern Africa. Herein they have been placed within the ichnogenus Kayentapus and described as a new ichnospecies (Kayentapus ambrokholohali). The tracks are preserved on ripple marked, very fine-grained sandstone of the Lower Jurassic upper Elliot Formation, and thus were made after the end-Triassic mass extinction event (ETE). This new megatheropod trackway site marks the first occurrence of very large carnivorous dinosaurs (estimated body length >8–9 meters) in the Early Jurassic of southern Gondwana, an evolutionary strategy that was repeatedly pursued and amplified in the following ~135 million years, until the next major biotic crisis at the end-Cretaceous. PMID:29069093
ERIC Educational Resources Information Center
O'Brien, Nancy, Ed.
The articles in this paper explore the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical research applications. Titles of the papers and their authors are as follows: (1) "Task Dynamic Coordination of the Speech Articulators: A Preliminary Model" (Elliot Saltzman); (2) "Some Observations…
Hydrodynamic and Salinity Transport Modeling of the Morganza to the Gulf of Mexico Study Area
2013-08-01
this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official...165 Figure 118. NAFTA Structure...Canal East (S-3) -3.66 (-12) 17.07 (56) 0 N/A N/A Open Open Open Elliot Jones Canal (S-4) -2.44 (-8) 6.10 (20) 0 N/A N/A Open Open Open NAFTA (S-5
The Structure of Triton's Lower Atmosphere
NASA Astrophysics Data System (ADS)
Bosh, Amanda
1995-07-01
With the occultation of Tr148 (McDonald and Elliot, submitted) in August 1995, we have the opportunity to distinguish between two competing models for Triton's lower atmosphere (Tyler et al. 1989; Strobel and Summers 1994). Additionally, we will be acquiring data on an atmosphere that has been predicted to be changing quite rapidly (Hansen and Paige 1992; Spencer and Moore 1992). High quality occultation data sets are crucial for testing these theories and establishing the changing state of Triton's atmosphere.
Mesoscale Characteristics and the Role of Deformation on Ocean Dynamics
1991-06-05
34 Thilus, 41 (A), 416-435, 1989. 3. "Ring Evolution in General Circulation Models from Path Analysis ," J. Geo- phys. Res., 95(C10), 18057-18073, 1990...Loop Curren: (up to three in I year [Elliot,. by Vaukovih and Crt, ,tman (19861. Analysis of drifter data 19821). Lewi.a and Kirwan (1983. 19871...seventh drifter. the Lagrangian data sets along with an analysis and anterpre- number 3354. was entrained in the Loop Current at the tim. tation of
NASA Astrophysics Data System (ADS)
Sciscio, Lara; Bordy, Emese M.
2016-07-01
The Triassic-Jurassic boundary marks a global faunal turnover event that is generally considered as the third largest of five major biological crises in the Phanerozoic geological record of Earth. Determining the controlling factors of this event and their relative contributions to the biotic turnover associated with it is on-going globally. The Upper Triassic and Lower Jurassic rock record of southern Africa presents a unique opportunity for better constraining how and why the biosphere was affected at this time not only because the succession is richly fossiliferous, but also because it contains important palaeoenvironmental clues. Using mainly sedimentary geochemical proxies (i.e., major, trace and rare earth elements), our study is the first quantitative assessment of the palaeoclimatic conditions during the deposition of the Elliot Formation, a continental red bed succession that straddles the Triassic-Jurassic boundary in southern Africa. Employing clay mineralogy as well as the indices of chemical alteration and compositional variability, our results confirm earlier qualitative sedimentological studies and indicate that the deposition of the Upper Triassic and Lower Jurassic Elliot Formation occurred under increasingly dry environmental conditions that inhibited chemical weathering in this southern part of Pangea. Moreover, the study questions the universal validity of those studies that suggest a sudden increase in humidity for the Lower Jurassic record and supports predictions of long-term global warming after continental flood basalt emplacement.
Reconnaissance for trace metals in bed sediment, Wright Patman Lake, near Texarkana, Texas
McKee, Paul W.
2001-01-01
Many contaminants can be introduced into the environment by urban and industrial activities. The drainage area of Wright Patman Lake is influenced by these activities. Among the contaminants associated with urban and industrial activities are trace metals such as arsenic, lead, mercury, and zinc. These contaminants are relatively insoluble in water and commonly are found in stream, lake, and reservoir bottom sediment, especially the clays and silts within the sediment.Wright Patman Lake serves as the major potable water supply for the city of Texarkana and surrounding communities. Texarkana, located in the northeastern corner of Texas and the southwestern corner of Arkansas, had a population of about 56,000 in 1998, which reflects an increase of about 3.4 percent from the 1990 census (Ramos, 1999). Texarkana Water Utilities, which manages the water-treatment facilities for Texarkana, proposes to dredge the lake bed near the water intake in the Elliot Creek arm of Wright Patman Lake. It is possible that arsenic, lead, mercury, and other trace metals might be released into the water if the bed sediment is disturbed. Bed sediment in the Elliot Creek arm of the lake, in particular, could contain trace metals because of its proximity to Red River Army Depot and because industrial land use is prevalent in the headwaters of Elliot Creek.The U.S. Geological Survey (USGS), in cooperation with Reconnaissance for Trace Metals in Bed Sediment, Wright Patman Lake, Near Texarkana, Texas In cooperation with the Texarkana Water Utilities conducted a reconnaissance of Wright Patman Lake to collect bed-sediment samples for analysis of trace metals. This report presents trace metal concentrations in bed-sediment samples collected at six sites along the Elliot Creek arm of the lake, one site each in two adjacent arms, and one site near the dam on June 16, 1999 (fig. 1). One bed-sediment sample was collected at each of the nine sites, and one sediment core was collected at each of two of the sites. Trace metal concentrations are compared to sediment-quality guidelines for the protection of aquatic life and to screening levels based on historical trace metal concentrations in bed sediment of Texas reservoirs.
76 FR 6111 - Disclosure of Payments by Resource Extraction Issuers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... Counsel, Division of Corporation Finance, or Elliot Staffin, Special Counsel in the Office of International Corporate Finance, Division of Corporation Finance, at (202) 551-3290, U.S. Securities and...
NASA Astrophysics Data System (ADS)
Thomas-Osip, J. E.; Elliot, J. L.; Clancy, K. B.
2002-12-01
Multi-wavelength observations of the occultation of P131.1 by Pluto (see Elliot et al., this conference) allow for a re-examination of the possibility of the existence of haze in Pluto's atmosphere. Models of the extinction efficiency of haze particles as a function of wavelength are being used investigate the potential for the existence of haze in the 2002 Pluto atmosphere. The existence of a haze layer in Pluto's atmosphere was postulated to explain the abrupt change in slope seen in the light curve of the 1988 stellar occultation by Pluto (Elliot and Young 1992, AJ, 103, 991). An alternative explanation (Hubbard et al. 1990, Icarus, 84, 1) includes a steep thermal gradient near the surface instead of, or in addition to, a haze layer. Modeling of the growth and sedimentation of photo-chemically produced spherical aerosols (Stansberry et al. 1989, Geophys. Res. Let., 16, 1221) suggested that an appropriate production rate is not sufficient to produce the opacity necessary to account for change in slope found in the 1988 light curve, if it were due solely to spherical particle haze extinction. Recent studies (see for example, Rannou et al. 1995, Icarus, 188, 355 and Thomas-Osip et al. 2002, Icarus, submitted) have shown that it is likely that photochemical hazes on Titan are aggregate in nature. Fractal aggregate particles can have larger extinction efficiencies than equivalent mass spheres of the same material (Rannou et al. 1999, Planet. Space Sci., 47,385). We are, therefore, also re-examining the effect of a haze with an aggregate morphology on modeling of the 1988 occultation observations. This research has been supported in part by NSF Grant AST-0073447 and NASA Grant NAG5-10444.
31. Panoramic shot, Huber Breaker (left), Retail Coal Storage Bins ...
31. Panoramic shot, Huber Breaker (left), Retail Coal Storage Bins (center), Boney Elevator (right) Photographs taken by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
Further Evidence for Increasing Pressure and a Non-spherical Shape in Triton's Atmosphere
NASA Astrophysics Data System (ADS)
Person, M. J.; Elliot, J. L.; McDonald, S. W.; Buie, M. W.; Dunham, E. W.; Millis, R. L.; Nye, R. A.; Olkin, C. B.; Wasserman, L. H.; Young, L. A.; Hubbard, W. B.; Hill, R.; Reitsema, H. J.; Pasachoff, J. M.; Babcock, B. A.; McConnochie, T. M.; Stone, R. C.
2000-10-01
An occultation by Triton of a star denoted as Tr176 by McDonald & Elliot (AJ 109, 1352), was observed on 1997 July 18 from various locations in Australia and North America. After an extensive prediction effort, two complete chords of the occultation were recorded by our PCCD portable data systems. These chords were combined with three others recorded by another group (Sicardy et al., BAAS 30, 1107) to provide an overall geometric solution for Triton's atmosphere at the occultation pressure. A simple circular fit to these five chords yielded a half-light radius of 1439 +/- 10 km, however least squares fitting revealed a significant deviation from the simple circular projection of a spherical atmosphere. The best fitting ellipse (a first order deviation from the circular solution) yielded a mean radius of 1440 +/- 6 km and an ellipticity of 0.040 +/- 0.003. To further characterize the non-spherical solutions to the geometric fits, methods were developed to analyze the data assuming both circular and elliptical profiles. Circular and elliptically focused light curve models corresponding to the best fitting circular and elliptical geometric solutions were fit to the data. Using these light curve fits, the mean pressure at the 1400 km radius (48 km altitude) derived from all the data was 2.23 +/- 0.28 microbar for the circular model and 2.45 +/- 0.32 microbar for the elliptical model. These pressures agree with those for the Tr180 occultation (which occurred a few months later), so these results are consistent with the conclusions of Elliot et al. (Icarus 143, 425) that Triton's surface pressure has increased from 14.0 microbar at the time of the Voyager encounter to 19.0 microbar in 1997. The mean equivalent-isothermal temperature at 1400 km was 43.6 +/- 3.7 K for the circular model and 42.0 +/- 3.6 K for the elliptical model. Within their calculated errors, the equivalent-isothermal temperatures were the same for all Triton latitudes probed.
ERIC Educational Resources Information Center
Cox, David E.; And Others
1991-01-01
Includes "It's Time to Stop Quibbling over the Acronym" (Cox); "Information Rich--Experience Poor" (Elliot et al.); "Supervised Agricultural Experience Selection Process" (Yokum, Boggs); "Point System" (Fraze, Vaughn); "Urban Diversity Rural Style" (Morgan, Henry); "Nonoccupational Supervised Experience" (Croom); "Reflecting Industry" (Miller);…
22. Greenhouse, south elevation. This winter 2002 view was taken ...
22. Greenhouse, south elevation. This winter 2002 view was taken by Joseph Elliot while conducting photographic documentation of the landscape. - John Bartram House & Garden, Greenhouse, 54th Street & LIndbergh Boulevard, Philadelphia, Philadelphia County, PA
From Universal Access to Universal Proficiency.
ERIC Educational Resources Information Center
Lewis, Anne C.
2003-01-01
Panel of five education experts--Elliot Eisner, John Goodlad, Patricia Graham, Phillip Schlechty, and Warren Simons--answer questions related to recent school reform efforts, such as the No Child Left Behind Act, aimed at achieving universal educational proficiency. (PKP)
Eisner's Aesthetic Theory of Evaluation.
ERIC Educational Resources Information Center
Alexander, H. A.
1986-01-01
The author argues that Elliot Eisner's assumptions underpinning his theory of educational evaluation are problematic. Nevertheless, if the role of artistic thinking in the genesis of educational concept were developed more fully, his approach could bear considerable fruit. (MT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clulow, F.V.; Dave, N.K.; Lim, T.P.
1988-07-01
Radium-226 levels in samples from an inactive U tailings site at Elliot Lake, Ontario, Canada, were: 9140 +/- 500 mBq g-1 dry weight in the substrate; 62 +/- 1 mBq g-1 dry weight in rye, Secale cereale, and less than 3.7 mBq g-1 dry weight in oats, Avena sativa, the dominant species established by revegetation of the tailings; and 117 +/- 7 mBq g-1 dry weight in washed and unwashed black cutworm larvae. Concentration ratios were: vegetation to tailings 0.001-0.007; black cutworms to vegetation 3.6 and black cutworms to tailings 0.01. The values are considered too low to be consideredmore » a hazard to herring gulls, Larus argentatus, which occasionally feed on cutworms.« less
Post optimization paradigm in maximum 3-satisfiability logic programming
NASA Astrophysics Data System (ADS)
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with a conundrum of searching the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of enhanced Hopfield network in hastening the Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated, including the Elliot symmetric activation function, Gaussian activation function, Wavelet activation function and Hyperbolic tangent activation function. The performances of these post optimization techniques in accelerating MAX-3SAT logic programming will be discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results depict the Hyperbolic tangent activation function and Elliot symmetric activation function can be used in doing MAX-3SAT logic programming.
33. Coal Fuel Elevator (diagonal in foreground), Fuel Elevator (left), ...
33. Coal Fuel Elevator (diagonal in foreground), Fuel Elevator (left), Fuel Storage Bins (center), and Power Plant (right) Photographs taken by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
53. Retail Pockets, Looking West, date unknown Historic Photograph, Photogapher ...
53. Retail Pockets, Looking West, date unknown Historic Photograph, Photogapher Unknown; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
The Voices of the Documentarist
ERIC Educational Resources Information Center
Utterback, Ann S.
1977-01-01
Discusses T. S. Elliot's essay, "The Three Voices of Poetry" which conceptualizes the position taken by the poet or creator. Suggests that an examination of documentary film, within the three voices concept, expands the critical framework of the film genre. (MH)
How Jupiter's Ring Was Discovered.
ERIC Educational Resources Information Center
Elliot, James; Kerr, Richard
1985-01-01
"Rings" (by astronomer James Elliot and science writer Richard Kerr) is a nontechnical book about the discovery and exploration of ring systems from the time of Galileo to the era of the Voyager spacecraft. One of this book's chapters is presented. (JN)
NASA Astrophysics Data System (ADS)
Elliot, J. L.; Olkin, C. B.
1997-07-01
In 1993 we began a program for observing stellar occultations by Triton with the following objectives: (1) probe Triton's atmosphere in the microbar pressure region for comparison with models based on Voyager data (e.g. Strobel et al., Icarus 120, 266), (2) investigate the predicted seasonal changes in surface pressure (Spencer & Moore, Icarus 99, 261; Hansen & Paige, Icarus 99, 273) and (3) investigate spatial variability of the atmospheric structure. Observations have been successful for three stars, and the results are descibed by Olkin et al. (Icarus, in press) and Elliot et al. (Science, submitted). A large difference between the observations and models in the pressure and temperature at a radius of 1400 km (about 50 km altitude) may be due to seasonal change or inadequacy of the models. Triton's atmosphere has been found to be highly distorted from spherical symmetry, which has been interpreted as evidence for winds near the sonic velocity ( ~ 140 m s(-1) ). Based on current knowledge of Triton's atmosphere just described, our goals for future investigations of Triton's atmosphere are threefold: (i) map the central-flash with multiple chords in order to understand how Triton's atmosphere is distorted, (ii) obtain a light curve of greater S/N than we have at present in order to better establish Triton's temperature and pressure profiles so that present models based on Voyager data can be improved; and (iii) regularly probe Triton's atmosphere (annually if possible) in order to learn how its pressure changes with time. The prospects for observation of more high-quality Triton occultations are bright for the next three years (McDonald & Elliot, AJ 109, 1352), after which the Neptune system moves away from the galactic plane and the frequency of events diminishes. This work was supported, in part, by NASA Grants NAG5-3940 at MIT and NAG2-1078 at Lowell Observatory.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... Blocked Persons (``SDN List'') of the two individuals and four entities identified in this notice the... the SDN List. Individuals 1. MANYIKA, Elliot, P.O. Box 300, Bindura, Zimbabwe; DOB 30 Jul 1955...
Theme: Focus on Student Teaching.
ERIC Educational Resources Information Center
Agricultural Education Magazine, 1997
1997-01-01
Includes "Student Teaching" (Whittington); "Decision to Become an Agriculture Teacher" (Cherrie); "Residential Student Teaching Experience in Environmental Education" (Bires, Naugle); "Now that I Am Older and Wiser" (Perey, Elliot, Foster); "Student Teaching" (Connors, Mundt); "Positive Experiences and Problems Encountered during Student Teaching"…
NASA Astrophysics Data System (ADS)
Young, Eliot F.; Young, L. A.; Buie, M.
2007-10-01
The size of Pluto has been difficult to measure. Stellar occultations by Pluto have not yet probed altitudes lower than 1198 km, assuming the clear atmosphere model of Elliot, Person and Qu (2003). Differential refraction by Pluto's atmosphere attenuates the light from an occulted star to a level that is indistinguishable from the zero-level baseline long before Pluto's solid surface is a factor. Since Charon has no detectable atmosphere, its radius was well determined from a stellar occultation in 2005 (Gulbis et al. 2006, Sicardy et al. 2006). Combined with the mutual event photometry (Charon transited Pluto every 6.38 days between 1986 through 1992) - for which differential refraction is a negligible effect - the well-known radius of Charon translates into a more accurate radius for Pluto's solid surface. Our preliminary solid radius estimate for Pluto is 1161 km. We will discuss error bars and the correlations of this determination with Pluto albedo maps. We will also discuss the implications for Pluto's thermal profile, surface temperature and pressure, and constraints on the presence of a haze layer. This work is funded by NASA's Planetary Astronomy program. References Elliot, J.L., Person, M.J., & Qu, S. 2003, "Analysis of Stellar Occultation Data. II. Inversion, with Application to Pluto and Triton." AJ, 126, 1041. Gulbis, A.A.S. et al. 2006, "Charon's radius and atmospheric constraints from observations of a stellar occultation." Nature, 49, 48. Sicardy, B. et al. 2006, "Charon's size and an upper limit on its atmosphere from a stellar occultation." Nature, 49, 52.
Life in the Twilight Zone: The Persistence of Myth in Art Education.
ERIC Educational Resources Information Center
Pariser, David
1988-01-01
Focuses on the article by Elliot W. Eisner (1971) in which Eisner identified seven myths held by art educators. Considers which myths are still alive today and the reasons that art education seems doomed to always have myths. (GEA)
61. Picking Floor, Large Pile of Waste Rock and Wood ...
61. Picking Floor, Large Pile of Waste Rock and Wood date unknown Historic Photograph, Photographer Unknown; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
47. Northwest Side of Breaker, Rock Belt Line (foreground), date ...
47. Northwest Side of Breaker, Rock Belt Line (foreground), date unknown Historic Photograph, Photograher Unknown; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
Thylakoid membrane landscape in the sixties: a tribute to Andrew Benson.
Anderson, Jan M
2007-05-01
Prior to the 1960s, the model for the molecular structure of cell membranes consisted of a lipid bilayer held in place by a thin film of electrostatically-associated protein stretched over the bilayer surface: (the Danielli-Davson-Robertson "unit membrane" model). Andrew Benson, an expert in the lipids of chloroplast thylakoid membranes, questioned the relevance of the unit membrane model for biological membranes, especially for thylakoid membranes, instead of emphasizing evidence in favour of hydrophobic interactions of membrane lipids within complementary hydrophobic regions of membrane-spanning proteins. With Elliot Weier, Benson postulated a remarkable subunit lipoprotein monolayer model for thylakoids. Following the advent of freeze fracture microscopy and the fluid lipid-protein mosaic model by Singer and Nicolson, the subunits, membrane-spanning integral proteins, span a dynamic lipid bilayer. Now that high resolution X-ray structures of photosystems I and II are being revealed, the seminal contribution of Andrew Benson can be appreciated.
75 FR 80977 - Disclosure of Payments by Resource Extraction Issuers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-23
...: Tamara Brightwell, Senior Special Counsel, Division of Corporation Finance, or Elliot Staffin, Special Counsel in the Office of International Corporate Finance, Division of Corporation Finance, at (202) 551... issuers would be required to disclose taxes on corporate profits, corporate income, and production and...
Ellie Mannette: Master of the Steel Drum.
ERIC Educational Resources Information Center
Svaline, J. Marc
2001-01-01
Presents an interview with Elliot ("Ellie") Mannette who has played a major role in the development and application of steel drums. States that he has spent most of his life designing and teaching the steel drums. Covers interview topics and background information on Mannette. (CMK)
Allain, Ronan
2016-01-01
Melanorosaurus is a genus of basal sauropodomorph that currently includes two species from Southern Africa. In this paper, we redescribe the holotype femur of Melanorosaurus thabanensis from the Elliot Formation of Lesotho, as well as associated remains. The stratigraphic position of this taxon is reviewed, and it is clear that it comes from the Lower Elliot Formation being, therefore, Late Triassic in age, and not Early Jurassic as originally described. The knowledge of the anatomy of the basal sauropodomorph of Thabana Morena is enhanced by the description of six new skeletal elements from the type locality. The femur and the ilium from Thabana Morena are diagnostic and characterized by unusual proportions. The first phylogenetic analysis including both this specimen and Melanorosaurus is conducted. This analysis leads to the conclusion that the femur described in the original publication of Melanorosaurus thabanensis can no longer be referred to Melanorosaurus. For these reasons, we hereby create Meroktenos gen. nov. to encompass Meroktenos thabanensis comb. nov. PMID:26855874
75 FR 81957 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... in meters (MSL) Effective Modified Levy County, Florida, and Incorporated Areas Bronson North Ditch... nearest 0.1 meter. ** BFEs to be changed include the listed downstream and upstream BFEs, and include BFEs... upstream of Elliots Run Road. Unnamed Tributary to Shoup Run..... Approximately 400 feet None +1139...
A CBO Paper: Educational Attainment and Compensation of Enlisted Personnel
2004-02-01
of CBO’s National Security Division wrote the paper under the supervision of Deborah Clay- Mendez and J. Michael Gilmore. At CBO, Daniel Frisk...and Carol Frost provided programming assis- tance for the statistical analyses. Nabeel Alsalam, Robert Dennis, Cary Elliot, Seth Giertz, Roger
52. View Looking East from Power Plant Silos, Retail Pockets ...
52. View Looking East from Power Plant Silos, Retail Pockets under Construction, dated 2 October 1956 Historic Photograph, Photographer Unknown; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
ERIC Educational Resources Information Center
Cuero, Kimberley K.; Bonner, Jennifer; Smith, Brittaney; Schwartz, Michelle; Touchstone, Rose; Vela, Yvonne
2008-01-01
Based on Elliot Eisner's notions of multiple forms of representation and Rosenblatt's aesthetic/efferent responses to reading, a teacher educator/researcher had her undergraduate students explore their connections, using aesthetic representations, to a course entitled "Reading Comprehension". Each aesthetic representation revealed the complexities…
Musicing, Materiality, and the Emotional Niche
ERIC Educational Resources Information Center
Krueger, Joel
2015-01-01
Building on Elliot and Silverman's (2015) embodied and enactive approach to musicing, I argue for an extended approach: namely, the idea that music can function as an environmental scaffolding supporting the development of various experiences and embodied practices that would otherwise remain inaccessible. I focus especially on the materiality of…
Status Report on Speech Research, July-December 1993. SR-115/116.
ERIC Educational Resources Information Center
Fowler, Carol A., Ed.
This publication (one of a series) contains 12 articles which report the status and progress of studies on the nature of speech, instruments for its investigation, and practical applications. Articles in the publication are: "Dynamics and Coordinate Systems on Skilled Sensorimotor Activity" (Elliot L. Saltzman); "Speech Motor…
Making Creative Schedules Work in Middle and High Schools
ERIC Educational Resources Information Center
Merenbloom, Elliot Y.; Kalina, Barbara A.
2006-01-01
Today's schools are responding to the pressing need for positive student-teacher relationships that promote successful learning and prevent dropouts and violence. To meet this challenge, many secondary schools are reorganizing around smaller schools or "houses" and structuring longer blocks of learning time. Authors Elliot Y. Merenbloom and…
Runoff Characterization and Variations at McMurdo Station, Antarctica
2014-05-13
Boulevard Arlington, VA 22230 Laura Elliot and Corey Chan Antarctic Support Contract 7400 S. Tucson Way Centennial , CO 08112 Michael Diamond...then along the road and down Hut Point Road. Sub-basin 2 has the largest area and encompasses the majority of the snowfield and the depression above
51. South side of Breaker, Retail Pockets under Construction Historic ...
51. South side of Breaker, Retail Pockets under Construction Historic Photograph, Photographer Unknown Originally taken by Glen Alden Safety Department, 18 August 1954; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
Selecting Supreme Court Justices: A Dialogue
ERIC Educational Resources Information Center
Landman, James H.
2006-01-01
The ABA Division for Public Education asked a panel of experts--Joyce Baugh, Mary Dudziak, Michael Gerhardt, Timothy Johnson, John Maltese, Mark Moller, Jason Roberts, Elliot Slotnick, and David Yalof--to respond to questions about the judicial nomination process. These questions touched on the balance between the president and the Senate, the…
NASA Astrophysics Data System (ADS)
Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae
In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes; unicast, multicast, or broadcast pollings. In unicast polling, the BS polls each individual MS to allow to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth request in IEEE 802.16e networks over Gilbert-Elliot error channel. We derive the probability distribution for the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. By using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability which combines two losses: packet loss due to the excess of maximum retransmission and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, which is defined as the ratio of the number of polling slots used for the MS's successful transmission to the total number of polling slots used by the MS over a long run time. Analysis results are shown to well match with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
Dielectric study of chalcogenide (Se80Te20)94Ge6 glass
NASA Astrophysics Data System (ADS)
Sharma, Neha; Patial, Balbir Singh; Thakur, Nagesh
2018-04-01
In the present study, dielectric characteristics specifically dielectric constant (ɛ'), dielectric loss (ɛ″) and AC conductivity (σAC) have been investigated for chalcogenide (Se80Te20)94Ge6 glass in the frequency range from 1Hz to 1MHz and within the temperature range from 300 K to 380 K. ɛ'(ω) and ɛ″(ω) are found to be frequency and temperature dependent. This behaviour is interpreted on the basis of Guintini's theory of dielectric dispersion. The investigated glass obeys the power law ωs (s<1) and decreases as temperature rises. The obtained results are discussed in terms of the correlation barrier hopping (CBH) model proposed by Elliot.
Adie, James W; Duda, Joan L; Ntoumanis, Nikos
2008-06-01
Grounded in the 2x2 achievement goal framework (Elliot & McGregor, 2001), a model was tested examining the hypothesized relationships between approach and avoidance (mastery and performance) goals, challenge and threat appraisals of sport competition, and positive and negative indices of well-being (i.e., self-esteem, positive, and negative affect). A further aim was to determine the degree to which the cognitive appraisals mediated the relationship between the four achievement goals and the indicators of athletes' welfare. Finally, measurement and structural invariance was tested with respect to gender in the hypothesized model. An alternative model was also estimated specifying self-esteem as an antecedent of the four goals and cognitive appraisals. Four hundred and twenty-four team sport participants (Mage=24.25) responded to a multisection questionnaire. Structural equation modeling analyses provided support for the hypothesized model only. Challenge and threat appraisals partially mediated the relationships observed between mastery-based goals and the well-being indicators. Lastly, the hypothesized model was found to be invariant across gender.
From Schools to Community Learning Centers: A Program Evaluation of a School Reform Process
ERIC Educational Resources Information Center
Magolda, Peter; Ebben, Kelsey
2007-01-01
This manuscript reports on a program evaluation of a school reform initiative conducted in an Ohio city. The paper describes, interprets, and evaluates this reform process aimed at transforming schools into community learning centers. The manuscript also describes and analyzes the initiative's program evaluation process. Elliot Eisner's [(1998).…
Reliability of Grading High School Work in English
ERIC Educational Resources Information Center
Brimi, Hunter M.
2011-01-01
This research replicates the work of Starch and Elliot (1912) by examining the reliability of the grading by English teachers in a single school district. Ninety high school teachers graded the same student paper following professional development sessions in which they were trained to use NWREL's "6+1 Traits of Writing." These participants had…
33 CFR 165.779 - Regulated Navigation Area; Columbus Day Weekend, Biscayne Bay, Miami, FL.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Rickenbacker Causeway Bridge and Coon Point, Elliot Key contained within an imaginary line connecting the...″ N, 80°12′06″ W; thence west to Point 6 in position 25°30′00″ N, 80°13′17″ W; thence northwest to...
33 CFR 165.779 - Regulated Navigation Area; Columbus Day Weekend, Biscayne Bay, Miami, FL.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Rickenbacker Causeway Bridge and Coon Point, Elliot Key contained within an imaginary line connecting the...″ N, 80°12′06″ W; thence west to Point 6 in position 25°30′00″ N, 80°13′17″ W; thence northwest to...
ERIC Educational Resources Information Center
Traylor, Scott
2009-01-01
This article presents an interview with Cathleen Norris and Elliot Soloway, both pioneering educators who are defining the future of technology and learning. Norris is a professor in the Department of Technology and Cognition at the University of North Texas. She is also the past president of ISTE and the past president of NECA, the organizing…
Stagecoach Theatre Schools: England's Franchised Musical Theatre Training.
ERIC Educational Resources Information Center
Heinig, Ruth Beall
2001-01-01
Describes how a student at Stagecoach (a private arts school), by securing the lead role in the film "Billy Elliot," encouraged other British boys to enroll in ballet and dance classes as well as Stagecoach Theatre Arts Schools. Present locations and international links for Stagecoach schools. Describes how the Stagecoach schools are run…
A Dialogue in Words and Images between Two Artists Doing Arts-Based Educational Research
ERIC Educational Resources Information Center
Quinn, Robert D.; Calkin, Jamie
2008-01-01
Over ten years ago, Tom Barone and Elliot Eisner (1997) described seven features of existing artistic approaches to educational inquiry. Their chapter dealt primarily with written, prosaic forms of Arts-Based Educational Research, or ABER, particularly educational criticism and narrative storytelling. In their concluding section, Barone and Eisner…
The Effect of Colour on Children's Cognitive Performance
ERIC Educational Resources Information Center
Brooker, Alice; Franklin, Anna
2016-01-01
Background: The presence of red appears to hamper adults' cognitive performance relative to other colours (see Elliot & Maier, 2014, "Ann. Rev. Psychol." 65, 95). Aims and sample: Here, we investigate whether colour affects cognitive performance in 8- and 9-year-olds. Method: Children completed a battery of tasks once in the presence…
ERIC Educational Resources Information Center
Conrad, Clifton F., Ed.; Haworth, Jennifer Grant, Ed.; Lattuca, Lisa R., Ed.
Chapters in this volume provide an introduction to qualitative research in higher education, organizing the discussion around four central themes. Part 1, Situating Ourselves and Our Inquiry, contains: (1) Objectivity in Educational Research (Elliot Eisner); (2) Truth in Trouble (Kenneth Gergen); (3) Beyond Translation: Truth and Rigoberta Menchu…
NASA Astrophysics Data System (ADS)
McPhee, Blair W.; Choiniere, Jonah N.
2016-11-01
It has generally been held that the locomotory habits of sauropodomorph dinosaurs moved in a relatively linear evolutionary progression from bipedal through "semi-bipedal" to the fully quadrupedal gait of Sauropoda. However, there is now a growing appreciation of the range of locomotory strategies practiced amongst contemporaneous taxa of the latest Triassic and earliest Jurassic. Here we present on the anatomy of a hyper-robust basal sauropodomorph ilium from the Late Triassic-Early Jurassic Elliot Formation of South Africa. This element, in addition to highlighting the unexpected range of bauplan diversity throughout basal Sauropodomorpha, also has implications for our understanding of the relevance of "robusticity" to sauropodomorph evolution beyond generalized limb scaling relationships. Possibly representing a unique form of hindlimb stabilization during phases of bipedal locomotion, the autapomorphic morphology of this newly rediscovered ilium provides additional insight into the myriad ways in which basal Sauropodomorpha managed the inherited behavioural and biomechanical challenges of increasing body-size, hyper-herbivory, and a forelimb primarily adapted for use in a bipedal context.
Pluto's Atmosphere from the July 2010 Stellar Occultation
NASA Astrophysics Data System (ADS)
Person, Michael J.; Elliot, J. L.; Bosh, A. S.; Gulbis, A. A. S.; Jensen-Clem, R.; Lockhart, M. F.; Zangari, A. M.; Zuluaga, C. A.; Levine, S. E.; Pasachoff, J. M.; Souza, S. P.; Lu, M.; Malamut, C.; Rojo, P.; Bailyn, C. D.; MacDonald, R. K. D.; Ivarsen, K. M.; Reichart, D. E.; LaCluyze, A. P.; Nysewander, M. C.; Haislip, J. B.
2010-10-01
We have observed the 4 July 2010 stellar occultation by Pluto as part of our program of monitoring Pluto's atmospheric changes over the last decade. Successful observations were obtained from three sites: Cerro Calan and Cerro Tololo, Chile, as well as the HESS-project site (High Energy Stereoscopic System) in southwestern Namibia. Successful telescope apertures ranged from 0.45 m to 1.0 m and resulted in seven occultation light curves for the event from among the three sites. Simultaneous analysis of the seven light curves indicates that Pluto's atmosphere continues to be stable, as the calculated atmospheric radii are consistent with those detected in 2006 (Elliot et al., AJ 134, 1, 2007) and 2007 (Person et al., AJ 136, 1510, 2008), continuing the stability that followed the large pressure increase detected between 1988 (Millis et al., Icarus 105, 282, 1993) and 2002 (Elliot et al., Nature 424, 165, 2003). We will present the overall astrometric solution as well as current profiles for Pluto's upper atmospheric temperature and pressure obtained from inversion of the light curves (Elliot, Person, and Qu, AJ 126, 1041, 2003). This work was supported, in part, by grants NNX10AB27G to MIT, NNX08AO50G to Williams College, and NNH08AI17I to the USNO from NASA's Planetary Astronomy Division. The 0.75-m ATOM (Automatic Telescope for Optical Monitoring) light curve was obtained with the generous assistance of the HESS-project staff, arranged by Stefan Wagner and Marcus Hauser of the University of Heidelberg. The 0.45-m Goto telescope at Cerro Calán National Astronomical Observatory, Universidad de Chile, was donated by the Government of Japan. PROMPT (Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes) observations at Cerro Tololo were made possible by the Robert Martin Ayers Science Fund. Student participation was supported in part by NSF's REU program and NASA's Massachusetts Space Grant.
ERIC Educational Resources Information Center
Ewing, John C.; Clark, Robert W.; Threeton, Mark D.
2014-01-01
Career development events are an important facet of the National FFA organization as well as the teaching and learning segment of the national research agenda for Career and Technical Education (Lambeth, Elliot & Joerger, 2008). Students are often prepared to compete in these events by their FFA advisor. Career development events provide…
A Marine Fisheries Program for the Nation.
ERIC Educational Resources Information Center
Department of Commerce, Washington, DC.
This government publication describing the national plan for marine fisheries is divided into two parts. The first part contains a statement by the Secretary of Commerce, Elliot L. Richardson; the goals for the national marine fisheries plan; a description of the six parts of the plan; and a cost estimate for the program. The goals for the plan…
Salvaging NY's School "Contracts." Policy Briefing No. 4
ERIC Educational Resources Information Center
Meyer, Peter
2008-01-01
The centerpiece of former Governor Elliot Spitzer's education reform agenda was a set of performance agreements between the state and designated needy school districts. Known as Contracts for Excellence, or C4E, these agreements would eventually be linked to over a quarter of the new state aid proposed in the governor's first budget. C4E districts…
Art Education in a World of Cross-Purposes
ERIC Educational Resources Information Center
Hope, Samuel
2005-01-01
This article is adapted from the Handbook of Research and Policy in Art Education. Elliot Eisner and Michael Day (eds.) [c] 2004 by Lawrence Erlbaum and Associates, Mahwah, NJ, and the National Art Education Association. To study art education is to discover and engage a field rich with achievement and promise. On one hand, this comes as no…
ERIC Educational Resources Information Center
Dietiker, Leslie
2015-01-01
Elliot Eisner proposed that educational challenges can be met by applying an artful lens. This article draws from Eisner's proposal to consider the assumptions, values, and vision of mathematics education by theorizing mathematics curriculum as an art form. By conceptualizing mathematics curriculum (both in written and enacted forms) as stories…
A Psychometric Evaluation of Two Achievement Goal Inventories
ERIC Educational Resources Information Center
Donnellan, M. Brent
2008-01-01
The properties of the achievement goal inventories developed by Grant and Dweck (2003) and Elliot and McGregor (2001) were evaluated in two studies with a total of 780 participants. A four-factor specification for the Grant and Dweck inventory did not closely replicate results published in their original report. In contrast, the structure of the…
Achievement Goals as Mediators of the Relationship between Competence Beliefs and Test Anxiety
ERIC Educational Resources Information Center
Putwain, David W.; Symes, Wendy
2012-01-01
Background: Previous work suggests that the expectation of failure is related to higher test anxiety and achievement goals grounded in a fear of failure. Aim: To test the hypothesis, based on the work of Elliot and Pekrun (2007), that the relationship between perceived competence and test anxiety is mediated by achievement goal orientations.…
The Public's View of Agricultural Education: We've Come a Long Way--Or Have We?
ERIC Educational Resources Information Center
Krueger, David E.; And Others
1995-01-01
Includes "We've Come a Long Way--Or Have We?" (Krueger); "If Agricultural Education Were a Coca-Cola" (Doerfert); "Agriculture Is Taught? In High School?" (Elliot); "Let's Tell Our Story" (Davis); "Perception, Reality or Idealism" (Powers, Bull); "Agricultural Education under the Bright Lights" (Foster); and "The Changing Face of Agricultural…
ERIC Educational Resources Information Center
Awofala, Adeneye O. A.; Arigbabu, Abayomi A.; Fatade, Alfred O.; Awofala, Awoyemi A.
2013-01-01
Introduction: The stability of the achievement goal orientation across different contexts has been a source of further research since the new millennium. Through theoretically-driven and empirically-based analyses, this study investigated the psychometric properties of the Elliot and McGregor 2x2 framework for achievement goal questionnaire within…
Exploring Community Philosophy as a Tool for Parental Engagement in a Primary School
ERIC Educational Resources Information Center
Haines Lyon, Charlotte
2015-01-01
In this paper, I will reflect on the initial reconnaissance, action, and reflection cycle of my doctoral research, exploring Community Philosophy as a tool for critical parental engagement in a primary school (Elliot, 1991). I will examine how I reflexively engaged with my influence on participants, which then significantly influenced the framing…
Smith, Alison; Ntoumanis, Nikos; Duda, Joan
2007-12-01
Grounded in self-determination theory (Deci & Ryan, 1985) and the self-concordance model (Sheldon & Elliot, 1999), this study examined the motivational processes underlying goal striving in sport as well as the role of perceived coach autonomy support in the goal process. Structural equation modeling with a sample of 210 British athletes showed that autonomous goal motives positively predicted effort, which, in turn, predicted goal attainment. Goal attainment was positively linked to need satisfaction, which, in turn, predicted psychological well-being. Effort and need satisfaction were found to mediate the associations between autonomous motives and goal attainment and between attainment and well-being, respectively. Controlled motives negatively predicted well-being, and coach autonomy support positively predicted both autonomous motives and need satisfaction. Associations of autonomous motives with effort were not reducible to goal difficulty, goal specificity, or goal efficacy. These findings support the self-concordance model as a framework for further research on goal setting in sport.
ERIC Educational Resources Information Center
Maynard, Brandy R.; Brendel, Kristen E.; Bulanda, Jeffery J; Thompson, Aaron M.; Pigott, Terri D.
2015-01-01
School refusal behavior, affecting between 1% and 5% of school-age children, is a psychosocial problem for students characterized by severe emotional distress and anxiety at the prospect of going to school, leading to difficulties in attending school and, in some cases, significant absences from school (Burke & Silverman, 1987; Elliot, 1999;…
Portrait of seven original Mercury astronauts plus new members
NASA Technical Reports Server (NTRS)
1963-01-01
Portrait of the seven original Mercury astronauts plus new members of the astronaut corps. Seated from left to right are: Gordon Cooper, Gus Grissom, Scott Carpenter, Wally Schirra, John Glenn, Alan Shepard, and Deke Slayton. Standing from left to right are: Edward White, James McDivitt, John Young, Elliot See, Charles Conrad, Frank Borman, Neil Armstrong, Thomas Stafford, and James Lovell.
Collaborative Undergraduate HBCU Student Summer Prostate Cancer Training Program
2011-03-01
Claflin University Maurissa Charles Claflin University Jasmine Elliot Claflin University Kayla Felix Claflin University Jessica Fuller Claflin...University Rachael Woods Claflin University Total Students From Claflin University= 26 Jasmine Addison Voorhees College Brittany Allen Voorhees College...significant to increase the low blood count of white blood counts in cancer patients after receiving chemotherapy. In essence , later studies can be
2010-09-28
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, Roger Elliot with United Space Alliance addresses the attendees at a ceremony being held to commemorate the move from Kennedy's Assembly Refurbishment Facility (ARF) to the Vehicle Assembly Building (VAB) of the Space Shuttle Program's final solid rocket booster structural assembly -- the right-hand forward. The move was postponed because of inclement weather. Photo credit: NASA/Kim Shiflett
ERIC Educational Resources Information Center
Saucier, P. Ryan; McKim, Billy R.; Muller, Joe E.; Kingman, Douglas M.
2014-01-01
Professional development education for teachers is essential to improving teacher retention, program relevance and effectiveness, and the preparation of fully qualified and highly motivated career and technology educators at all career stages (Doerfert, 2011; Lambeth, Elliot, & Joerger, 2008). Furthermore, it is necessary to link industry…
ERIC Educational Resources Information Center
Crosby, James W.
2011-01-01
The "Social Skills Improvement System" (SSIS; Gresham & Elliot, 2008) is designed to assist in the screening and classification of students (ages 8 to 18) who are suspected of having significant social skills deficits, and to offer support in the development of interventions for those found to display significant social skills…
Press Conference with Elliot L. Richardson, Secretary of HEW.
ERIC Educational Resources Information Center
Department of Health, Education, and Welfare, Washington, DC.
Two documents were released to the press on January 18, 1973, by Secretary Richardson, one summarizing his term of office as Secretary of Health, Education, and Welfare, and one reporting on HEW potential for the seventies (SO 005 666, SO 005 699). In an introductory statement prior to the press conference, the question of whether or not we as a…
Relativistic Many-Body Calculations of n=2 States for the Beryllium Isoelectronic Sequence
NASA Astrophysics Data System (ADS)
Safronova, M. S.; Johnson, W. R.; Safronova, U. I.
1996-05-01
Energies of the ten (2l2l') states of ions of the beryllium isoelectronic sequence are determined to second-order in relativistic many-body perturbation theory. Both the second-order Coulomb interaction and the second-order Breit-Coulomb interaction are included. Corrections for the frequency-dependent Breit interaction are taken in account in lowest order only. The effect of the Lamb shift is also estimated and included. Comparisons with other calculations and with experiment are made. Our theoretical results for the 2s-2p_3/2 transitions in U^88+ and Th^86+ (4501.60 eV and 4069.02 eV, resp.) differ only by 0.12 eV for U^88+ and 0.55 eV for Th^86+ from experimental data obtained at the SUPER-EBIT in LLNL.(P. Beiersdorfer, D. Knapp, R.E. Marrs, S.R. Elliot and M.H. Chen, Phys. Rev. Lett. 71), 3939 (1993); P. Beiersdorfer, A. Osterheld, S.R. Elliot, M.H. Chen, D. Knapp, and K. Reed, Phys. Rev. A52, 2693 (1995). Excellent agreement with experimental results for the splitting of ^3 P terms is found.
ERIC Educational Resources Information Center
Levine, Judith R., Ed.; Feist, Stanley C., Ed.
The 17 papers in this compilation were selected from 29 presentations given at the conference. The collection includes the following papers: (1) "Does Classroom Context Affect Examination Performance?" by Debra Elliot, Toni Strand, and David Hothersall; (2) "Accent on Abilities: Empowering the Learner by Integrating Teaching, Learning, &…
1962-10-01
S62-06759 (1962) --- This is the second group of pilot astronauts chosen by the National Aeronautics and Space Administration (NASA). These astronaut pilots are (kneeling left to right) Charles Conrad, Jr., Frank Borman, Neil A. Armstrong, and John W. Young; (standing in the back row - left to right) Elliot M. See, Jr., James A. McDivitt, James A. Lovell, Jr., Edward H. White II, and Thomas P. Stafford.
Cognitive Frames of Reference and Strategic Thinking
1991-04-05
Elliot Jaques and T. 0. Jacobs, whose Stratified Systems Theory (SST) links leadership requirements to organizational functions. SST emphasizes the...reverse if necessary and identify by block number) Using Stratified Systems Theory and the research on expertise as a conceptual framework, this study...Stratified Systems Theory and the research on expertise as a conceptual framework, this study explored the differences in the structure and content of the
Evolution of the concept and practice of mitral valve repair
Tchantchaleishvili, Vakhtang; Rajab, Taufiek K.
2015-01-01
The first successful mitral valve repair was performed by Elliot Cutler at Brigham and Women’s Hospital in 1923. Subsequent evolution in the surgical techniques as well as multi-disciplinary cooperation between cardiac surgeons, cardiologists and cardiac anesthesiologists has resulted in excellent outcomes. In spite of this, the etiology of mitral valve pathology ultimately determines the outcome of mitral valve repair. PMID:26309840
ERIC Educational Resources Information Center
Birkett, Michelle; Espelage, Dorothy L.; Koenig, Brian
2009-01-01
Lesbian, gay, and bisexual students (LGB) and those questioning their sexual orientation are often at great risk for negative outcomes like depression, suicidality, drug use, and school difficulties (Elliot and Kilpatrick, How to Stop Bullying, A KIDSCAPE Guide to Training, 1994; Mufoz-Plaza et al., High Sch J 85:52-63, 2002; Treadway and Yoakam,…
Effect of a Metal Deactivator Fuel Additive on Fuel Deposition in Fuel Atomizers at High Temperature
1992-08-01
ARMY NATICK RD&E CENTER DEVELOPMENT AND ENGINEERING CTR ATTN: SATNC-U ATUN : SATBE-F I NATICK MA 01760-5020 SATBE-FL 10 SATBE-BT 2 DIRECTOR SATBE-TQ 1...SFT (MR MAKRIS) I WASHINGTON DC 20330 SAALC/LDPE (MR ELLIOT) 1 KELLY AIR FORCE BASE TX 78241 CDR US AIR FORCE WRIGHT AERO LAB CDR ATUN : POSF (MR
The Development of Mobile Augmented Reality
2012-01-01
working jointly with NRL, performed a domain analysis ( Gabbard et al., 2002) to create a context for usability engineering effort, performed formative...rectangle to provide a background enabled the fastest user performance ( Gabbard et al., 2007). Tracking the user’s head position relative to the real...thank Yohan Baillot, Reinhold Behringer, Blaine Bell, Dennis Brown, Aaron Bryden, Enylton Coelho, Elliot Cooper-Balis, Deborah Hix, Joseph Gabbard
VIEWIT uses on the wild and scenic upper Missouri River
Dwight K. Araki
1979-01-01
This paper discusses a computer application approach to mapping the scenic boundaries on the Upper Missouri Wild and Scenic River. The approach taken in this effort was the computer program VIEWIT. VIEWIT, for seen area analysis, was developed over an eight-year period prior to 1968, by Elliot L. Amidon and Gary H. Elsner. This is the first attempt by the BLW to...
Combining Outdoor Education and Anishnaabe Culture in a Four-Credit Semester Program in Blind River
ERIC Educational Resources Information Center
Thomson, Alexandra
2011-01-01
This article describes a four-credit semester program at Elliot Lake Secondary School in the late 1990s. This New Trails program is based around physical education and leadership, geography, Native studies, and English credits. The students are outside much of the time. The students become certified in the use of GPS and in map and compass work,…
ERIC Educational Resources Information Center
Association for Education in Journalism and Mass Communication.
The Journalism History section of this collection of conference presentations contains the following 15 papers: "Henry Ford's Newspaper: The 'Dearborn Independent,' 1919-1927" (James C. Foust); "Redefining the News?: Editorial Content and the 'Myth of Origin' Debate in Journalism History" (Elliot King); "'Nonpublicity' and…
33 CFR 100.701 - Special Local Regulations; Marine Events in the Seventh Coast Guard District
Code of Federal Regulations, 2014 CFR
2014-07-01
... the Dinner Key Channel to Biscayne National Park Marker “B” to Cutter Channel Mark “2” to Biscayne National Park Marker “C” to West Featherbead Bank Channel Marker “3” to West Featherbed Bank Channel Marker “5” to Elliot Key Biscayne National Park Anchorage, Miami, Florida no closer than 500 feet from each...
33 CFR 100.701 - Special Local Regulations; Marine Events in the Seventh Coast Guard District
Code of Federal Regulations, 2013 CFR
2013-07-01
... the Dinner Key Channel to Biscayne National Park Marker “B” to Cutter Channel Mark “2” to Biscayne National Park Marker “C” to West Featherbead Bank Channel Marker “3” to West Featherbed Bank Channel Marker “5” to Elliot Key Biscayne National Park Anchorage, Miami, Florida no closer than 500 feet from each...
33 CFR 100.701 - Special Local Regulations; Marine Events in the Seventh Coast Guard District
Code of Federal Regulations, 2012 CFR
2012-07-01
... the Dinner Key Channel to Biscayne National Park Marker “B” to Cutter Channel Mark “2” to Biscayne National Park Marker “C” to West Featherbead Bank Channel Marker “3” to West Featherbed Bank Channel Marker “5” to Elliot Key Biscayne National Park Anchorage, Miami, Florida no closer than 500 feet from each...
Growth of planted ponderosa pine thinned to different stocking levels in northern California
William W. Oliver
1979-01-01
Growth was strongly related to growing stock level (GSL) for 5 years after thinning 20-year-old poles on Site Index50 115 land at the Elliot Ranch Plantation in northern California. Five GSL's-basal areas anticipated when trees average 10 inches d.b.h. or more-ranging from 40 to 160 square feet per acre were tested. Periodic annual increment...
The fissured East Yorkshire Chalk, UK - a 'sustainable' aquifer under stress ?
NASA Astrophysics Data System (ADS)
Elliot, T.; Younger, P. L.; Chadha, D. S.
2003-04-01
The fissured Chalk is an important regional aquifer in East Yorkshire, UK, with a large potential for water supply to the Humberside region and especially the City of Hull. It has been exploited since the end of the 19th Century, but although there are more than a dozen long-established pumping wells in the Chalk these currently abstract only 7% of the total recharge the aquifer receives. The classical notion of ‘safe aquifer yield' equates the quantity of groundwater available for abstraction with the long-term natural recharge to the aquifer. An incautious hydrogeologist might be lead to conclude that this is a secure, under-developed resource. In this case study, the aquifer is shown to be already displaying early symptoms of hydrological stress (eg drought effects, overexploitation), and hydrogeochemical indicators point to further effects of anthropogenic pollution impacts in the unconfined aquifer and both recent and ancient saline intrusion in its semi-confined and confined zones. The hydrochemical evidence clearly reveals the importance both of recent aquifer management decisions and palaeohydrogeology in determining the distribution of water qualities within the aquifer. Waters encountered in the confined aquifer are identified as complex (and potentially dynamic) mixtures between recently recharged waters, modern seawater intrusion, and ancient seawater which entered the aquifer many millennia ago. Elliot, T. Younger, P.L. &Chadha, D.S. (1998) The future sustainability of groundwater resources in East Yorkshire - past and present perspectives. In H. Wheater and C. Kirby (Eds.) Hydrology in a Changing Environment, Vol. II, Proc. British Hydrological Society (BHS) International Conference, 6-10 July 1998, Exeter, UK. pp.21-31. Elliot, T., Chadha, D.S. &Younger, P.L. (2001) Water Quality Impacts and Palaeohydrogeology in the East Yorkshire Chalk Aquifer, UK. Quarterly Journal of Engineering Geology and Hydrogeology, 34(4): 385-398. Younger, P.L., Teutsch, G., Custodio, E., Elliot, T., Manzano, M. &Sauter, M. (2002) Assessments of the sensitivity to climate change of flow and natural water quality in four major carbonate aquifers of Europe. In Hiscock, K.M, Rivett, M.O., Davison, R.M. (Eds.), Sustainable Groundwater Development. Geological Society Special Publication No 193, The Geological Society, London, UK. pp.303-323.
The 2016 NIST Speaker Recognition Evaluation
2017-08-20
The 2016 NIST Speaker Recognition Evaluation Seyed Omid Sadjadi1,∗, Timothée Kheyrkhah1,†, Audrey Tong1, Craig Greenberg1, Douglas Reynolds2, Elliot...recent in an ongoing series of speaker recognition evaluations (SRE) to foster research in ro- bust text-independent speaker recognition, as well as...online evaluation platform, a fixed training data condition, more variability in test segment duration (uni- formly distributed between 10s and 60s
ERIC Educational Resources Information Center
Association for Education in Journalism and Mass Communication.
The Magazine section of this collection of conference presentations contains the following nine papers: "Davids and Goliaths: The Economic Restructuring of the Postwar Magazine Industry, l950-l970" (David Abrahamson); "The Global Economy as Magazine News Story: A Pilot Study in the Framing of News" (Elliot King);…
The Asia-Pacific Rebalance: Impact on U.S. Naval Strategy
2014-03-01
Washington, DC: Elliot School of International Affairs, George Washington University, August 2013), 3–5. 2...72 Robert D. Kaplan , “Center Stage for the Twenty-First Century: Power Plays in the Indian Ocean,” Foreign Affairs 88, no.2 (March/April 2009), 16...Forward, Strengthening Partnerships.” Joint Force Quarterly, no 65 (2012). 64 Kaplan , Robert D. “Center Stage for the Twenty-First Century: Power
DOMAIN MISMATCH COMPENSATION FOR SPEAKER RECOGNITION USING A LIBRARY OF WHITENERS
2015-05-29
DOMAIN MISMATCH COMPENSATION FOR SPEAKER RECOGNITION USING A LIBRARY OF WHITENERS Elliot Singer and Douglas Reynolds Massachusetts Institute of...development data is assumed to be unavailable. The method is based on a generalization of data whitening used in association with i-vector length...normalization and utilizes a library of whitening transforms trained at system development time using strictly out-of-domain data. The approach is
ERIC Educational Resources Information Center
Hagerstown Junior Coll., MD.
This colloquium book review (occasioned by Andrew Scott's "Pirates of the Cell") contains seven selected readings from popular periodicals and research journals. It is designed to eliminate some of the mental barriers that many have to topics like molecular biology and virology. Included are: (1) "What Is A Virus?" (William D. Elliot); (2) "The…
The Experimental Aspects of Coupling Electrical Energy into a Dense Detonation Wave. Part 1
1982-05-17
DiBona for his assistance on setting up the optical instrumentation currently employed; Drs. E. Zimet and E. Toton for their theoretical discussions of... DiBona 1 600 2nd Street N. Hopkins, MN 55343 Library of Congress Attn: Gift and Exchange Division 4 Washington, DC 20540 Maxwell Laboratories, Inc. Attn: P. Elliot 1 8835 Balboa Avenue San Diego, CA 92123 (3)
ERIC Educational Resources Information Center
Elliot, Andrew J.; Maier, Markus A.
2013-01-01
Francis (2013) tested for and found evidence of publication bias in 1 of the 3 focal relations examined in Elliot et al. (2010), that between red and attractiveness. He then called into question the research as a whole and the field of experimental psychology more generally. Our reply has 3 foci. First, we attend to the bottom line regarding the…
A Clinically Useful Tool to Determine an Effective Snellen Fraction: Details
2009-03-01
spreadsheet permits ready modifications to accommodate other transformations, such as true LogMAR charts like Early Treatment of Diabetic Retinopathy Study...variability, then smaller, genuine changes in a patient’s acuity could be detected reliably (13). According to Elliot and Sheridan (14), the main...repeatable the test is likely to be and therefore the greater the sensitivity with which change can be detected . Westheimer (19) found that the logarithm
Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data
2017-08-20
Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data Bengt J. Borgström1, Elliot Singer1, Douglas...ll.mit.edu.edu, dar@ll.mit.edu, es@ll.mit.edu, omid.sadjadi@nist.gov Abstract This paper addresses speaker verification domain adaptation with...contain speakers with low channel diversity. Existing domain adaptation methods are reviewed, and their shortcomings are discussed. We derive an
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boone, C. T.; Shaw, J. M.; Nembach, H. T.
2015-06-14
We determined the spin-transport properties of Pd and Pt thin films by measuring the increase in ferromagnetic resonance damping due to spin-pumping in ferromagnetic (FM)-nonferromagnetic metal (NM) multilayers with varying NM thicknesses. The increase in damping with NM thickness depends strongly on both the spin- and charge-transport properties of the NM, as modeled by diffusion equations that include both momentum- and spin-scattering parameters. We use the analytical solution to the spin-diffusion equations to obtain spin-diffusion lengths for Pt and Pd. By measuring the dependence of conductivity on NM thickness, we correlate the charge- and spin-transport parameters, and validate the applicabilitymore » of various models for momentum-scattering and spin-scattering rates in these systems: constant, inverse-proportional (Dyakanov-Perel), and linear-proportional (Elliot-Yafet). We confirm previous reports that the spin-scattering time appears to be shorter than the momentum scattering time in Pt, and the Dyakanov-Perel-like model is the best fit to the data.« less
2013-09-30
forcing through an ensemble-based method. The results of our findings were presented at the 2013 American Meteorological Society (AMS) annual meeting...Forcing to the Existing Satellite Observations, 93rd American Meteorological Society Annual Meeting, Austin, Texas, January 5-10, 2013b...Ceburnis, D., Chang, R., Clarke, A., de Leeuw, G., Deane, G., DeMott, P. J., Elliot, S., Facchini, M. C., Fairall, C. W., Hawkins, L., Hu, Y., Hudson , J
2009-05-01
the tunnels collapse many Tok’ra are killed along with Major Mansfield. Ren’al encrypts the symbiote poison formula onto a data crystal but is...the tunnels . SG-1 regroups but more tunnel collapses prevent their escape to the ring room. Their path to the surface is blocked. Back at the...of Tok’ra tunneling crystals which they use to grow new tunnels . Major Carter tries to help Lt. Elliot stay alive. Oiris tells the System Lords she
Status Report on Speech Research.
1987-09-01
CA L B UOP Y ST A N D A R D S 196 3 A MATIOAL BUE U v L’ Status Report on Speech Research SR-91 July-September 1987 Haskins Laboratories New Haven...Elliot Saltzmnan Etienne Colombt Alvin M. Libernman* Donald Shankweiler" Franklin S. Cooper" Isabelle Y . Uiberman" Michael Studdert-Kennedy" Stephen Crain...of phonological segments and reading ability In Italian children), Giuseppe Cossu, Donald ankweiler, Isabelle Y . Liberman,Gui e To -- nard Katz
Efficacy of Economic Sanctions: North Korea and Iran Case Study
2011-03-08
rate, as interpreted by Hufbauer, Schott, and Elliot ( HSE ), remained steady at thirty-four percent just as it had been since World War II. The end of...Germany, France, and Great Britain that diffused US economic power. HSE wrote: “The absence of an overriding global security threat made it harder for the...industrial countries to reconcile their different strategies and priorities for using sanctions in regional trouble spots.”8 In 1985 HSE published
A Search for Satellites of Kuiper Belt Object 55636 from the 2009 October 9 Occultation
NASA Astrophysics Data System (ADS)
Jensen-Clem, Rebecca; Elliot, J. L.; Person, M. J.; Zuluaga, C. A.; Bosh, A. S.; Adams, E. R.; Brothers, T. C.; Gulbis, A. A. S.; Levine, S. E.; Lockhart, M.; Zangari, A. M.; Babcock, B. A.; DuPre, K.; Pasachoff, J. M.; Souza, S. P.; Rosing, W.; Secrest, N.; Bright, L.; Dunham, E. W.; Kakkala, M.; Tilleman, T.; Rapoport, S.; Zambrano-Marin, L.; Wolf, J.; Morzinski, K.
2011-01-01
A world-wide observing campaign of 21 telescopes at 18 sites was organized by Elliot et al. (2010 Nature 465, 897) to observe the 2009 Oct. 9 stellar occultation of 2UCAC 41650964 (UCAC2 magnitude 13.1) by the Kuiper Belt object 55636 (visual magnitude 19.6). Integration times varied between 0.05 seconds at the Vatican Advanced Technology Telescope and 5 seconds at Mauna Kea mid-level. Data from the two sites that successfully observed the occultation (Haleakala and the Mauna Kea mid-level) were analyzed by Elliot et al. (2010) to determine the diameter and albedo of 55636. In this study, we use the entire data set to search for signatures of occultations by nearby satellites. One satellite previously discovered with occultation data is Neptune's moon Larissa, which was detected during Neptune's close approach to a star in 1982 (Reitsema et al. 1982). No satellites are found in this study, and upper limits will be reported on satellite radii within the volume probed (2 x 10-8 of the Hill Sphere). This work was supported, in part, by NASA Grants NNX10AB27G (MIT), NNX08AO50G (Williams College), and NNH08AI17I (USNO-FS) and NSF Grant AST-0406493 (MIT). Student participation was supported in part by NSF's REU program and NASA's Massachusetts Space Grant.
Sahimi, Hani Nabilia Muhd; Chubo, John Keen; Top Mohd Tah, Marina Mohd; Saripuddin, Noor Bahiah; Ab Rahim, Siti Sarah
2018-03-01
Tarsius bancanus borneanus was first reported by Elliot in 1990 which an endemic species that can be found on the Island of Borneo consisting of Sabah and Sarawak of Malaysia, Brunei Darussalam and Kalimantan, Indonesia. This sub-species has been listed as a totally protected animal under the Sarawak Wild Life Protection Ordinance (1998) and vulnerable by the International Union for Conservation of Nature (IUCN). The present study was conducted at Universiti Putra Malaysia Bintulu Campus (UPMKB), Sarawak from October 2014 till March 2015. Through mark and recapture sampling covering an area of 37 ha of secondary forest patches and 7.13 ha of rehabilitated forest, a total of 16 tarsiers were captured using mist nets while one tarsier was recapture. The population density was 38 individuals/km 2 was captured using mist nets in the secondary forest while 28 individuals/km 2 was recorded for the rehabilitated forest. Using the catch per unit effort (net hour) method, the average time for capturing tarsiers in the secondary forest patches was 26.6 net hour per animal and 30.0 net hour per animal in the rehabilitated forest. The presented results provides information on the presence of tarsiers in both the secondary and rehabilitated forests of UPMKB, Sarawak, Malaysia which underlines the conservation value of these forested areas.
Sahimi, Hani Nabilia Muhd; Chubo, John Keen; Top @ Mohd. Tah, Marina Mohd.; Saripuddin, Noor Bahiah; Ab Rahim, Siti Sarah
2018-01-01
Tarsius bancanus borneanus was first reported by Elliot in 1990 which an endemic species that can be found on the Island of Borneo consisting of Sabah and Sarawak of Malaysia, Brunei Darussalam and Kalimantan, Indonesia. This sub-species has been listed as a totally protected animal under the Sarawak Wild Life Protection Ordinance (1998) and vulnerable by the International Union for Conservation of Nature (IUCN). The present study was conducted at Universiti Putra Malaysia Bintulu Campus (UPMKB), Sarawak from October 2014 till March 2015. Through mark and recapture sampling covering an area of 37 ha of secondary forest patches and 7.13 ha of rehabilitated forest, a total of 16 tarsiers were captured using mist nets while one tarsier was recapture. The population density was 38 individuals/km2 was captured using mist nets in the secondary forest while 28 individuals/km2 was recorded for the rehabilitated forest. Using the catch per unit effort (net hour) method, the average time for capturing tarsiers in the secondary forest patches was 26.6 net hour per animal and 30.0 net hour per animal in the rehabilitated forest. The presented results provides information on the presence of tarsiers in both the secondary and rehabilitated forests of UPMKB, Sarawak, Malaysia which underlines the conservation value of these forested areas. PMID:29644021
2011-06-13
flush out the resistance. As the tunnels collapse many Tok’ra are killed along with Major Mansfield. Ren’al encrypts the symbiote poison formula onto a...killed on their way back to the tunnels . SG-1 regroups but more tunnel collapses prevent their escape to the ring room. Their path to the surface is...Col. O’Neill and Teal’c find a set of Tok’ra tunneling crystals which they use to grow new tunnels . Major Carter tries to help Lt. Elliot stay alive
1979-01-01
of his skeleton (16). This man had undergone a prefrontal lobotomy in 1949. His medical and dental records revealed a long history of bruxism coupled...stated that they were often kept awake at night by the sounds of his nocturnal bruxism . In addition to the dental wear, this cranium displayed several...osteolo- gical peculiarities attributable to bruxism . Particularly evident were the robustly developed attachments for the insertion of the masseters and
Two-Electron Correlations in e+H-->e+e+p Near Threshold
NASA Astrophysics Data System (ADS)
Kato, Daiji; Watanabe, Shinichi
1995-03-01
We present an ab initio calculation of the ionization cross section of atomic hydrogen near threshold with precision that compares excellently with the Shah-Elliot-Gilbody experiment [J. Phys. B 20, 3501 (1987)]. This fills the gap between theory and experiment down to 0.1 a.u. above threshold, complementing the recent spectacular work of Bray and Stelbovics [Phys. Rev. Lett. 70, 746 (1993)]. The angular momentum distributions of the secondary electron display an evolution in correlation patterns toward the threshold.
SpaceX CRS-12 "What's on Board?" Science Briefing
2017-08-13
Boy Scouts of America Troop 209 members Andrew Frank, left, and Elliot Lee speak to members of social media in the Kennedy Space Center’s Press Site auditorium. The briefing focused on research planned for launch to the International Space Station. The scientific materials and supplies will be aboard a Dragon spacecraft scheduled for launch from Kennedy’s Launch Complex 39A on Aug. 14 atop a SpaceX Falcon 9 rocket on the company's 12th Commercial Resupply Services mission to the space station.
Armstrong, Asia O.; Armstrong, Amelia J.; Jaine, Fabrice R. A.; Couturier, Lydie I. E.; Fiora, Kym; Uribe-Palomino, Julian; Weeks, Scarla J.; Townsend, Kathy A.; Bennett, Mike B.; Richardson, Anthony J.
2016-01-01
Large tropical and sub-tropical marine animals must meet their energetic requirements in a largely oligotrophic environment. Many planktivorous elasmobranchs, whose thermal ecologies prevent foraging in nutrient-rich polar waters, aggregate seasonally at predictable locations throughout tropical oceans where they are observed feeding. Here we investigate the foraging and oceanographic environment around Lady Elliot Island, a known aggregation site for reef manta rays Manta alfredi in the southern Great Barrier Reef. The foraging behaviour of reef manta rays was analysed in relation to zooplankton populations and local oceanography, and compared to long-term sighting records of reef manta rays from the dive operator on the island. Reef manta rays fed at Lady Elliot Island when zooplankton biomass and abundance were significantly higher than other times. The critical prey density threshold that triggered feeding was 11.2 mg m-3 while zooplankton size had no significant effect on feeding. The community composition and size structure of the zooplankton was similar when reef manta rays were feeding or not, with only the density of zooplankton changing. Higher zooplankton biomass was observed prior to low tide, and long-term (~5 years) sighting data confirmed that more reef manta rays are also observed feeding during this tidal phase than other times. This is the first study to examine prey availability at an aggregation site for reef manta rays and it indicates that they feed in locations and at times of higher zooplankton biomass. PMID:27144343
Armstrong, Asia O; Armstrong, Amelia J; Jaine, Fabrice R A; Couturier, Lydie I E; Fiora, Kym; Uribe-Palomino, Julian; Weeks, Scarla J; Townsend, Kathy A; Bennett, Mike B; Richardson, Anthony J
2016-01-01
Large tropical and sub-tropical marine animals must meet their energetic requirements in a largely oligotrophic environment. Many planktivorous elasmobranchs, whose thermal ecologies prevent foraging in nutrient-rich polar waters, aggregate seasonally at predictable locations throughout tropical oceans where they are observed feeding. Here we investigate the foraging and oceanographic environment around Lady Elliot Island, a known aggregation site for reef manta rays Manta alfredi in the southern Great Barrier Reef. The foraging behaviour of reef manta rays was analysed in relation to zooplankton populations and local oceanography, and compared to long-term sighting records of reef manta rays from the dive operator on the island. Reef manta rays fed at Lady Elliot Island when zooplankton biomass and abundance were significantly higher than other times. The critical prey density threshold that triggered feeding was 11.2 mg m-3 while zooplankton size had no significant effect on feeding. The community composition and size structure of the zooplankton was similar when reef manta rays were feeding or not, with only the density of zooplankton changing. Higher zooplankton biomass was observed prior to low tide, and long-term (~5 years) sighting data confirmed that more reef manta rays are also observed feeding during this tidal phase than other times. This is the first study to examine prey availability at an aggregation site for reef manta rays and it indicates that they feed in locations and at times of higher zooplankton biomass.
Use of the emergency room in Elliot Lake, a rural community of Northern Ontario, Canada.
Harris, L; Bombin, M; Chi, F; DeBortoli, T; Long, J
2004-01-01
There is ample documentation that use of hospital emergency facilities for reasons other than urgencies/emergencies results in clogged services in many urban centers. However, little has been published about similar misuse of emergency rooms/departments in rural and remote areas, where the situation is usually compounded by a scarcity of healthcare professionals. In Canada there is a shortage of physicians in rural and remote areas as a consequence of misdistribution (most physicians staying in southern urban centers after residence), and there is a chronic misuse of facilities meant for urgencies/emergencies to cope with primary healthcare needs. We address the problem in Elliot Lake, a rural Northern Ontario community of 12,000 people. The economy of Elliot Lake was based on uranium mining until the mid-1990s, when it drastically changed to become a center for affordable retirement and recreational tourism. As a consequence, at the present time the proportion of seniors in Elliot Lake doubles the Canadian average. Our objectives are to elucidate the demographics of emergency room (ER) clients and the effect of the elderly population; the nature of ER use; the perceived level of urgency of clients versus health professionals; and possible alternatives offered to non-urgent/emergency visits. This is the first study of the kind in Northern Ontario, a region the size of France. The study, conducted in July 2001, used a prospective survey, completed by patients and attending clinicians at the time of a patient's presentation to the ER of St Joseph's General Hospital. This hospital is staffed by family physicians, a nurse practitioner, and registered nurses (RNs). The catchment area population (town plus surrounding areas) of the hospital is approximately 18,000 people. ER clients were interviewed verbally, and the attending health professionals responded to written questionnaires. Demographics were recorded (age, sex, employment and marital status), as was each client's reason for making an ER visit. Clients were asked if they had a family physician and if they had contacted him/her before visiting the ER, and if they would use another agency to address their health problem. Each client's, nurse's, and physician/nurse practitioner's perceived urgency level was recorded on a scale from 1 (non-urgent) to 5 (extremely urgent/life threatening). The attending physician/nurse practitioner and attending nurse were also asked to recommend appropriate alternatives, in their judgment, to each ER visit. Of a total of 1472 ER cases, 1096 (74.5%) verbal interviews with clients were conducted, as well as 1298 (88.2%) and 1013 (68.8%) questionnaires were completed respectively by attending nurses and physicians/nurse practitioner. The age of the clients was roughly proportional to their cohorts in the catchment area. Males and females were equally represented in the sample. Only 28.8% of the clients contacted their family physicians before visiting the ER, although 80.9% of them had a family physician. The reasons for visiting the ER are mostly typical of a primary care practice in Canada, and ER clients considered 19.4% of their visits non-urgent/non-emergency. In contrast, 45.2% of the physicians/nurse practitioner and 63.7% of the nurses considered the visits non-urgent/non-emergency. To reduce ER misuse, two-thirds of the recommendations by staff were to recruit more family physicians and nurse practitioners, and another one-fifth of the recommendations suggested the creation of a walk-in clinic. Other alternatives, such as the use of a variety of agencies available in town, were minimally recommended by healthcare professionals. The core of the problem identified by this research is that more physicians, nurse practitioners, and other health care professionals are needed in Elliot Lake to provide continuity of care. A new medical school is being created for the region, but the first family physicians from this initiative will only be available in 2012. In the meantime, healthcare professionals may need to take more preventive and educational measures to reduce ER misuse, and the use of other town's agencies, Telehealth, case-management of recurrent clients, and collaboration with local pharmacists need to be maximized. Further research is urgently needed into the effects on health outcomes in rural communities that may result from health services having to function beyond their capacity. Rural health clinicians, communities, researchers, and policy makers must work together to design, implement, and evaluate, both immediate and longer term solutions to the problems identified in this study.
1988-11-16
parts was less than for sole source acquired spare parts see, Alan E. Olsen ; James E. Cunningham; and Donald J. Wilkins, "A Cost-Benefit Analysis of...Washington, D.C. Logistc Management-Institute, MUTT- I 339 I Zusman, Morris; Asher, Norman; Wetzler, Elliot; Bennett, Debbie; Gustaves, Selmer ; Higgins...Acquisition." Masters Thesis, Naval Postgraduate School, 1984. Olsen , Alan E.; Cunningham, James E.; and Wilkins, Donald J. "A Cost-Benefit Analysis of
ERIC Educational Resources Information Center
US House of Representatives, 2005
2005-01-01
The Committee on Government Reform heard testimony from several medical experts who believe that steroid use by young women is an underreported problem, and that a great deal more research and scientific evidence are needed to more accurately quantify its pervasiveness. Dr. Diane Elliot, professor of medicine, Oregon Health and Science University,…
NASA Astrophysics Data System (ADS)
Roe, Henry G.
2006-09-01
The abundance of methane in Pluto's atmosphere has not been remeasured since its initial detection in 1992 by Young et al. (1997). As Pluto recedes from the Sun its atmosphere should eventually collapse and freeze out on the surface, but recent occultation observations (Elliot et al. 2003) show an expansion of the atmosphere rather than contraction. New measurements of Pluto's atmospheric methane abundance are warranted. We obtained high resolution (R=25000) near-infrared spectra of Pluto in July 2006 with NIRSPEC at the W.M. Keck II telescope and will report our initial analysis and results.
1986-08-01
affected area is along Kamehameha Highway at the Makalapa Gate, where 24-hour volumes are about 18,500, and peak a.m. traffic is about 1,500 (Station...AFB NUMBER OF VEHICLES PEAK-HOUR STATION NUMBER LOCATION AM PM 24-HOUR TOTAL 1. 3-C Kamehameha and Nimitz Highways at Elliot...3,222 3:30-4:30 3,898 23,996 4. 5-B Kamehameha Highway at Redford Drive (Makalapa Gate – inbound/outbound) 11:00-12:00 1,532 3:30-4:30
SpaceX CRS-12 "What's on Board?" Science Briefing
2017-08-13
Boy Scouts of America Troop 209 members Andrew Frank, left, Elliot Lee center, and team leader Norman McFarland speak to members of social media in the Kennedy Space Center’s Press Site auditorium. The briefing focused on research planned for launch to the International Space Station. The scientific materials and supplies will be aboard a Dragon spacecraft scheduled for launch from Kennedy’s Launch Complex 39A on Aug. 14 atop a SpaceX Falcon 9 rocket on the company's 12th Commercial Resupply Services mission to the space station.
NASA Astrophysics Data System (ADS)
Shaw, Emily C.; Hamylton, Sarah M.; Phinn, Stuart R.
2016-06-01
The existence of coral reefs is dependent on the production and maintenance of calcium carbonate (CaCO3) framework that is produced through calcification. The net production of CaCO3 will likely decline in the future, from both declining net calcification rates (decreasing calcification and increasing dissolution) and shifts in benthic community composition from calcifying organisms to non-calcifying organisms. Here, we present a framework for hydrochemical studies that allows both declining net calcification rates and changes in benthic community composition to be incorporated into projections of coral reef CaCO3 production. The framework involves upscaling net calcification rates for each benthic community type using mapped proportional cover of the benthic communities. This upscaling process was applied to the reef flats at One Tree and Lady Elliot reefs (Great Barrier Reef) and Shiraho Reef (Okinawa), and compared to existing data. Future CaCO3 budgets were projected for Lady Elliot Reef, predicting a decline of 53 % from the present value by end-century (800 ppm CO2) without any changes to benthic community composition. A further 5.7 % decline in net CaCO3 production is expected for each 10 % decline in calcifier cover, and net dissolution is predicted by end-century if calcifier cover drops below 18 % of the present extent. These results show the combined negative effect of both declining net calcification rates and changing benthic community composition on reefs and the importance of considering both processes for determining future reef CaCO3 production.
Analysis of Archival Low-Resolution Near-Infrared Spectra to Measure Pluto's Atmosphere.
NASA Astrophysics Data System (ADS)
Cook, Jason C.; Young, Leslie; Cruikshank, Dale P.
2017-10-01
First detected via occultation observations, Pluto's atmosphere has changed since its discovery in the 1980s (Brosch & Mendelson, 1985; Elliot et al., 1989). Between the occultations of 1988 and 2002, the surface pressure doubled (Elliot et al., 2003) as Pluto passed through perihelion in 1989. In the years following the 2002 occultation, only a slight increase in the surface pressure has been noted (Young et al. 2013; Olkin et al. 2015). High-resolution spectroscopy has also been used to determine the composition of Pluto's atmosphere. This was first successfully done in 1992 (Young et al., 1997), but no follow up detection was made until 2008 (Lellouch et al. 2009). With a gap in the occultation and spectroscopic records, we have little information on how and when Pluto's atmosphere changed. In order to fill in this gap, we are examining low spectral resolution, high signal-to-noise spectra of Pluto such as Cook et al (2014) presented previously. At this meeting, we will report on additional archive observations from Gemini. These data were taken between 2004 and 2008 using the NIRI+Altair (adaptive optics instrument) and GNIRS instruments. These have resolving powers (λ/Δλ) of ~600 and 6000, respectively. Both data sets cover the K-band spectral range (1.95 to 2.40 μm) where gaseous CH4 has several strong lines, such as the ν3+ν4 Q-branch near 2.317 μm.Funding for this work has been provided by NASA-PATM grant NNX12AK62G.
DoBias, Matthew
2010-07-12
Republicans are raising a stink about the recess appointment of Donald Berwick to lead the CMS, but providers are welcoming the patient-safety crusader to his new post. One reason providers praise Berwick is his focus on patients. "That means a lot. You take into account who I am, what I believe, what's important to me," says Elliot Sussman, left, president and CEO of Lehigh Valley Health Network, Allentown, Pa.
Pluto's Haze from 2002 - 2015: Correlation with the Solar Cycle
NASA Astrophysics Data System (ADS)
Young, Eliot; Klein, Viliam; Hartig, Kara; Resnick, Aaron; Mackie, Jason; Carriazo, Carolina; Watson, Charles; Skrutskie, Michael; Verbiscer, Anne; Nelson, Matthew; Howell, Robert; Wasserman, Lawrence; Hudson, Gordon; Gault, David; Barry, Tony; Sicardy, Bruno; Cole, Andrew; Giles, Barry; Hill, Kym
2017-04-01
Occultations by Pluto were observed 2002, 2007, 2011 and 2015, with each event observed simultaneously in two or more wavelengths. Separate wavelengths allow us to discriminate between haze opacity and refractive effects due to an atmosphere's thermal profile - these two effects are notoriously hard to separate if only single-wavelength lightcurves are available. Of those four occultations, the amount of haze in Pluto's atmosphere was highest in 2002 (Elliot et al. 2003 report an optical depth of 0.11 at 0.73 µm in the zenith direction), but undetectable in the 2007 and 2011 events (we find optical depth upper limits of 0.012 and 0.010 at 0.6 µm). Cheng et al. (2016) report a zenith optical depth of 0.018 at 0.6 µm from the haze profiles seen in New Horizons images. These four data points are correlated with the solar cycle. The 2002 haze detection occurred just after the peak of solar cycle 23, the 2007 and 2011 non-detections occurred during the solar minimum between peaks 23 and 24, and the New Horizons flyby took place just after the peak of solar cycle 24. This suggests that haze production on Pluto (a) is driven by solar UV photons or charged particles, (b) that sources and sinks on Pluto have timescales shorter than a few Earth years, and (c) the haze precursors on Pluto are not produced by Lyman-alpha radiation, because Lyman-alpha output only decreased by about one third in between the cycle 23 and 24 peaks, much less than the observed change in Pluto's haze abundances. References: Elliot, J.L. et al. (2003) Nature, Volume 424, Issue 6945, pp. 165-168.
Structure of Triton's atmosphere from the occultation of Tr176
NASA Astrophysics Data System (ADS)
Sicardy, B.; Mousis, O.; Beisker, W.; Hummel, E.; Hubbard, W. B.; Hill, R.; Reitsema, H. J.; Anderson, P.; Ball, L.; Downs, B.; Hutcheon, S.; Moy, M.; Nielsen, G.; Pink, I.; Walters, R.
1998-09-01
The occultation of the star Tr176 by Triton (Mc Donald & Elliot, AJ 109, 1352, 1995) was observed on 18 July 1997 from three stations in Queensland, Australia (Bundaberg, Ducabrook and Lochington) and one station in Texas, USA (Brownsville). All observations were made with CCD (no filter) and with portable C14 telescopes, except at Bundaberg, where a fixed 48-cm telescope was used. Time sampling rate ranges from 0.33 sec (Bundaberg) to 0.66 sec (Ducabrook and Lochington), with the intermediate value 0.5 sec at Brownsville. Isothermal fits were performed to the lightcurves in order to determine the isothermal temperature, T_iso, and the radius at half-level, R_{1/2}, of Triton's atmosphere (assumed to be composed of pure N_2). Considering the level of noise, we cannot detect any departure from isothermal profiles, and we do not see any deviations from spherical shape. A global fit yields T_iso = 53.7 +/- 2 K and R_{1/2} = 1456 +/- 3 km. We also derive the pressure at 1400 km: p1400 = 1.9 +/- 0.3 mu bars. We will discuss these results and compare them with previous works obtained by Voyager teams from the 1989 observations, and by Olkin et al. (Icarus 129, 178, 1997), who analyze two Triton occultations observed in July 1993 (Tr60) and August 1995 (Tr148). We observe a general increase of pressure at 1400 km, since Olkin et al. derive p1400 = 1.4 +/- 0.1 mu bars from the Tr148 event. This result is actually confirmed by a recent work by Elliot et al., (Nature 393, 765 1998), who note a global warming on Triton, based in particular on a new HST occultation observation in November 1997 (Tr180).
Thorn, K.A.; Cox, L.G.
2009-01-01
The naturally abundant nitrogen in soil and aquatic NOM samples from the International Humic Substances Society has been characterized by solid state CP/MAS 15N NMR. Soil samples include humic and fulvic acids from the Elliot soil, Minnesota Waskish peat and Florida Pahokee peat, as well as the Summit Hill soil humic acid and the Leonardite humic acid. Aquatic samples include Suwannee River humic, fulvic and reverse osmosis isolates, Nordic humic and fulvic acids and Pony Lake fulvic acid. Additionally, Nordic and Suwannee River XAD-4 acids and Suwannee River hydrophobic neutral fractions were analyzed. Similar to literature reports, amide/aminoquinone nitrogens comprised the major peaks in the solid state spectra of the soil humic and fulvic acids, along with heterocyclic and amino sugar/terminal amino acid nitrogens. Spectra of aquatic samples, including the XAD-4 acids, contain resolved heterocyclic nitrogen peaks in addition to the amide nitrogens. The spectrum of the nitrogen enriched, microbially derived Pony Lake, Antarctica fulvic acid, appeared to contain resonances in the region of pyrazine, imine and/or pyridine nitrogens, which have not been observed previously in soil or aquatic humic substances by 15N NMR. Liquid state 15N NMR experiments were also recorded on the Elliot soil humic acid and Pony Lake fulvic acid, both to examine the feasibility of the techniques, and to determine whether improvements in resolution over the solid state could be realized. For both samples, polarization transfer (DEPT) and indirect detection (1H-15N gHSQC) spectra revealed greater resolution among nitrogens directly bonded to protons. The amide/aminoquinone nitrogens could also be observed by direct detection experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorn, Kevin A.; Cox, Larry G.
2009-02-28
The naturally abundant nitrogen in soil and aquatic NOM samples from the International Humic Substances Society has been characterized by solid state CP/MAS ¹⁵N NMR. Soil samples include humic and fulvic acids from the Elliot soil, Minnesota Waskish peat and Florida Pahokee peat, as well as the Summit Hill soil humic acid and the Leonardite humic acid. Aquatic samples include Suwannee River humic, fulvic and reverse osmosis isolates, Nordic humic and fulvic acids and Pony Lake fulvic acid. Additionally, Nordic and Suwannee River XAD-4 acids and Suwannee River hydrophobic neutral fractions were analyzed. Similar to literature reports, amide/aminoquinone nitrogens comprisedmore » the major peaks in the solid state spectra of the soil humic and fulvic acids, along with heterocyclic and amino sugar/terminal amino acid nitrogens. Spectra of aquatic samples, including the XAD-4 acids, contain resolved heterocyclic nitrogen peaks in addition to the amide nitrogens. The spectrum of the nitrogen enriched, microbially derived Pony Lake, Antarctica fulvic acid, appeared to contain resonances in the region of pyrazine, imine and/or pyridine nitrogens, which have not been observed previously in soil or aquatic humic substances by ¹⁵N NMR. Liquid state ¹⁵N NMR experiments were also recorded on the Elliot soil humic acid and Pony Lake fulvic acid, both to examine the feasibility of the techniques, and to determine whether improvements in resolution over the solid state could be realized. For both samples, polarization transfer (DEPT) and indirect detection (¹H–¹⁵N gHSQC) spectra revealed greater resolution among nitrogens directly bonded to protons. The amide/aminoquinone nitrogens could also be observed by direct detection experiments.« less
PORTRAIT - ASTRONAUT GROUP 16 (NEW AND OLD) - MSC
1963-02-19
S63-01419 (1963) --- The first two groups of astronauts selected by the National Aeronautics and Space Administration (NASA). The original seven Mercury astronauts, selected in April 1959, are seated left to right, L. Gordon Cooper Jr., Virgil I. Grissom, M. Scott Carpenter, Walter M. Schirra Jr., John H. Glenn Jr., Alan B. Shepard Jr. and Donald K. Slayton. The second group of NASA astronauts, named in September 1962 are, standing left to right, Edward H. White II, James A. McDivitt, John W. Young, Elliot M. See Jr., Charles Conrad Jr., Frank Borman, Neil A. Armstrong, Thomas P. Stafford and James A. Lovell Jr. Photo credit: NASA
PORTRAIT - ASTRONAUT GROUP 16 (NEW AND OLD)
1963-02-09
S63-00562 (February 1963) --- Portrait of astronaut groups 1 and 2. The original seven Mercury astronauts selected by NASA in April 1959, are seated (left to right): L. Gordon Cooper Jr., Virgil I. Grissom, M. Scott Carpenter, Water M. Schirra Jr., John H. Glenn Jr., Alan B. Shepard Jr., and Donald K. Slayton. The second group of NASA astronauts, which were named in September 1962, are standing (left to right): Edward H. White II, James A. McDivitt, John W. Young, Elliot M. See Jr., Charles Conrad Jr., Frank Borman, Neil A. Armstrong, Thomas P. Stafford, and James A. Lovell Jr. Photo credit: NASA or National Aeronautics and Space Administration
2003-04-10
KENNEDY SPACE CENTER, FLA. -- (From left) Dean Schaaf, Barksdale site manager and NASA KSC Shuttle Process Integration Ground Operations manager, and Elliot Clement, an United Space Alliance engineer at Kennedy Space Center, inspect bagged pieces of Columbia at the Barksdale Hangar site. KSC workers are participating in the Columbia Recovery efforts at the Lufkin (Texas) Command Center, four field sites in East Texas, and the Barksdale, La., hangar site. KSC is working with representatives from other NASA Centers and with those from a number of federal, state and local agencies in the recovery effort. KSC provides vehicle technical expertise in the field to identify, collect and return Shuttle hardware to KSC.
Vansteenkiste, Maarten; Mouratidis, Athanasios; Lens, Willy
2010-04-01
In two cross-sectional studies we investigated whether soccer players' well-being (Study 1) and moral functioning (Studies 1 and 2) is related to performance-approach goals and to the autonomous and controlling reasons underlying their pursuit. In support of our hypotheses, we found in Study 1 that autonomous reasons were positively associated with vitality and positive affect, whereas controlling reasons were positively related to negative affect and mostly unrelated to indicators of morality. To investigate the lack of systematic association with moral outcomes, we explored in Study 2 whether performance-approach goals or their underlying reasons would yield an indirect relation to moral outcomes through their association with players' objectifying attitude-their tendency to depersonalize their opponents. Structural equation modeling showed that controlling reasons for performance-approach goals were positively associated with an objectifying attitude, which in turn was positively associated to unfair functioning. Results are discussed within the achievement goal perspective (Elliot, 2005) and self-determination theory (Deci & Ryan, 2000).
Test anxiety, perfectionism, goal orientation, and academic performance.
Eum, KoUn; Rice, Kenneth G
2011-03-01
Dimensions of perfectionism and goal orientation have been reported to have differential relationships with test anxiety. However, the degree of inter-relationship between different dimensions of perfectionism, the 2 × 2 model of goal orientations proposed by Elliot and McGregor, cognitive test anxiety, and academic performance indicators is not known. Based on data from 134 university students, we conducted correlation and regression analyses to test associations between adaptive and maladaptive perfectionism, four types of goal orientations, cognitive test anxiety, and two indicators of academic performance: proximal cognitive performance on a word list recall test and distal academic performance in terms of grade point average. Cognitive test anxiety was inversely associated with both performance indicators, and positively associated with maladaptive perfectionism and avoidance goal orientations. Adaptive and maladaptive perfectionism accounted for significant variance in cognitive test anxiety after controlling for approach and avoidance goal orientations. Overall, nearly 50% of the variance in cognitive test anxiety could be attributed to gender, goal orientations, and perfectionism. Results suggested that students who are highly test anxious are likely to be women who endorse avoidance goal orientations and are maladaptively perfectionistic.
Kim, Nam Young; Lee, Hyeon Yong
2016-03-01
To increase the cognitive enhancement provided by Aronia melanocarpa Elliot (Aronia), Aronia was extracted using 70% ethanol solvent and six cycles of intermittent ultrasonication at 120 KHz for 50 min, followed by a rest for 10 min (UE), and was also extracted using 70% ethanol for 24 h at 80°C (EE) as a control process. In both in vivo water maze and passive avoidance tests, the UE showed better performance enhancement than the EE: in the water maze, mice treated with EE and UE showed escape latencies of 62.6 s and 54.3 s, respectively; for passive avoidance, they showed retention times of 45.9 s and 38.9 s, respectively. UE downregulated the expression level of acetylcholinesterase genes to 1.46 times compared with 1.72 for EE. However, there were no significant histological differences in the hippocampus between the mice fed with EE and those fed UE. Additionally, the UE was confirmed to have a greater antioxidant effect, 0.728 versus 0.561 for EE. Comparison of the high-performance liquid chromatography chromatograms of the extracts demonstrates that the intermittent ultrasonication process may improve the cognitive activities of Aronia by eluting higher amounts of cyanidin-3-galactoside (C3G). This work is the first to report that the crude extract from the intermittent ultrasonication process provided better cognitive enhancement than a single major bioactive substance, C3G itself, possibly through the synergistic effects of other anthocyanins present in the extract, such as delphine galactoside, cyanidin arabinoside, and cyanidin glucoside. We also believe that these findings may provide a reliable basis for developing natural plant drugs to compensate for the side effects of purified and/or chemically synthesized single-component drugs rather than to compete with them.
36th International Conference on High Energy Physics
NASA Astrophysics Data System (ADS)
The Australian particle physics community was honoured to host the 36th ICHEP conference in 2012 in Melbourne. This conference has long been the reference event for our international community. The announcement of the discovery of the Higgs boson at the LHC was a major highlight, with huge international press coverage. ICHEP2012 was described by CERN Director-General, Professor Rolf Heuer, as a landmark conference for our field. In additional to the Higgs announcement, important results from neutrino physics, from flavour physics, and from physics beyond the standard model also provided great interest. There were also updates on key accelerator developments such as the new B-factories, plans for the LHC upgrade, neutrino facilities and associated detector developments. ICHEP2012 exceeded the promise expected of the key conference for our field, and really did provide a reference point for the future. Many thanks to the contribution reviewers: Andy Bakich, Csaba Balazs, Nicole Bell, Catherine Buchanan, Will Crump, Cameron Cuthbert, Ben Farmer, Sudhir Gupta, Elliot Hutchison, Paul Jackson, Geng-Yuan Jeng, Archil Kobakhidze, Doyoun Kim, Tong Li, Antonio Limosani (Head Editor), Kristian McDonald, Nikhul Patel, Aldo Saavedra, Mark Scarcella, Geoff Taylor, Ian Watson, Graham White, Tony Williams and Bruce Yabsley.
2015-10-08
Regions with exposed water ice are highlighted in blue in this composite image from New Horizons' Ralph instrument, combining visible imagery from the Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). The strongest signatures of water ice occur along Virgil Fossa, just west of Elliot crater on the left side of the inset image, and also in Viking Terra near the top of the frame. A major outcrop also occurs in Baré Montes towards the right of the image, along with numerous much smaller outcrops, mostly associated with impact craters and valleys between mountains. The scene is approximately 280 miles (450 kilometers) across. Note that all surface feature names are informal. http://ppj2:8080/catalog/PIA19963
1981-07-01
Samejima, RR-79-1), suggests that it will be more fruitful to observe the square root of an information function, rather than the information...II44 t&4 ~4J44 AJ.ISN.a -64- 0I 44 0- -J- .00 c;i 0* 0 cIJ II Ys c0 r.M A.LISN30 -65- IV-8 the estimated density functions, g*(r*) , will affect the...Yukihiro NoguchiFaculty of Education Department of Psychology University of Tokyo Elliot Hall Bongo , Bumkyoku 75 East River Road Tokyo, Japan ŕ
Is Downtown Seattle on the Hanging Wall of the Seattle Fault?
NASA Astrophysics Data System (ADS)
Pratt, T. L.
2008-12-01
The Seattle fault is an ~80-km-long thrust or reverse fault that trends east-west beneath the Puget Lowland of western Washington State, and is interpreted to extend beneath the Seattle urban area just south of the downtown area. The fault ruptured about A.D. 930 in a large earthquake that uplifted parts of the Puget Sound shoreline as much as 7 m, caused a tsunami in Puget Sound and extensive landslides throughout the area. Seismic reflection profiles indicate that the fault has 3 or more fault splays that together form the Seattle fault zone. Models for the Seattle fault zone vary considerably, but most models place the northern edge of the Seattle fault zone south of the downtown area. These interpretations require that the fault zone shifts about 2 km to the south in the Seattle area relative to its location to the east (Bellevue) and west (Bainbridge Island). Potential field anomalies, particularly prominent magnetic highs associated with dipping, shallow conglomerate layers, are not continuous in the downtown Seattle area as observed to the east and west. Compilation and re-interpretation of all the existing seismic profiles in the area indicate that the northern strand of the Seattle fault, specifically a fold associated with the northernmost, blind fault strand, lies beneath the northern part of downtown Seattle, about 1.5 to 2 km farther north than has previously been interpreted. This study focuses on one previously unpublished seismic profile in central Puget Sound that shows a remarkable image of the Seattle fault, with shallow subhorizontal layers disrupted or folded by at least two thrust faults and several shallow backthrusts. These apparently Holocene layers are arched gently upwards, with the peak of the anticline in line with Alki and Restoration Points on the east and west sides of Puget Sound, respectively. The profile shows that the shallow part of the northern fault strand dips to the south at about 35 degrees, consistent with the 35 to 40 degree dip previously interpreted from tomography data. A second fault strand about 2 km south of the northern strand causes gentle folding of the Holocene strata. Two prominent backthrusts occur on the south side of the anticline, with the southern backthrust on strike with a prominent scarp on the eastern shoreline. A large erosional paleochannel beneath west Seattle and the Duwamish waterway extends beneath Elliot Bay and obscures potential field anomalies and seismic reflection evidence for the fault strands. However, hints of fault-related features on the profiles in Elliot Bay, and clear images in Lake Washington, indicate that the fault strands extend beneath the city of Seattle in the downtown area. If indeed the northern strand of the Seattle fault lies beneath the northern part of downtown Seattle, the downtown area may experience ground deformation during a major Seattle fault earthquake and that focusing of energy in the fault zone may occur farther north than previously estimated.
Rapid Response Tools and Datasets for Post-fire Hydrological Modeling
NASA Astrophysics Data System (ADS)
Miller, Mary Ellen; MacDonald, Lee H.; Billmire, Michael; Elliot, William J.; Robichaud, Pete R.
2016-04-01
Rapid response is critical following natural disasters. Flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies after moderate and high severity wildfires. The problem is that mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fires, runoff, and erosion risks also are highly heterogeneous in space, so there is an urgent need for a rapid, spatially-explicit assessment. Past post-fire modeling efforts have usually relied on lumped, conceptual models because of the lack of readily available, spatially-explicit data layers on the key controls of topography, vegetation type, climate, and soil characteristics. The purpose of this project is to develop a set of spatially-explicit data layers for use in process-based models such as WEPP, and to make these data layers freely available. The resulting interactive online modeling database (http://geodjango.mtri.org/geowepp/) is now operational and publically available for 17 western states in the USA. After a fire, users only need to upload a soil burn severity map, and this is combined with the pre-existing data layers to generate the model inputs needed for spatially explicit models such as GeoWEPP (Renschler, 2003). The development of this online database has allowed us to predict post-fire erosion and various remediation scenarios in just 1-7 days for six fires ranging in size from 4-540 km2. These initial successes have stimulated efforts to further improve the spatial extent and amount of data, and add functionality to support the USGS debris flow model, batch processing for Disturbed WEPP (Elliot et al., 2004) and ERMiT (Robichaud et al., 2007), and to support erosion modeling for other land uses, such as agriculture or mining. The design and techniques used to create the database and the modeling interface are readily repeatable for any area or country that has the necessary topography, climate, soil, and land cover datasets.
Does sadness impair color perception? Flawed evidence and faulty methods.
Holcombe, Alex O; Brown, Nicholas J L; Goodbourn, Patrick T; Etz, Alexander; Geukes, Sebastian
2016-01-01
In their 2015 paper, Thorstenson, Pazda, and Elliot offered evidence from two experiments that perception of colors on the blue-yellow axis was impaired if the participants had watched a sad movie clip, compared to participants who watched clips designed to induce a happy or neutral mood. Subsequently, these authors retracted their article, citing a mistake in their statistical analyses and a problem with the data in one of their experiments. Here, we discuss a number of other methodological problems with Thorstenson et al.'s experimental design, and also demonstrate that the problems with the data go beyond what these authors reported. We conclude that repeating one of the two experiments, with the minor revisions proposed by Thorstenson et al., will not be sufficient to address the problems with this work.
NASA Technical Reports Server (NTRS)
Cruikshank, Dale P.; Roush, Ted L.; Owen, Tobias C.; Schmitt, Bernard; Quirico, Eric; Geballe, Thomas R.; deBergh, Catherine; Bartholomew, Mary Jane; DalleOre, Cristina M.; Doute, Sylvain
1999-01-01
We report the spectroscopic detection of H2O ice on Triton, evidenced by the broad absorptions in the near infrared at 1.55 and 2.04 micron. The detection on Triton confirms earlier preliminary studies (D. P. Cruikshank, R. H. Brown, and R. N. Clark, Icarus 58, 293-305, 1984). The spectra support the contention that H2O ice on Triton is in a crystalline (cubic or hexagonal) phase. Our spectra (1.87-2.5 micron) taken over an interval of nearly 3.5 years do not show any significant changes that might relate to reports of changes in Triton's spectral reflectance (B. Buratti, M. D. Hicks, and R. L. Newburn, Jr., Nature 397, 219, 1999), or in Triton's volatile inventory (J. L. Elliot et al., Nature 393, 765-767, 1998).
Performing Theory: Playing in the Music Therapy Discourse.
Kenny, Carolyn
2015-01-01
Performative writing is an art form that seeks to enliven our discourse by including the senses as a primary source of information processing. Through performative writing, one is seduced into engaging with the aesthetic. My art is music. My craft is Music Therapy. My theme is performing theory. Listen to the sound and silence of words, phrases, punctuation, syllables, format. My muses? I thank D. Soyini Madison, Ron Pelias, Philip Glass, Elliot Eisner, and Tom Barone for inspiration, and my teachers/Indigenous Elders and knowledge keepers who embraced the long tradition of oral transmission of knowledge and the healing power of sound. Stay, stay in the presence of the aesthetic. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Skupień, Katarzyna; Kostrzewa-Nowak, Dorota; Oszmiański, Jan; Tarasiuk, Jolanta
2008-05-01
The aim of the present study was to determine in vitro antileukaemic activities of extracts obtained from chokeberry (Aronia melanocarpa [Michx] Elliot) and mulberry (Morus alba L.) leaves against promyelocytic HL60 cell line and its multidrug resistant sublines exhibiting two different MDR phenotypes: HL60/VINC (overexpressing P-glycoprotein) and HL60/DOX (overexpressing MRP1 protein). It was found that the extracts from chokeberry and mulberry leaves were active against the sensitive leukaemic cell line HL60 and retained the in vitro activity against multidrug resistant sublines (HL60/VINC and HL60/DOX). The values of resistance factor (RF) found for these extracts were very low lying in the range 1.2-1.6.
Deep Space Optical Link ARQ Performance Analysis
NASA Technical Reports Server (NTRS)
Clare, Loren; Miles, Gregory
2016-01-01
Substantial advancements have been made toward the use of optical communications for deep space exploration missions, promising a much higher volume of data to be communicated in comparison with present -day Radio Frequency (RF) based systems. One or more ground-based optical terminals are assumed to communicate with the spacecraft. Both short-term and long-term link outages will arise due to weather at the ground station(s), space platform pointing stability, and other effects. To mitigate these outages, an Automatic Repeat Query (ARQ) retransmission method is assumed, together with a reliable back channel for acknowledgement traffic. Specifically, the Licklider Transmission Protocol (LTP) is used, which is a component of the Disruption-Tolerant Networking (DTN) protocol suite that is well suited for high bandwidth-delay product links subject to disruptions. We provide an analysis of envisioned deep space mission scenarios and quantify buffering, latency and throughput performance, using a simulation in which long-term weather effects are modeled with a Gilbert -Elliot Markov chain, short-term outages occur as a Bernoulli process, and scheduled outages arising from geometric visibility or operational constraints are represented. We find that both short- and long-term effects impact throughput, but long-term weather effects dominate buffer sizing and overflow losses as well as latency performance.
Pluto and Charon's Visible Spectrum (3500-9000 Å)
NASA Astrophysics Data System (ADS)
Cook, J. C.; Wyckoff, S.
2003-05-01
Uncertainty in the chemical composition of Pluto's atmosphere severely limits our understanding of its physical properties. The only atmospheric gas identified spectroscopically to date has been CH4 (Young et al., 1997), while an upper limit has been set for CO gas (Young et al., 2001). Infrared detection of surface N2 ice (Owen et al., 1993) together with models based on occultation data (Elliot and Young, 1992) indicate that Pluto's atmosphere is probably dominated by CO and/or N2 (Yelle and Lunine, 1989; Hubbard et al., 1990; Stansberry et al., 1994). If the atmosphere is in vapor pressure equilibrium with the surface ice, then N2 gas would dominate the atmosphere with abundances ≳ 90% (Owen et al., 1993). Here we report on a search to identify atmospheric spectral features using data collected with the Steward Observatory 90'' Bok Telescope and the B & C Spectrograph. Pluto-Charon spectra were obtained on five nights in May and June 2003 using 300 l/mm grating blazed in the blue and red spectral regions. We present spectra covering the visible range from 3500 to 9000 Å : (λ /Δ λ ˜ 750 at 6000 Å), and discuss limits set on gases in the atmosphere and extended exosphere of the Pluto-Charon system. J. C. Cook would like to acknowledge support from NASA Space Grant Fellowship.
Cultural stereotypes and personal beliefs about individuals with dwarfism.
Heider, Jeremy D; Scherer, Cory R; Edlund, John E
2013-01-01
Three studies assessed the content of cultural stereotypes and personal beliefs regarding individuals with dwarfism among "average height" (i.e., non-dwarf) individuals. In Studies 1 and 2, undergraduates from three separate institutions selected adjectives to reflect traits constituting both the cultural stereotype about dwarves and their own personal beliefs about dwarves (cf. Devine & Elliot, 1995). The most commonly endorsed traits for the cultural stereotype tended to be negative (e.g., weird, incapable, childlike); the most commonly endorsed traits for personal beliefs were largely positive (e.g., capable, intelligent, kind). In Study 3, undergraduates from two separate institutions used an open-ended method to indicate their personal beliefs about dwarves (cf. Eagly, Mladinic, & Otto, 1994). Responses contained a mixture of positive and negative characteristics, suggesting a greater willingness to admit to negative personal beliefs using the open-ended method.
Houston, Robert Stroud; Graff, P.J.; Karlstrom, K.E.; Root, Forrest
1977-01-01
Middle Precambrian miogeosynclinal metasedimentary rocks o# the Sierra Madre and Medicine Bow Mountains of southeastern Wyoming contain radioactive quartz-pebble conglomerates of possible economic interest. These conglomerates do not contain ore-grade uranium in surface outcrops, but an earlier report on the geochemistry of the Arrastre Lake area of the Medicine Bow Mountains shows that ore-grade deposits may be present in the subsurface. This report describes the stratigraphy of the host metasedimentary rocks and the stratigraphic setting of the radioactive conglomerates in both the Sierra Madre and Medicine Bow Mountains, and compares these rock units with those of the Blind River-Elliot Lake uranium district in Canada. The location of radioactive .conglomerates is given so that further exploration may be undertaken by interested parties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Nam Lyong, E-mail: nlkang@pusan.ac.kr
2014-12-07
The electron spin relaxation times in a system of electrons interacting with piezoelectric phonons mediated through spin-orbit interactions were calculated using the formula derived from the projection-reduction method. The results showed that the temperature and magnetic field dependence of the relaxation times in InSb and InAs were similar. The piezoelectric material constants obtained by a comparison with the reported experimental result were P{sub pe}=4.0×10{sup 22} eV/m for InSb and P{sub pe}=1.2×10{sup 23} eV/m for InAs. The result also showed that the relaxation of the electron spin by the Elliot-Yafet process is more relevant for InSb than InAs at a low density.
A New Model for the Seasonal Evolution of Triton
NASA Astrophysics Data System (ADS)
Forget, F.; Decamp, N.; Berthier, J.; Le Guyader, C.
2000-10-01
The seasonal evolution of Triton's surface and atmosphere remains poorly understood. No model [1] has been able to fully reproduce the main characterictics of the Voyager 2 observations in 1989 in combination with the "Global warming" recently inferred from stellar occultations [2]. Within this context, we have developped a new thermal model to study the seasonal nitrogen cycle on Triton. The model is the surface part of a Triton atmosphere General circulation model developped at LMD [3]. The nitrogen cycle was found to be very sensitive to Triton complex seasonal variations of the subsolar point latitude, especially during the current decade (south summer solstice). Since only pre-Voyager formulations were available for such a study, this has motivated some new calculations of Triton's motion based on more recent rotationnal elements combined with a relatively complete dynamic solution [4] adapted to Triton. A new analytic formulation suitable for climate modelling has been derived. On this basis, we wish to suggest a new, realistic scenario to explain Triton's apparence and evolution based on solar-induced variation of the frost albedo. Such variations have been observed in Mars CO2 ice seasonal polar caps [5]. Although they seem to result from complex microphysical behavior, they are likely to occur on Triton since both Triton and Mars polar caps are composed of weakly absorbing ice (N2 or CO2) in vapor pressure equilibrium with the main constituant of the atmosphere. [1] e.g. Hansen and Paige, Icarus 99, 273-288 (1992); Brown and Kirk, J. Geophys. Res. 99, 1965-1981 (1994); Spencer and Moore, Icarus 99, 261-272 (1992). [2] Elliot et al., Nature 393, 765-767 (1998). [3] Forget, Descamp and Hourdin, in ``Pluto and Triton, comparisons and evolution over time", Lowell Observatory's fourth annual workshop, Flagstaff, Arizona. (1999) [4] Le Guyader, Astron. Astrophys. 272, 687-694 (1993). [5] Kieffer et al., J. Geophys. Res. 105, 9653-9700 (2000).
Shapes and binary fractions of Jovian Trojans and Hildas through NEOWISE
NASA Astrophysics Data System (ADS)
Sonnett, S.; Mainzer, A.; Grav, T.; Bauer, J.; Masiero, J.; Stevenson, R.; Nugent, C.
2014-07-01
Jovian Trojans (hereafter, Trojans) and Hildas are indicative of planetary migration patterns since their capture and physical state must be explained by dynamical evolution models. Early models of minimal planetary migration necessitate that Trojans were dynamically captured from the giant planet region (e.g., Marzari & Scholl 1998). The Nice model instead suggests that Trojans were injected from the outer solar system during a period of significant giant planet migration (e.g., Morbidelli et al. 2005). A more recent version of the Nice model suggests that asymmetric scatterings and collisions would have taken place, producing dissimilar L4 and L5 clouds (Nesvorny et al. 2013). Each of these formation scenarios predicts a different origin and/or collisional evolution for Trojans, which can be inferred from rotation properties. Namely, the physical shape as a function of size helps determine the degree of collisional processing (Farinella et al. 1992). Also, the binary fraction as a function of separation between the two components can be used to determine the dominant binary formation mechanism and thus helps characterize the dynamical environment (e.g., Kern & Elliot 2006). Rotational variation usually corresponds to elongated shapes, but high amplitudes (> 0.9 magnitudes; Sheppard & Jewitt 2004) can only be explained by close or contact binaries. Therefore, rotational lightcurves can be used to infer both shape and the presence of a close companion. Motivated by the need for more observational constraints on solar system formation models and a poor understanding of the rotation properties and binary fraction of Trojans and Hildas, we are studying their rotational lightcurve amplitudes using infrared photometry from NEOWISE (Mainzer et al. 2011; Grav et al. 2011) in order to determine debiased rotational lightcurve amplitude distributions for various Trojan subpopulations and for Trojans compared to Hildas. Preliminary amplitude distributions show a large fraction of potential close or contact binaries (having Δ m > 0.9). These distributions can be used to constrain the collisional and dynamical history of solar system formation models.
Modern psychometrics for assessing achievement goal orientation: a Rasch analysis.
Muis, Krista R; Winne, Philip H; Edwards, Ordene V
2009-09-01
A program of research is needed that assesses the psychometric properties of instruments designed to quantify students' achievement goal orientations to clarify inconsistencies across previous studies and to provide a stronger basis for future research. We conducted traditional psychometric and modern Rasch-model analyses of the Achievement Goals Questionnaire (AGQ, Elliot & McGregor, 2001) and the Patterns of Adaptive Learning Scale (PALS, Midgley et al., 2000) to provide an in-depth analysis of the two most popular instruments in educational psychology. For Study 1, 217 undergraduate students enrolled in educational psychology courses participated. Thirty-four were male and 181 were female (two did not respond). Participants completed the AGQ in the context of their educational psychology class. For Study 2, 126 undergraduate students enrolled in educational psychology courses participated. Thirty were male and 95 were female (one did not respond). Participants completed the PALS in the context of their educational psychology class. Traditional psychometric assessments of the AGQ and PALS replicated previous studies. For both, reliability estimates ranged from good to very good for raw subscale scores and fit for the models of goal orientations were good. Based on traditional psychometrics, the AGQ and PALS are valid and reliable indicators of achievement goals. Rasch analyses revealed that estimates of reliability for items were very good but respondent ability estimates varied from poor to good for both the AGQ and PALS. These findings indicate that items validly and reliably reflect a group's aggregate goal orientation, but using either instrument to characterize an individual's goal orientation is hazardous.
The Thermal Structure of Triton's Atmosphere: Results from the 1993 and 1995 Occultations
NASA Astrophysics Data System (ADS)
Olkin, C. B.; Elliot, J. L.; Hammel, H. B.; Cooray, A. R.; McDonald, S. W.; Foust, J. A.; Bosh, A. S.; Buie, M. W.; Millis, R. L.; Wasserman, L. H.; Dunham, E. W.; Young, L. A.; Howell, R. R.; Hubbard, W. B.; Hill, R.; Marcialis, R. L.; McDonald, J. S.; Rank, D. M.; Holbrook, J. C.; Reitsema, H. J.
1997-09-01
This paper presents new results about Triton's atmospheric structure from the analysis of all ground-based stellar occultation data recorded to date, including one single-chord occultation recorded on 1993 July 10 and nine occultation lightcurves from the double-star event on 1995 August 14. These stellar occultation observations made both in the visible and in the infrared have good spatial coverage of Triton, including the first Triton central-flash observations, and are the first data to probe the altitude level 20-100 km on Triton. The small-planet lightcurve model of J. L. Elliot and L. A. Young (1992,Astron. J.103,991-1015) was generalized to include stellar flux refracted by the far limb, and then fitted to the data. Values of the pressure, derived from separate immersion and emersion chords, show no significant trends with latitude, indicating that Triton's atmosphere is spherically symmetric at ∼50-km altitude to within the error of the measurements; however, asymmetry observed in the central flash indicates the atmosphere is not homogeneous at the lowest levels probed (∼20-km altitude). From the average of the 1995 occultation data, the equivalent-isothermal temperature of the atmosphere is 47 ± 1 K and the atmospheric pressure at 1400-km radius (∼50-km altitude) is 1.4 ± 0.1 μbar. Both of these are not consistent with a model based on Voyager UVS and RSS observations in 1989 (D. F. Strobel, X. Zhu, M. E. Summers, and M. H. Stevens, 1996,Icarus120,266-289). The atmospheric temperature from the occultation is 5 K colder than that predicted by the model and the observed pressure is a factor of 1.8 greater than the model. In our opinion, the disagreement in temperature and pressure is probably due to modeling problems at the microbar level, since measurements at this level have not previously been made. Alternatively, the difference could be due to seasonal change in Triton's atmospheric structure.
NASA Astrophysics Data System (ADS)
Lin, Huey-Wen; Liu, Keh-Fei
2012-03-01
It is argued by the author that the canonical form of the quark energy-momentum tensor with a partial derivative instead of the covariant derivative is the correct definition for the quark momentum and angular momentum fraction of the nucleon in covariant quantization. Although it is not manifestly gauge-invariant, its matrix elements in the nucleon will be nonvanishing and are gauge-invariant. We test this idea in the path-integral quantization by calculating correlation functions on the lattice with a gauge-invariant nucleon interpolation field and replacing the gauge link in the quark lattice momentum operator with unity, which corresponds to the partial derivative in the continuum. We find that the ratios of three-point to two-point functions are zero within errors for both the u and d quarks, contrary to the case without setting the gauge links to unity.
The effect of colour on children's cognitive performance.
Brooker, Alice; Franklin, Anna
2016-06-01
The presence of red appears to hamper adults' cognitive performance relative to other colours (see Elliot & Maier, 2014, Ann. Rev. Psychol. 65, 95). Here, we investigate whether colour affects cognitive performance in 8- and 9-year-olds. Children completed a battery of tasks once in the presence of a coloured screen that was one of eight colours and once in the presence of a grey screen. Performance was assessed for each colour relative to the grey baseline, and differences across colours were compared. We find a significant difference in performance across colours, with significantly worse performance in the presence of red than grey. The effect of colour did not significantly interact with task. The findings suggest that colour can affect children's cognitive performance and that there is a detrimental effect of red. Findings are related to the adult literature and implications for educational contexts are discussed. © 2015 The British Psychological Society.
Gate-driven pure spin current in graphene
NASA Astrophysics Data System (ADS)
Lin, Xiaoyang; Su, Li; Zhang, Youguang; Bournel, Arnaud; Zhang, Yue; Klein, Jacques-Olivier; Zhao, Weisheng; Fert, Albert
An important challenge of spin current based devices is to realize long-distance transport and efficient manipulation of pure spin current without frequent spin-charge conversions. Here, the mechanism of gate-driven pure spin current in graphene is presented. Such a mechanism relies on the electrical gating of conductivity and spin diffusion length in graphene. The gate-driven feature is adopted to realize the pure spin current demultiplexing operation, which enables gate-controllable distribution of the pure spin current into graphene branches. Compared with Elliot-Yafet spin relaxation mechanism, D'yakonov-Perel spin relaxation mechanism results in more appreciable demultiplexing performance, which also implies a feasible strategy to characterize the spin relaxation mechanisms. The unique feature of the pure spin current demultiplexing operation would pave a way for ultra-low power spin logic beyond CMOS. Supported by the NSFC (61627813, 51602013) and the 111 project (B16001).
Valcheva-Kuzmanova, Stefka; Blagović, Branka; Valić, Srećko
2012-04-01
The fruits of Aronia melanocarpa (Michx.) Elliot contain large amounts of phenolic substances, mainly procyanidins, anthocyanins and other flavonoids, and phenolic acids. The ability of phenolic substances to act as antioxidants has been well established. In this study, we investigated the radical scavenging activity of A. melanocarpa fruit juice (AMFJ). The method used was electron spin resonance (ESR) spectroscopy. The galvinoxyl free radical was used as a scavenging object. AMFJ was added to the galvinoxyl free radical solution. The measure of the radical scavenging activity was the decrease of signal intensity. AMFJ showed a potent antiradical activity causing a strong and rapid decrease of signal intensity as a function of time and juice concentration. This effect of AMFJ was probably due to the activity of its phenolic constituents. The ESR measurements in this study showed a pronounced radical scavenging effect of AMFJ, an important mechanism of its antioxidant activity.
Wei, Jie; Zhang, Guokun; Zhang, Xiao; Xu, Dexin; Gao, Jun; Fan, Jungang; Zhou, Zhiquan
2017-07-26
Aging is the greatest risk factor for most neurodegenerative diseases, which is associated with decreasing cognitive function and significantly affecting life quality in the elderly. Computational analysis suggested that 4 anthocyanins from chokeberry fruit increased Klotho (aging-suppressor) structural stability, so we hypothesized that chokeberry anthocyanins could antiaging. To explore the effects of anthocyanins treatment on brain aging, mice treated with 15 or 30 mg/kg anthocyanins by gavage and injected D-galactose accelerated aging per day. After 8 weeks, cognitive and noncognitive components of behavior were determined. Our studies showed that anthocyanins blocked age-associated cognitive decline and response capacity in senescence accelerated mice. Furthermore, mice treated with anthocyanins-supplemented showed better balance of redox systems (SOD, GSH-PX, and MDA) in all age tests. Three major monoamines were norepinephrine, dopamine, and 5-hydroxytryptamine, and their levels were significantly increased; the levels of inflammatory cytokines (COX2, TGF-β1, and IL-1) transcription and DNA damage were decreased significantly in brains of anthocyanins treated mice compared to aged models. The DNA damage signaling pathway was also regulated with anthocyanins. Our results suggested that anthocyanins was a potential approach for maintaining thinking and memory in aging mice, possibly by regulating the balance of redox system and reducing inflammation accumulation, and the most important factor was inhibiting DNA damage.
The emergence of collective phenomena in systems with random interactions
NASA Astrophysics Data System (ADS)
Abramkina, Volha
Emergent phenomena are one of the most profound topics in modern science, addressing the ways that collectivities and complex patterns appear due to multiplicity of components and simple interactions. Ensembles of random Hamiltonians allow one to explore emergent phenomena in a statistical way. In this work we adopt a shell model approach with a two-body interaction Hamiltonian. The sets of the two-body interaction strengths are selected at random, resulting in the two-body random ensemble (TBRE). Symmetries such as angular momentum, isospin, and parity entangled with complex many-body dynamics result in surprising order discovered in the spectrum of low-lying excitations. The statistical patterns exhibited in the TBRE are remarkably similar to those observed in real nuclei. Signs of almost every collective feature seen in nuclei, namely, pairing superconductivity, deformation, and vibration, have been observed in random ensembles [3, 4, 5, 6]. In what follows a systematic investigation of nuclear shape collectivities in random ensembles is conducted. The development of the mean field, its geometry, multipole collectivities and their dependence on the underlying two-body interaction are explored. Apart from the role of static symmetries such as SU(2) angular momentum and isospin groups, the emergence of dynamical symmetries including the seniority SU(2), rotational symmetry, as well as the Elliot SU(3) is shown to be an important precursor for the existence of geometric collectivities.
2015-10-16
The Ralph instrument on NASA's New Horizons spacecraft detected water ice on Pluto's surface, picking up on the ice's near-infrared spectral characteristics. (See featured image from Oct. 8, 2015.) The middle panel shows a region west of Pluto's "heart" feature -- which the mission team calls Tombaugh Regio -- about 280 miles (450 kilometers) across. It combines visible imagery from Ralph's Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). Areas with the strongest water ice spectral signature are highlighted in blue. Major outcrops of water ice occur in regions informally called Viking Terra, along Virgil Fossa west of Elliot crater, and in Baré Montes. Numerous smaller outcrops are associated with impact craters and valleys between mountains. In the lower left panel, LEISA spectra are shown for two regions indicated by cyan and magenta boxes. The white curve is a water ice model spectrum, showing similar features to the cyan spectrum. The magenta spectrum is dominated by methane ice absorptions. The lower right panel shows an MVIC enhanced color view of the region in the white box, with MVIC's blue, red and near-infrared filters displayed in blue, green and red channels, respectively. The regions showing the strongest water ice signature are associated with terrains that are actually a lighter shade of red. http://photojournal.jpl.nasa.gov/catalog/PIA20030
Jaine, Fabrice R. A.; Couturier, Lydie I. E.; Weeks, Scarla J.; Townsend, Kathy A.; Bennett, Michael B.; Fiora, Kym; Richardson, Anthony J.
2012-01-01
Manta rays Manta alfredi are present all year round at Lady Elliot Island (LEI) in the southern Great Barrier Reef, Australia, with peaks in abundance during autumn and winter. Drivers influencing these fluctuations in abundance of M. alfredi at the site remain uncertain. Based on daily count, behavioural, weather and oceanographic data collected over a three-year period, this study examined the link between the relative number of sightings of manta rays at LEI, the biophysical environment, and the habitat use of individuals around the LEI reef using generalised additive models. The response variable in each of the three generalised additive models was number of sightings (per trip at sea) of cruising, cleaning or foraging M. alfredi. We used a set of eleven temporal, meteorological, biological, oceanographic and lunar predictor variables. Results for cruising, cleaning and foraging M. alfredi explained 27.5%, 32.8% and 36.3% of the deviance observed in the respective models and highlighted five predictors (year, day of year, wind speed, chlorophyll-a concentration and fraction of moon illuminated) as common influences to the three models. There were more manta rays at LEI in autumn and winter, slower wind speeds, higher productivity, and around the new and full moon. The winter peak in sightings of foraging M. alfredi was found to precede peaks in cleaning and cruising activity around the LEI reef, which suggests that enhanced food availability may be a principal driver for this seasonal aggregation. A spatial analysis of behavioural observations highlighted several sites around the LEI reef as ‘multi-purpose’ areas where cleaning and foraging activities commonly occur, while the southern end of the reef is primarily a foraging area. The use of extensive citizen science datasets, such as those collected by dive operators in this study, is encouraged as they can provide valuable insights into a species' ecology. PMID:23056255
Effect of Aronia melanocarpa fruit juice on amiodarone-induced pneumotoxicity in rats.
Valcheva-Kuzmanova, Stefka; Stavreva, Galya; Dancheva, Violeta; Terziev, Ljudmil; Atanasova, Milena; Stoyanova, Angelina; Dimitrova, Anelia; Shopova, Veneta
2014-04-01
The fruits of Aronia melanocarpa (Michx.) Elliot is extremely rich in biologically active polyphenols. We studied the protective effect of A. melanocarpa fruit juice (AMFJ) in a model of amiodarone (AD)-induced pneumotoxicity in rats. AD was instilled intratracheally on days 0 and 2 (6.25 mg/kg). AMFJ (5 mL/kg and 10 mL/kg) was given orally from day 1 to days 2, 4, 9, and 10 to rats, which were sacrificed respectively on days 3, 5, 10, and 28 when biochemical, cytological, and immunological assays were performed. AMFJ antagonized AD-induced increase of the lung weight coefficient. In bronchoalveolar lavage fluid, AD increased significantly the protein content, total cell count, polymorphonuclear cells, lymphocytes and the activity of lactate dehydrogenase, acid phosphatase and alkaline phosphatase on days 3 and 5. In AMFJ-treated rats these indices of direct toxic damage did not differ significantly from the control values. In lung tissue, AD induced oxidative stress measured by malondialdehyde content and fibrosis assessed by the hydroxyproline level. AMFJ prevented these effects of AD. In rat serum, AD caused a significant elevation of interleukin IL-6 on days 3 and 5, and a decrease of IL-10 on day 3. In AMFJ-treated rats, these indices of inflammation had values that did not differ significantly from the control ones. AMFJ could have a protective effect against AD-induced pulmonary toxicity as evidenced by the reduced signs of AD-induced direct toxic damage, oxidative stress, inflammation, and fibrosis.
Electron spin relaxation in two polymorphic structures of GaN
NASA Astrophysics Data System (ADS)
Kang, Nam Lyong
2015-03-01
The relaxation process of electron spin in systems of electrons interacting with piezoelectric deformation phonons that are mediated through spin-orbit interactions was interpreted from a microscopic point of view using the formula for the electron spin relaxation times derived by a projection-reduction method. The electron spin relaxation times in two polymorphic structures of GaN were calculated. The piezoelectric material constant for the wurtzite structure obtained by a comparison with a previously reported experimental result was {{P}pe}=1.5 × {{10}29} eV {{m}-1}. The temperature and magnetic field dependence of the relaxation times for both wurtzite and zinc-blende structures were similar, but the relaxation times in zinc-blende GaN were smaller and decreased more rapidly with increasing temperature and magnetic field than that in wurtzite GaN. This study also showed that the electron spin relaxation for wurtzite GaN at low density could be explained by the Elliot-Yafet process but not for zinc-blende GaN in the metallic regime.
Revisiting the 1988 Pluto Occultation
NASA Astrophysics Data System (ADS)
Bosh, Amanda S.; Dunham, Edward W.; Young, Leslie A.; Slivan, Steve; Barba née Cordella, Linda L.; Millis, Robert L.; Wasserman, Lawrence H.; Nye, Ralph
2015-11-01
In 1988, Pluto's atmosphere was surmised to exist because of the surface ices that had been detected through spectroscopy, but it had not yet been directly detected in a definitive manner. The key to making such a detection was the stellar occultation method, used so successfully for the discovery of the Uranian rings in 1977 (Elliot et al. 1989; Millis et al. 1993) and before that for studies of the atmospheres of other planets.On 9 June 1988, Pluto occulted a star, with its shadow falling over the South Pacific Ocean region. One team of observers recorded this event from the Kuiper Airborne Observatory, while other teams captured the event from various locations in Australia and New Zealand. Preceding this event, extensive astrometric observations of Pluto and the star were collected in order to refine the prediction.We will recount the investigations that led up to this important Pluto occultation, discuss the unexpected atmospheric results, and compare the 1988 event to the recent 2015 event whose shadow followed a similar track through New Zealand and Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutier, N.R.; Clulow, F.V.; Lim, T.P.
The 226Ra level in vegetation growing on U mine tailings in Elliot Lake, Ontario, Canada, was 211 + 22 mBq g-1 (dry weight) compared to less than 7 mBq g-1 (dry weight) in material from a control site. Skeletons of meadow voles (Microtus pennsylvanicus) established on the tailings had concentrations of 226Ra of 6083 +/- 673 mBq per animal in winter; 7163 +/- 1077 mBq per animal in spring; 1506 +/- 625 mBq per animal in summer; and 703 +/- 59 mBq per animal in fall, compared to less than 7 mBq per animal in controls. The /sup 226/Ra transfermore » coefficient from vegetation to voles (defined as total millibecquerels of /sup 226/Ra in adult vole per total millibecquerels of 226Ra consumed by the vole in its lifetime) was calculated as 4.6 +/- 2.9 X 10(-2) in summer and 2.8 +/- 0.6 X 10(-2) in fall.« less
Electron spin resonance measurement of radical scavenging activity of Aronia melanocarpa fruit juice
Valcheva-Kuzmanova, Stefka; Blagović, Branka; Valić, Srećko
2012-01-01
Background: The fruits of Aronia melanocarpa (Michx.) Elliot contain large amounts of phenolic substances, mainly procyanidins, anthocyanins and other flavonoids, and phenolic acids. The ability of phenolic substances to act as antioxidants has been well established. Objective: In this study, we investigated the radical scavenging activity of A. melanocarpa fruit juice (AMFJ). Materials and Methods: The method used was electron spin resonance (ESR) spectroscopy. The galvinoxyl free radical was used as a scavenging object. AMFJ was added to the galvinoxyl free radical solution. The measure of the radical scavenging activity was the decrease of signal intensity. Results: AMFJ showed a potent antiradical activity causing a strong and rapid decrease of signal intensity as a function of time and juice concentration. This effect of AMFJ was probably due to the activity of its phenolic constituents. Conclusion: The ESR measurements in this study showed a pronounced radical scavenging effect of AMFJ, an important mechanism of its antioxidant activity. PMID:22701293
Effects of wireless packet loss in industrial process control systems.
Liu, Yongkang; Candell, Richard; Moayeri, Nader
2017-05-01
Timely and reliable sensing and actuation control are essential in networked control. This depends on not only the precision/quality of the sensors and actuators used but also on how well the communications links between the field instruments and the controller have been designed. Wireless networking offers simple deployment, reconfigurability, scalability, and reduced operational expenditure, and is easier to upgrade than wired solutions. However, the adoption of wireless networking has been slow in industrial process control due to the stochastic and less than 100% reliable nature of wireless communications and lack of a model to evaluate the effects of such communications imperfections on the overall control performance. In this paper, we study how control performance is affected by wireless link quality, which in turn is adversely affected by severe propagation loss in harsh industrial environments, co-channel interference, and unintended interference from other devices. We select the Tennessee Eastman Challenge Model (TE) for our study. A decentralized process control system, first proposed by N. Ricker, is adopted that employs 41 sensors and 12 actuators to manage the production process in the TE plant. We consider the scenario where wireless links are used to periodically transmit essential sensor measurement data, such as pressure, temperature and chemical composition to the controller as well as control commands to manipulate the actuators according to predetermined setpoints. We consider two models for packet loss in the wireless links, namely, an independent and identically distributed (IID) packet loss model and the two-state Gilbert-Elliot (GE) channel model. While the former is a random loss model, the latter can model bursty losses. With each channel model, the performance of the simulated decentralized controller using wireless links is compared with the one using wired links providing instant and 100% reliable communications. The sensitivity of the controller to the burstiness of packet loss is also characterized in different process stages. The performance results indicate that wireless links with redundant bandwidth reservation can meet the requirements of the TE process model under normal operational conditions. When disturbances are introduced in the TE plant model, wireless packet loss during transitions between process stages need further protection in severely impaired links. Techniques such as retransmission scheduling, multipath routing and enhanced physical layer design are discussed and the latest industrial wireless protocols are compared. Published by Elsevier Ltd.
Effects of Wireless Packet Loss in Industrial Process Control Systems
Liu, Yongkang; Candell, Richard; Moayeri, Nader
2017-01-01
Timely and reliable sensing and actuation control are essential in networked control. This depends on not only the precision/quality of the sensors and actuators used but also on how well the communications links between the field instruments and the controller have been designed. Wireless networking offers simple deployment, reconfigurability, scalability, and reduced operational expenditure, and is easier to upgrade than wired solutions. However, the adoption of wireless networking has been slow in industrial process control due to the stochastic and less than 100 % reliable nature of wireless communications and lack of a model to evaluate the effects of such communications imperfections on the overall control performance. In this paper, we study how control performance is affected by wireless link quality, which in turn is adversely affected by severe propagation loss in harsh industrial environments, co-channel interference, and unintended interference from other devices. We select the Tennessee Eastman Challenge Model (TE) for our study. A decentralized process control system, first proposed by N. Ricker, is adopted that employs 41 sensors and 12 actuators to manage the production process in the TE plant. We consider the scenario where wireless links are used to periodically transmit essential sensor measurement data, such as pressure, temperature and chemical composition to the controller as well as control commands to manipulate the actuators according to predetermined setpoints. We consider two models for packet loss in the wireless links, namely, an independent and identically distributed (IID) packet loss model and the two-state Gilbert-Elliot (GE) channel model. While the former is a random loss model, the latter can model bursty losses. With each channel model, the performance of the simulated decentralized controller using wireless links is compared with the one using wired links providing instant and 100 % reliable communications. The sensitivity of the controller to the burstiness of packet loss is also characterized in different process stages. The performance results indicate that wireless links with redundant bandwidth reservation can meet the requirements of the TE process model under normal operational conditions. When disturbances are introduced in the TE plant model, wireless packet loss during transitions between process stages need further protection in severely impaired links. Techniques such as retransmission scheduling, multipath routing and enhanced physical layer design are discussed and the latest industrial wireless protocols are compared. PMID:28190566
Stellar Occultation Probe of Triton's Atmosphere
NASA Technical Reports Server (NTRS)
Elliot, James L.
1998-01-01
The goals of this research were (i) to better characterize Triton's atmospheric structure by probing a region not well investigated by Voyager and (ii) to begin acquiring baseline data for an investigation of the time evolution of the atmosphere which will set limits on the thermal conductivity of the surface and the total mass of N2 in the atmosphere. Our approach was to use observations (with the Kuiper Airborne Observatory) of a stellar occultation by Triton that was predicted to occur on 1993 July 10. As described in the attached reprint, we achieved these objectives through observation of this occultation and a subsequent one with the KAO in 1995. We found new results about Triton's atmospheric structure from the analysis of the two occultations observed with the KAO and ground-based data. These stellar occultation observations made both in the visible and infrared, have good spatial coverage of Triton including the first Triton central-flash observations, and are the first data to probe the 20-100 km altitude level on Triton. The small-planet light curve model of Elliot and Young (AJ 103, 991-1015) was generalized to include stellar flux refracted by the far limb, and then fitted to the data. Values of the pressure, derived from separate immersion and emersion chords, show no significant trends with latitude indicating that Triton's atmosphere is spherically symmetric at approximately 50 km altitude to within the error of the measurements. However, asymmetry observed in the central flash indicates the atmosphere is not homogeneous at the lowest levels probed (approximately 20 km altitude). From the average of the 1995 occultation data, the equivalent-isothermal temperature of the atmosphere is 47 +/- 1 K and the atmospheric pressure at 1400 km radius (approximately 50 km altitude) is 1.4 +/- 0.1 microbar. Both of these are not consistent with a model based on Voyager UVS and RSS observations in 1989 (Strobel et al, Icarus 120, 266-289). The atmospheric temperature from the occultation is 5 K colder than that predicted by the model and the observed pressure is a factor of 1.8 greater than the model.
Hamylton, Sarah
2014-01-01
A geomorphic assessment of reef system calcification is conducted for past (3200 Ka to present), present and future (2010–2100) time periods. Reef platform sediment production is estimated at 569 m3 yr−1 using rate laws that express gross community carbonate production as a function of seawater aragonite saturation, community composition and rugosity and incorporating estimates of carbonate removal from the reef system. Key carbonate producers including hard coral, crustose coralline algae and Halimeda are mapped accurately (mean R2 = 0.81). Community net production estimates correspond closely to independent census-based estimates made in-situ (R2 = 0.86). Reef-scale outputs are compared with historic rates of production generated from (i) radiocarbon evidence of island deposition initiation around 3200 years ago, and (ii) island volume calculated from a high resolution island digital elevation model. Contemporary carbonate production rates appear to be remarkably similar to historical values of 573 m3 yr−1. Anticipated future seawater chemistry parameters associated with an RCP8.5 emissions scenario are employed to model rates of net community calcification for the period 2000–2100 on the basis of an inorganic aragonite precipitation law, under the assumption of constant benthic community character. Simulations indicate that carbonate production will decrease linearly to a level of 118 m3 yr−1 by 2100 and that by 2150 aragonite saturation levels may no longer support the positive budgetary status necessary to sustain island accretion. Novel aspects of this assessment include the development of rate law parameters to realistically represent the variable composition of coral reef benthic carbonate producers, incorporation of three dimensional rugosity of the entire reef platform and the coupling of model outputs with both historical radiocarbon dating evidence and forward hydrochemical projections to conduct an assessment of island evolution through time. By combining several lines of evidence in a deterministic manner, an assessment of changes in carbonate production is carried out that has tangible geomorphic implications for sediment availability and associated island evolution. PMID:24759700
Hamylton, Sarah
2014-01-01
A geomorphic assessment of reef system calcification is conducted for past (3200 Ka to present), present and future (2010-2100) time periods. Reef platform sediment production is estimated at 569 m3 yr-1 using rate laws that express gross community carbonate production as a function of seawater aragonite saturation, community composition and rugosity and incorporating estimates of carbonate removal from the reef system. Key carbonate producers including hard coral, crustose coralline algae and Halimeda are mapped accurately (mean R2 = 0.81). Community net production estimates correspond closely to independent census-based estimates made in-situ (R2 = 0.86). Reef-scale outputs are compared with historic rates of production generated from (i) radiocarbon evidence of island deposition initiation around 3200 years ago, and (ii) island volume calculated from a high resolution island digital elevation model. Contemporary carbonate production rates appear to be remarkably similar to historical values of 573 m3 yr-1. Anticipated future seawater chemistry parameters associated with an RCP8.5 emissions scenario are employed to model rates of net community calcification for the period 2000-2100 on the basis of an inorganic aragonite precipitation law, under the assumption of constant benthic community character. Simulations indicate that carbonate production will decrease linearly to a level of 118 m3 yr-1 by 2100 and that by 2150 aragonite saturation levels may no longer support the positive budgetary status necessary to sustain island accretion. Novel aspects of this assessment include the development of rate law parameters to realistically represent the variable composition of coral reef benthic carbonate producers, incorporation of three dimensional rugosity of the entire reef platform and the coupling of model outputs with both historical radiocarbon dating evidence and forward hydrochemical projections to conduct an assessment of island evolution through time. By combining several lines of evidence in a deterministic manner, an assessment of changes in carbonate production is carried out that has tangible geomorphic implications for sediment availability and associated island evolution.
Pluto's Lower Atmosphere from Stellar Occultations
NASA Astrophysics Data System (ADS)
Young, Leslie; Buie, M. W.; Olkin, C. B.; Young, E. F.; French, R. G.; Howell, R. R.
2008-09-01
Ever since the Pluto occultation of 1988, the nature of Pluto's lower atmosphere has been a mystery: the lightcurve shows a difference between the upper and lower atmosphere, but it has been unclear whether this is due to hazes, a steep thermal gradient, or a combination of the two (Elliot & Young, 1992 AJ 103, 991; Hubbard et al. 1990, Icarus, 84, 1) Recent high-quality lightcurves allow us to place limits on the haze in Pluto's atmosphere. Especially important is the dual-wavelength (0.5 and 0.8 micron) occultation observed from Mount John Observatory in New Zealand on 2007 July 31. This site was 60 ± 4 km from the central track of the shadow, and the lightcurves clearly show a central flash, or a brightening due to strong lateral refocusing and the convergence of multiple images around the limb of an elliptical atmosphere. These lightcurves constrain the structure of the lower atmosphere in three ways. First, the surface-grazing ray must have a large enough bending angle to reach the center of the shadow. Second, haze of sufficient optical depth to affect the main drop in the lightcurve will also decrease the height of the central flash. The height and location of the central flash can be well modeled with a clear atmosphere. Third, hazes of the size expected at Pluto will have a wavelength-dependent absorption, but the red and blue channels of the Mount John lightcurves show no variation with wavelength. We will discuss limits on the hazes, and place these limits in the context of Triton hazes, heating by dust, and New Horizons detection limits.
Effect of Aronia melanocarpa fruit juice on amiodarone-induced pneumotoxicity in rats
Valcheva-Kuzmanova, Stefka; Stavreva, Galya; Dancheva, Violeta; Terziev, Ljudmil; Atanasova, Milena; Stoyanova, Angelina; Dimitrova, Anelia; Shopova, Veneta
2014-01-01
Background: The fruits of Aronia melanocarpa (Michx.) Elliot is extremely rich in biologically active polyphenols. Objective: We studied the protective effect of A. melanocarpa fruit juice (AMFJ) in a model of amiodarone (AD)-induced pneumotoxicity in rats. Materials and Methods: AD was instilled intratracheally on days 0 and 2 (6.25 mg/kg). AMFJ (5 mL/kg and 10 mL/kg) was given orally from day 1 to days 2, 4, 9, and 10 to rats, which were sacrificed respectively on days 3, 5, 10, and 28 when biochemical, cytological, and immunological assays were performed. Results: AMFJ antagonized AD-induced increase of the lung weight coefficient. In bronchoalveolar lavage fluid, AD increased significantly the protein content, total cell count, polymorphonuclear cells, lymphocytes and the activity of lactate dehydrogenase, acid phosphatase and alkaline phosphatase on days 3 and 5. In AMFJ-treated rats these indices of direct toxic damage did not differ significantly from the control values. In lung tissue, AD induced oxidative stress measured by malondialdehyde content and fibrosis assessed by the hydroxyproline level. AMFJ prevented these effects of AD. In rat serum, AD caused a significant elevation of interleukin IL-6 on days 3 and 5, and a decrease of IL-10 on day 3. In AMFJ-treated rats, these indices of inflammation had values that did not differ significantly from the control ones. Conclusion: AMFJ could have a protective effect against AD-induced pulmonary toxicity as evidenced by the reduced signs of AD-induced direct toxic damage, oxidative stress, inflammation, and fibrosis. PMID:24914278
Micropropagation of chokeberry by in vitro axillary shoot proliferation.
Litwińczuk, Wojciech
2013-01-01
The black chokeberry-aronia (Aronia melanocarpa Elliot) is a shrub native to North America although nowadays well known in Eastern Europe. The fruits are regarded as the richest source of antioxidant phytonutrients among fruit crops and vegetables. Chokeberries can be easily propagated by seeds but this method is not recommended. Micropropagation is far more efficient than other conventional cloning methods like layering or softwood cuttings. Aronia clones are propagated in vitro through four- or three-stage method based on subculturing of shoot explants. The double diluted MS or full strength MS medium with elevated 50% Ca(2+) and Mg(2+) content are used in the initiation and proliferation chokeberry in vitro cultures, respectively. They are supplemented with 0.5-1.0 mg LBA, and 0.05 mg LIBA. The double-phase medium is recommended in the last passage before shoot rooting. The regenerated shoots could be rooted both in vitro on double diluted MS with 0.05 mg L(-1) IBA or in vivo in peat and perlite substrate and subsequently grown in the greenhouse.
View of Commemorative plaque left on moon at Hadley-Apennine landing site
1971-08-01
AS15-88-11894 (31 July-2 Aug. 1971) --- A close-up view of a commemorative plaque left on the moon at the Hadley-Apennine landing site in memory of 14 NASA astronauts and USSR cosmonauts, now deceased. Their names are inscribed in alphabetical order on the plaque. The plaque was stuck in the lunar soil by astronauts David R. Scott, commander, and James B. Irwin, lunar module pilot, during their Apollo 15 lunar surface extravehicular activity (EVA). The names on the plaque are Charles A. Bassett II, Pavel I. Belyayev, Roger B. Chaffee, Georgi Dobrovolsky, Theodore C. Freeman, Yuri A. Gagarin, Edward G. Givens Jr., Virgil I. Grissom, Vladimir Komarov, Viktor Patsayev, Elliot M. See Jr., Vladislav Volkov, Edward H. White II, and Clifton C. Williams Jr. The tiny, man-like object represents the figure of a fallen astronaut/cosmonaut. While astronauts Scott and Irwin descended in the Lunar Module (LM) "Falcon" to explore the Hadley-Apennine area of the moon, astronaut Alfred M. Worden, command module pilot, remained with the Command and Service Modules (CSM) in lunar orbit.
[Do mastery goals buffer self-esteem from the threat of failure?].
Niiya, Yu; Crocker, Jennifer
2007-12-01
Self-esteem is vulnerable when failure occurs in the domain where people base their self-worth (Crocker & Wolfe, 2001). We tested whether learning orientations can reduce the vulnerability of self-esteem associated with contingent self-worth and encourage persistence following failure. Our past research (Niiya, Crocker, & Bartmess, 2004) indicated that people who base their self-worth on academics maintain their self-esteem following failure when they are primed with an incremental theory of intelligence. Our present study extends these findings by (a) examining whether mastery goals (Elliot & Church, 1997) can also buffer self-esteem from failure, (b) using a different manipulation of success and failure, (c) using a different task, and (d) including a measure of persistence. We found that college students who based their self-esteem on academic competence reported lower self-esteem following failure than following success when they had low mastery goals, but the effect of success and failure was eliminated when students had high mastery goals. Moreover, high mastery students showed greater persistence following failure than low mastery students. The study provided converging evidence that learning orientations buffer self-esteem from failure.
The role of proximal social contexts: Assessing stigma-by-association effects on leader appraisals.
Hernandez, Morela; Avery, Derek R; Tonidandel, Scott; Hebl, Mikki R; Smith, Alexis N; McKay, Patrick F
2016-01-01
Prior research suggests that segregation in the U.S. workplace is on the rise (Hellerstein, Neumark, & McInerney, 2008); as such, leaders are more likely to lead groups of followers composed primarily of their own race (Elliot & Smith, 2001; Smith & Elliott, 2002). Drawing from theory on stigma-by-association, the authors posit that such segregated proximal social contexts (i.e., the leader's group of followers) can have detrimental effects on leader appraisals. Specifically, they argue that leaders of mostly Black follower groups experience stigmatization based on race stereotypic beliefs, which affects how they are viewed in the eyes of observers. The results of a large field study show performance evaluations generally tend to be lower when the proportion of Black followers is higher. Moreover, 3 experiments demonstrate that the impact of proximal social contexts extends to other outcomes (i.e., perceptions of market value and competency) but appears limited to those who are less internally and externally motivated to control their prejudice. Taken together, these findings explain how workplace segregation systematically can create a particular disadvantage for Black leaders. (c) 2016 APA, all rights reserved).
Saturn Ring-Plane Crossing, may 1995
NASA Astrophysics Data System (ADS)
Bosh, Amanda
1995-07-01
In 1995-1996, the Earth and the Sun will pass through Saturn's ring plane. The Earth will pass through 3 times (22 May 1995, 10 August 1995, 11 Feb 1996), and the Sun will pass through once (19 November 1995). All but the 11 Feb 1996 event will be visible from HST. During the crossings of the Earth through Saturn's ring plane, the rings will become very thin and dark. By monitoring the brightness of the rings as they become very thin, we will be able to determine the time of ring-plane crossing and the residual brightness of the rings at this time. The time of the ring- plane crossing will place additional constraints on the precession rate of Saturn's pole. The recent occultations by Saturn's rings have produced a measurement of this value, but it is not known very well (French et al., 1993; Bosh, 1994; Elliot et al., 1993). A measure of the brightness of the rings in their edge-on configuration, combined with photometric properties of the rings derived from early calibration observations will allow us to determine the thickness of the rings.
Knoll, Fabien; Bordy, Emese M.; de Kock, Michiel O.; Redelstorff, Ragna
2017-01-01
Fragmentary caudal ends of the left and right mandible assigned to Lesothosaurus diagnosticus, an early ornithischian, was recently discovered in the continental red bed succession of the upper Elliot Formation (Lower Jurassic) at Likhoele Mountain (Mafeteng District) in Lesotho. Using micro-CT scanning, this mandible could be digitally reconstructed in 3D. The replacement teeth within the better preserved (left) dentary were visualised. The computed tomography dataset suggests asynchronous tooth replacement in an individual identified as an adult on the basis of bone histology. Clear evidence for systematic wear facets created by attrition is lacking. The two most heavily worn teeth are only apically truncated. Our observations of this specimen as well as others do not support the high level of dental wear expected from the semi-arid palaeoenvironment in which Lesothosaurus diagnosticus lived. Accordingly, a facultative omnivorous lifestyle, where seasonality determined the availability, quality, and abundance of food is suggested. This would have allowed for adaptability to episodes of increased environmental stress. PMID:28265518
Pluto’s Atmosphere from the 23 June 2011 Stellar Occultation: Airborne and Ground Observations
NASA Astrophysics Data System (ADS)
Person, Michael J.; Bosh, A. S.; Levine, S. E.; Gulbis, A. A. S.; Zangari, A. M.; Zuluaga, C. A.; Dunham, E. W.; Pasachoff, J. M.; Babcock, B. A.; Pandey, S.; Armhein, D.; Sallum, S.; Tholen, D. J.; Collins, P.; Bida, T.; Taylor, B.; Wolf, J.; Meyer, A.; Pfueller, E.; Wiedermann, M.; Roesser, H.; Lucas, R.; Kakkala, M.; Ciotti, J.; Plunkett, S.; Hiraoka, N.; Best, W.; Pilger, E. L.; Miceli, M.; Springmann, A.; Hicks, M.; Thackeray, B.; Emery, J.; Rapoport, S.; Ritchie, I.
2012-10-01
The double stellar occultation by Pluto and Charon of 2011 June 23 was observed from numerous ground stations as well as the Stratospheric Observatory for Infrared Astronomy (SOFIA). This first airborne occultation observation since 1995 resulted in the best occultation chords recorded for the event, in three optical wavelength bands. The data obtained from SOFIA were combined with chords obtained from the ground at the IRTF (including a full spectral light curve), the USNO--Flagstaff Station, and Leeward Community College to give a detailed profile of Pluto’s atmosphere. The data show a return to the distinct upper and lower atmospheric regions with a knee, or kink in the light curves separating them as was observed in 1988 (Millis et al. 1993), rather than the smoothly transitioning bowl-shaped light curves of recent years (Elliot et al. 2007). We analyze the upper atmosphere by fitting a model to all of the light curves obtained, resulting in a half-light radius of 1288 ± 1 km. We analyze the lower atmosphere with two different methods to provide results under the separate assumptions of particulate haze and a strong thermal gradient. Results indicate that the lower atmosphere evolves on short seasonal timescales, changing between 1988 and 2006, and then returning to approximately the 1988 state in 2011, though at significantly higher pressures. Throughout these changes, the upper atmosphere remains remarkably stable in structure, again excepting the overall pressure changes. No evidence of the onset of atmospheric collapse predicted by frost migration models is yet seen, and the atmosphere appears to be remaining at a stable pressure level. This work was supported in part by NASA Planetary Astronomy grants to MIT (NNX10AB27G) and Williams College (NNX08AO50G, NNH11ZDA001N), as well as grants from USRA (#8500-98-003) and Ames Research (#NAS2-97-01) to Lowell Observatory.
The 2003 November 14 occultation by Titan of TYC 1343-1865-1. II. Analysis of light curves
NASA Astrophysics Data System (ADS)
Zalucha, A.; Fitzsimmons, A.; Elliot, J. L.; Thomas-Osip, J.; Hammel, H. B.; Dhillon, V. S.; Marsh, T. R.; Taylor, F. W.; Irwin, P. G. J.
2007-12-01
We observed a stellar occultation by Titan on 2003 November 14 from La Palma Observatory using ULTRACAM with three Sloan filters: u, g, and i (358, 487, and 758 nm, respectively). The occultation probed latitudes 2° S and 1° N during immersion and emersion, respectively. A prominent central flash was present in only the i filter, indicating wavelength-dependent atmospheric extinction. We inverted the light curves to obtain six lower-limit temperature profiles between 335 and 485 km (0.04 and 0.003 mb) altitude. The i profiles agreed with the temperature measured by the Huygens Atmospheric Structure Instrument [Fulchignoni, M., and 43 colleagues, 2005. Nature 438, 785-791] above 415 km (0.01 mb). The profiles obtained from different wavelength filters systematically diverge as altitude decreases, which implies significant extinction in the light curves. Applying an extinction model [Elliot, J.L., Young, L.A., 1992. Astron. J. 103, 991-1015] gave the altitudes of line of sight optical depth equal to unity: 396±7 and 401±20 km ( u immersion and emersion); 354±7 and 387±7 km ( g immersion and emersion); and 336±5 and 318±4 km ( i immersion and emersion). Further analysis showed that the optical depth follows a power law in wavelength with index 1.3±0.2. We present a new method for determining temperature from scintillation spikes in the occulting body's atmosphere. Temperatures derived with this method are equal to or warmer than those measured by the Huygens Atmospheric Structure Instrument. Using the highly structured, three-peaked central flash, we confirmed the shape of Titan's middle atmosphere using a model originally derived for a previous Titan occultation [Hubbard, W.B., and 45 colleagues, 1993. Astron. Astrophys. 269, 541-563].
NASA Astrophysics Data System (ADS)
Zalucha, A.; Elliot, J. L.; Fitzsimmons, A.; Dhillon, V.; Marsh, T.; Hammel, H. B.; Irwin, P.; Thomas-Osip, J.; Taylor, F.
2005-08-01
A stellar occultation by Titan on 2003 Nov. 14 was observed from La Palma Observatory (Fitzsimmons et al., RAS Time Domain Astrophysics, 2004) using ULTRACAM with three Sloan filters: u', g', and i' (350, 480, and 770 nm, respectively; Dhillon and Marsh, NewAR, 25, 91, 2001). The latitudes probed during immersion and emersion were 1.1S and 1.8N, respectively. A central flash was seen in only the i' filter, indicating wavelength-dependent atmospheric extinction. The light curves were inverted to obtain six lower-limit temperature profiles between 360 and 500 km (30 and 2 microbar) altitude. The i' profiles agreed with the model of Yelle (ApJ, 383, 380, 1991) above 415 km (10 microbar). The temperature profiles are expected to be independent of wavelength; instead, it is found that the profiles obtained at different wavelengths diverged as altitude decreases, which implies significant extinction in the light curves. The onset of extinction occurred between 550 and 600 km (0.9 and 0.4 microbar) altitude with optical depth increasing below this height. This is ˜ 50-100 km higher than the detached haze layer seen by Cassini in 2004 (Porco et al., Nature, 434, 159, 2005). No discrete haze layers have yet been resolved in our data. Applying the model used by Elliot and Young (AJ, 103, 991,1992) gives the altitudes of optical depth equal to unity: 382 ± 5 km and 436 ± 5 km (u' immersion and emersion); 406 ± 4 km and 403 ± 10 km (g' immersion and emersion); and 345 ± 5 km and 326 ± 3 km (i' immersion and emersion). Another method shows that the optical depth behaved as a power law in wavelength, with exponent approximately -3. We gratefully acknowledge support from NSF grant AST-0073447 and NASA grant NNG04GF25G
First Official Pluto Feature Names
2017-09-06
The International Astronomical Union (IAU), the internationally recognized authority for naming celestial bodies and their surface features, approved names of 14 surface features on Pluto in August 2017. The names were proposed by NASA's New Horizons team following the first reconnaissance of Pluto and its moons by the New Horizons spacecraft in 2015. The names, listed below, pay homage to the underworld mythology, pioneering space missions, historic pioneers who crossed new horizons in exploration, and scientists and engineers associated with Pluto and the Kuiper Belt. Tombaugh Regio honors Clyde Tombaugh (1906-1997), the U.S. astronomer who discovered Pluto in 1930 from Lowell Observatory in Arizona. Burney crater honors Venetia Burney (1918-2009), who as an 11-year-old schoolgirl suggested the name "Pluto" for Clyde Tombaugh's newly discovered planet. Later in life she taught mathematics and economics. Sputnik Planitia is a large plain named for Sputnik 1, the first space satellite, launched by the Soviet Union in 1957. Tenzing Montes and Hillary Montes are mountain ranges honoring Tenzing Norgay (1914-1986) and Sir Edmund Hillary (1919-2008), the Indian/Nepali Sherpa and New Zealand mountaineer were the first to reach the summit of Mount Everest and return safely. Al-Idrisi Montes honors Ash-Sharif al-Idrisi (1100-1165/66), a noted Arab mapmaker and geographer whose landmark work of medieval geography is sometimes translated as "The Pleasure of Him Who Longs to Cross the Horizons.†Djanggawul Fossae defines a network of long, narrow depressions named for the Djanggawuls, three ancestral beings in indigenous Australian mythology who traveled between the island of the dead and Australia, creating the landscape and filling it with vegetation. Sleipnir Fossa is named for the powerful, eight-legged horse of Norse mythology that carried the god Odin into the underworld. Virgil Fossae honors Virgil, one of the greatest Roman poets and Dante's fictional guide through hell and purgatory in the Divine Comedy. Adlivun Cavus is a deep depression named for Adlivun, the underworld in Inuit mythology. Hayabusa Terra is a large land mass saluting the Japanese spacecraft and mission (2003-2010) that performed the first asteroid sample return. Voyager Terra honors the pair of NASA spacecraft, launched in 1977, that performed the first "grand tour" of all four giant planets. The Voyager spacecraft are now probing the boundary between the Sun and interstellar space. Tartarus Dorsa is a ridge named for Tartarus, the deepest, darkest pit of the underworld in Greek mythology. Elliot crater recognizes James Elliot (1943-2011), an MIT researcher who pioneered the use of stellar occultations to study the solar system -- leading to discoveries such as the rings of Uranus and the first detection of Pluto's thin atmosphere. https://photojournal.jpl.nasa.gov/catalog/PIA21944
Pluto's Atmosphere, Then and Now
NASA Astrophysics Data System (ADS)
Elliot, J. L.; Buie, M.; Person, M. J.; Qu, S.
2002-09-01
The KAO light curve for the 1988 stellar occultation by Pluto exhibits a sharp drop just below half light, but above this level the light curve is consistent with that of an isothermal atmosphere (T = 105 +/- 8 K, with N2 as its major constituent). The sharp drop in the light curve has been interpreted as being caused by: (i) a haze layer, (ii) a large thermal gradient, or (iii) some combination of these two. Modeling Pluto's atmosphere with a haze layer yields a normal optical depth >= 0.145 (Elliot & Young 1992, AJ 103, 991). On the other hand, if Pluto's atmosphere is assumed to be clear, the occultation light curve can be inverted with a new method that avoids the large-body approximations. Inversion of the KAO light curve with this method yields an upper isothermal part, followed by a sharp thermal gradient that reaches a maximum magnitude of -3.9 +/- 0.6 K km-1 at the end of the inversion (r = 1206 +/- 10 km). Even though we do not yet understand the cause of the sharp drop, the KAO light curve can be used as a benchmark for examining subsequent Pluto occultation light curves to determine whether Pluto's atmospheric structure has changed since 1988. As an example, the Mamiña light curve for the 2002 July 20 Pluto occultation of P126A was compared with the KAO light curve by Buie et al. (this conference), who concluded that Pluto's atmospheric structure has changed significantly since 1988. Further analysis and additional light curves from this and subsequent occultations (e.g. 2002 August 21) will allow us to elucidate the nature of these changes. This work was supported, in part, by grants from NASA (NAG5-9008 and NAG5-10444) and NSF (AST-0073447).
The color red attracts attention in an emotional context. An ERP study.
Kuniecki, Michał; Pilarczyk, Joanna; Wichary, Szymon
2015-01-01
The color red is known to influence psychological functioning, having both negative (e.g., blood, fire, danger), and positive (e.g., sex, food) connotations. The aim of our study was to assess the attentional capture by red-colored images, and to explore the modulatory role of the emotional valence in this process, as postulated by Elliot and Maier (2012) color-in-context theory. Participants completed a dot-probe task with each cue comprising two images of equal valence and arousal, one containing a prominent red object and the other an object of different coloration. Reaction times were measured, as well as the event-related lateralizations of the EEG. Modulation of the lateralized components revealed that the color red captured and later held the attention in both positive and negative conditions, but not in a neutral condition. An overt motor response to the target stimulus was affected mainly by attention lingering over the visual field where the red cue had been flashed. However, a weak influence of the valence could still be detected in reaction times. Therefore, red seems to guide attention, specifically in emotionally-valenced circumstances, indicating that an emotional context can alter color's impact both on attention and motor behavior.
Sedimentology of the upper Karoo fluvial strata in the Tuli Basin, South Africa
NASA Astrophysics Data System (ADS)
Bordy, Emese M.; Catuneanu, Octavian
2001-08-01
The sedimentary rocks of the Karoo Supergroup in the Tuli Basin (South Africa) may be grouped in four stratigraphic units: the basal, middle and upper units, and the Clarens Formation. This paper presents the findings of the sedimentological investigation of the fluvial terrigenous clastic and chemical deposits of the upper unit. Evidence provided by primary sedimentary structures, palaeontological record, borehole data, palaeo-flow measurements and stratigraphic relations resulted in the palaeo-environmental reconstruction of the upper unit. The dominant facies assemblages are represented by sandstones and finer-grained sediments, which both can be interbedded with subordinate intraformational coarser facies. The facies assemblages of the upper unit are interpreted as deposits of a low-sinuosity, ephemeral stream system with calcretes and silcretes in the dinosaur-inhabited overbank area. During the deposition of the upper unit, the climate was semi-arid with sparse precipitation resulting in high-magnitude, low-frequency devastating flash floods. The current indicators of the palaeo-drainage system suggest flow direction from northwest to southeast, in a dominantly extensional tectonic setting. Based on sedimentologic and biostratigraphic evidence, the upper unit of the Tuli Basin correlates to the Elliot Formation in the main Karoo Basin to the south.
Eslami, Ahmad Ali; Amidi Mazaheri, Maryam; Mostafavi, Firoozeh; Abbasi, Mohamad Hadi; Noroozi, Ensieh
2014-01-01
Assessment of social skills is a necessary requirement to develop and evaluate the effectiveness of cognitive and behavioral interventions. This paper reports the cultural adaptation and psychometric properties of the Farsi version of the social skills rating system-secondary students form (SSRS-SS) questionnaire (Gresham and Elliot, 1990), in a normative sample of secondary school students. A two-phase design was used that phase 1 consisted of the linguistic adaptation and in phase 2, using cross-sectional sample survey data, the construct validity and reliability of the Farsi version of the SSRS-SS were examined in a sample of 724 adolescents aged from 13 to 19 years. Content validity index was excellent, and the floor/ceiling effects were low. After deleting five of the original SSRS-SS items, the findings gave support for the item convergent and divergent validity. Factor analysis revealed four subscales. RESULTS showed good internal consistency (0.89) and temporal stability (0.91) for the total scale score. Findings demonstrated support for the use of the 27-item Farsi version in the school setting. Directions for future research regarding the applicability of the scale in other settings and populations of adolescents are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parma, Edward J.; Naranjo, Gerald E.; Kaiser, Krista Irene
This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the cadmium-polyethylene (CdPoly) bucket in the central cavity on the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-CdPoly-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the bucket. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulsemore » operations are presented with conversion examples. Acknowledgements The authors wish to thank the Annular Core Research Reactor staff and the Radiation Metrology Laboratory staff for their support of this work. Also thanks to Drew Tonigan for helping field the activation experiments in ACRR, David Samuel for helping to finalize the drawings and get the parts fabricated, and Elliot Pelfrey for preparing the active dosimetry plots.« less
A human language corpus for interstellar message construction
NASA Astrophysics Data System (ADS)
Elliott, John
2011-02-01
The aim of HuLCC (the human language chorus corpus), is to provide a resource of sufficient size to facilitate inter-language analysis by incorporating languages from all the major language families: for the first time all aspects of typology will be incorporated within a single corpus, adhering to a consistent grammatical classification and granularity, which historically adopt a plethora of disparate schemes. An added feature will be the inclusion of a common text element, which will be translated across all languages, to provide a precise comparable thread for detailed linguistic analysis for translation strategies and a mechanism by which these mappings can be explicitly achieved. Methods developed to solve unambiguous mappings across these languages can then be adopted for any subsequent message authored by the SETI community. Initially, it is planned to provide at least 20,000 words for each chosen language, as this amount of text exceeds the point where randomly generated text can be disambiguated from natural language and is of sufficient size useful for message transmission [1] (Elliot, 2002). This paper details the design of this resource, which ultimately will be made available to SETI upon its completion, and discusses issues 'core' to any message construction.
Coupled Model for CO2 Leaks from Geological Storage: Geomechanics, Fluid Flow and Phase Transitions
NASA Astrophysics Data System (ADS)
Gor, G.; Prevost, J.
2013-12-01
Deep saline aquifers are considered as a promising option for long-term storage of carbon dioxide. However, risk of CO2 leakage from the aquifers through faults, natural or induced fractures or abandoned wells cannot be disregarded. Therefore, modeling of various leakage scenarios is crucial when selecting a site for CO2 sequestration and choosing proper operational conditions. Carbon dioxide is injected into wells at supercritical conditions (t > 31.04 C, P > 73.82 bar), and these conditions are maintained in the deep aquifers (at 1-2 km depth) due to hydrostatic pressure and geothermal gradient. However, if CO2 and brine start to migrate from the aquifer upward, both pressure and temperature will decrease, and at the depth of 500-750 m, the conditions for CO2 will become subcritical. At subcritical conditions, CO2 starts boiling and the character of the flow changes dramatically due to appearance of the third (vapor) phase and latent heat effects. When modeling CO2 leaks, one needs to couple the multiphase flow in porous media with geomechanics. These capabilities are provided by Dynaflow, a finite element analysis program [1]; Dynaflow has already showed to be efficient for modeling caprock failure causing CO2 leaks [2, 3]. Currently we have extended the capabilities of Dynaflow with the phase transition module, based on two-phase and three-phase isenthalpic flash calculations [4]. We have also developed and implemented an efficient method for solving heat and mass transport with the phase transition using our flash module. Therefore, we have developed a robust tool for modeling CO2 leaks. In the talk we will give a brief overview of our method and illustrate it with the results of simulations for characteristic test cases. References: [1] J.H. Prevost, DYNAFLOW: A Nonlinear Transient Finite Element Analysis Program. Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ. http://www.princeton.edu/~dynaflow/ (last update 2013), 1981. [2] M. Preisig, J.H. Prevost, Coupled multi-phase thermo-poromechanical effects. Case study: CO2 injection at In Salah, Algeria, International Journal of Greenhouse Gas Control, 5 (2011) 1055-1064. [3] G.Y. Gor, T.R. Elliot, J.H. Prevost, Effects of thermal stresses on caprock integrity during CO2 storage, International Journal of Greenhouse Gas Control, 12 (2013) 300-309. [4] M.L. Michelsen, J.M. Mollerup, Thermodynamic Models: Fundamentals and Computational Aspects. 2nd Edition, Tie-Line Publications, 2007.
Evolution and the neurosciences down-under.
Macmillan, Malcolm
2009-01-01
At the end of the nineteenth and the beginning of the twentieth century three Australians made notable contributions to founding the neurosciences: Alfred Walter Campbell (1868-1937) conducted the first extensive histological studies of the human brain; Grafton Elliot Smith (1871-1937) studied the monotreme brain and established the basis for understanding the mammalian brain; and Stanley David Porteus (1883-1972) extended his studies of intellectual disability to encompass the relation between brain size and intelligence. The work of each was decisively influenced by important members of the Edinburgh medical school or by Edinburgh medical graduates: William Turner (1832-1916) and William Rutherford (1839-1899) Professors of Anatomy and Physiology respectively at Edinburgh; James Thomas Wilson (1861-1945) Professor of Anatomy at the University of Sydney; and Richard James Arthur Berry (1867-1962) Professor of Anatomy at the University of Melbourne. An important aspect of the influence on the Australians was a materialist view of brain function but the work of all was most important for a theory even more central held by the Scots who had influenced them: Darwin's theory of evolution. The importance of the work of Campbell and especially that of Smith for Darwinism is contrasted with Darwin's own indifference to the peculiarities of the Australian fauna he observed when he visited Australia during HMS Beagle's voyage of discovery in 1836.
Adie, James W; Duda, Joan L; Ntoumanis, Nikos
2010-08-01
Grounded in the 2 x 2 achievement goal framework (Elliot & McGregor, 2001), the purpose of this study was to investigate the temporal relationships between achievement goals, competition appraisals and indices of psychological and emotional welfare among elite adolescent soccer players. A subsidiary aim was to ascertain the mediational role of competition appraisals in explaining the potential achievement goal and well-/ill-being relationships. Ninety-one boys (mean age = 13.82 years) involved in an elite soccer program completed multisection questionnaires capturing the targeted variables. Measures were obtained on five occasions across two competitive seasons. Multilevel regression analyses revealed that MAp goals positively, and MAv goals negatively, predicted within-person changes in well-being over two seasons. PAp goal adoption was positively associated to within-person changes in negative affect. PAv goals corresponded negatively to between-person mean differences in positive affect. The results of the indirect effects showed challenge appraisals accounted for within-person associations between a MAp goal focus and well- and ill-being over time. The present findings provide only partial support for the utility of the 2 x 2 achievement goal framework in predicting young athletes' psychological and emotional functioning in an elite youth sport setting.
W.E. Henry Symposium compendium: The importance of magnetism in physics and material science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carwell, H.
This compendium contains papers presented at the W. E. Henry Symposium, The Importance of Magnetism in Physics and Material Science. The one-day symposium was conducted to recognize the achievements of Dr. Warren Elliot Henry as educator, scientist, and inventor in a career spanning almost 70 years. Dr. Henry, who is 88 years old, attended the symposium. Nobel Laureate, Dr. Glenn Seaborg, a friend and colleague for over 40 years, attended the event and shared his personal reminiscences. Dr. Seaborg is Associate Director-At-Large at the Lawrence Berkeley National Laboratory. The Compendium begins with three papers which demonstrate the ongoing importance ofmore » magnetism in physics and material science. Other contributions cover the highlights of Dr. Henry`s career as a researcher, educator, and inventor. Colleagues and former students share insights on the impact of Dr. Henry`s research in the field of magnetism, low temperature physics, and solid state physics; his influence on students as an educator; and his character, intellect and ingenuity, and passion for learning and teaching. They share a glimpse of the environment and times that molded him as a man, and the circumstances under which he made his great achievements despite the many challenges he faced.« less
The motivational stories of how women become scientists: A hermeneutic phenomenological inquiry
NASA Astrophysics Data System (ADS)
Watson, Sandra White
2002-01-01
The under-representation of women in science careers is well documented (Astin, Green, Korn, & Riggs, 1991; Felder, Felder, Mauny, Hamrin, & Dietz, 1995; Green, 1989; National Science Foundation, 1996, 1998; Seymour & Hewitt, 1997; Strenta, Elliot, Adair, Scott, & Matier, 1994; Tobias, 1990, 1992). While important information has been published concerning various factors that influenced women to pursue science careers (American Association of University Women, 1992; Debacker & Nelson, 2000; Samuels, 1999), very few research projects have allowed women scientists to share their personal experiences of what motivated them to become scientists in their own voices. The purpose of this inquiry was to investigate the elicited stories of seven women research scientists so that their retrospective motivational experiences with science as girls and young women inside and outside the formal school setting might be better understood. This inquiry examined specific motivational factors and experiences that encouraged or discouraged these women to pursue careers in science. These factors included the motivational influences of gender perceptions, science experiences, and social interactions. From the collective experiences offered, emergent themes were identified and interpreted. These motivational themes were compared with motivational findings in the literature review. Educational implications of the identified themes for these and other women considering careers in science, women's parents, science educators and society, are discussed.
The color red attracts attention in an emotional context. An ERP study
Kuniecki, Michał; Pilarczyk, Joanna; Wichary, Szymon
2015-01-01
The color red is known to influence psychological functioning, having both negative (e.g., blood, fire, danger), and positive (e.g., sex, food) connotations. The aim of our study was to assess the attentional capture by red-colored images, and to explore the modulatory role of the emotional valence in this process, as postulated by Elliot and Maier (2012) color-in-context theory. Participants completed a dot-probe task with each cue comprising two images of equal valence and arousal, one containing a prominent red object and the other an object of different coloration. Reaction times were measured, as well as the event-related lateralizations of the EEG. Modulation of the lateralized components revealed that the color red captured and later held the attention in both positive and negative conditions, but not in a neutral condition. An overt motor response to the target stimulus was affected mainly by attention lingering over the visual field where the red cue had been flashed. However, a weak influence of the valence could still be detected in reaction times. Therefore, red seems to guide attention, specifically in emotionally-valenced circumstances, indicating that an emotional context can alter color’s impact both on attention and motor behavior. PMID:25972797
Value-Based Standards Guide Sexism Inferences for Self and Others.
Mitamura, Chelsea; Erickson, Lynnsey; Devine, Patricia G
2017-09-01
People often disagree about what constitutes sexism, and these disagreements can be both socially and legally consequential. It is unclear, however, why or how people come to different conclusions about whether something or someone is sexist. Previous research on judgments about sexism has focused on the perceiver's gender and attitudes, but neither of these variables identifies comparative standards that people use to determine whether any given behavior (or person) is sexist. Extending Devine and colleagues' values framework (Devine, Monteith, Zuwerink, & Elliot, 1991; Plant & Devine, 1998), we argue that, when evaluating others' behavior, perceivers rely on the morally-prescriptive values that guide their own behavior toward women. In a series of 3 studies we demonstrate that (1) people's personal standards for sexism in their own and others' behavior are each related to their values regarding sexism, (2) these values predict how much behavioral evidence people need to infer sexism, and (3) people with stringent, but not lenient, value-based standards get angry and try to regulate a sexist perpetrator's behavior to reduce sexism. Furthermore, these personal values are related to all outcomes in the present work above and beyond other person characteristics previously used to predict sexism inferences. We discuss the implications of differing value-based standards for explaining and reconciling disputes over what constitutes sexist behavior.
Rees, Amanda
2016-09-01
This paper explores how three central figures in the field of British prehistory - Sir Arthur Keith, Sir Grafton Elliot Smith and Louis Leakey - deployed different disciplinary practices and narrative devices in the popular accounts of human bio-cultural evolution that they produced during the early decades of the twentieth century. It shows how they used a variety of strategies, ranging from virtual witness through personal testimony to tactile demonstration, to ground their authority to interpret the increasingly wide range of fossil material available and to answer the bewildering variety of questions that could be asked about them. It investigates the way in which they positioned their own professional expertise in relation to fossil interpretation, particularly with regard to the - sometimes controversial - use they made of concepts, evidence and practices drawn from other disciplines. In doing so, they made claims that went beyond their original disciplinary boundaries. The paper argues that while none of these writers were able, ultimately, to support the wider claims they made regarding human prehistory, the nature of these claims deserves much closer attention, particularly with respect to the public role that historians of science can and should play in relation to present-day calls for greater interdisciplinarity.
Sikora, Joanna; Broncel, Marlena; Mikiciuk-Olasik, Elżbieta
2014-01-01
The aim of the study was to analyze the effects of two-month supplementation with chokeberry preparation on the activity of angiotensin I-converting enzyme (ACE) in patients with metabolic syndrome (MS). During the in vitro stage of the study, we determined the concentration of chokeberry extract, which inhibited the activity of ACE by 50% (IC50). The participants (n = 70) were divided into three groups: I-patients with MS who received chokeberry extract supplements, II-healthy controls, and III-patients with MS treated with ACE inhibitors. After one and two months of the experiment, a decrease in ACE activity corresponded to 25% and 30%, respectively. We documented significant positive correlations between the ACE activity and the systolic (r = 0.459, P = 0.048) and diastolic blood pressure, (r = 0.603, P = 0.005) and CRP. The IC50 of chokeberry extract and captopril amounted to 155.4 ± 12.1 μg/mL and 0.52 ± 0.18 μg/mL, respectively. Our in vitro study revealed that chokeberry extract is a relatively weak ACE inhibitor. However, the results of clinical observations suggest that the favorable hypotensive action of chokeberry polyphenols may be an outcome of both ACE inhibition and other pleotropic effects, for example, antioxidative effect.
Appel, Kurt; Meiser, Peter; Millán, Estrella; Collado, Juan Antonio; Rose, Thorsten; Gras, Claudia C; Carle, Reinhold; Muñoz, Eduardo
2015-09-01
Black chokeberry has been known to play a protective role in human health due to its high polyphenolic content including anthocyanins and caffeic acid derivatives. In the present study, we first characterized the polyphenolic content of a commercial chokeberry concentrate and investigated its effect on LPS-induced NF-κB activation and release of pro-inflammatory mediators in macrophages in the presence or the absence of sodium selenite. Examination of the phytochemical profile of the juice concentrate revealed high content of polyphenols (3.3%), including anthocyanins, proanthocyanidins, phenolic acids, and flavonoids. Among them, cyanidin-3-O-galactoside and caffeoylquinic acids were identified as the major compounds. Data indicated that chokeberry concentrate inhibited both the release of TNFα, IL-6 and IL-8 in human peripheral monocytes and the activation of the NF-κB pathway in RAW 264.7 macrophage cells. Furthermore, chokeberry synergizes with sodium selenite to inhibit NF-κB activation, cytokine release and PGE2 synthesis. These findings suggest that selenium added to chokeberry juice enhances significantly its anti-inflammatory activity, thus revealing a sound approach in order to tune the use of traditional herbals by combining them with micronutrients. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.
2015-04-01
Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/Multifractal methods have been increasingly applied to quantify soil structure thanks to the advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact to sampling, providing three-dimensional (3D) information and allowing rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and the better method to estimate the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chissel and plow). Soil samples for each of the tillage treatment were packed into polypropylene cylinders of 8 cm diameter and 10 cm high. These were imaged using an mSIMCT at 155keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening and later several corrections where applied during reconstruction. References Elliot, T.R. and Heck, R.J. 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412. Grau, J, Médez, V.; Tarquis, A.M., Saa, A. and Díaz, M.C.. 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Houston, A.N.; S. Schmidt, A.M. Tarquis, W. Otten, P.C. Baveye, S.M. Hapca. Effect of scanning and image reconstruction settings in X-ray computed tomography on soil image quality and segmentation performance. Geoderma, 207-208, 154-165, 2013a. Houston, A, Otten, W., Baveye, Ph., Hapca, S. Adaptive-Window Indicator Kriging: A Thresholding Method for Computed Tomography, Computers & Geosciences, 54, 239-248, 2013b. Tarquis, A.M., R.J. Heck, D. Andina, A. Alvarez and J.M. Antón. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239, 2009. Tarquis, A.M.; D. Giménez, A. Saa, M.C. Díaz. and J.M. Gascó. Scaling and Multiscaling of Soil Pore Systems Determined by Image Analysis. Scaling Methods in Soil Systems. Pachepsky, Radcliffe and Selim Eds., 19-33, 2003. CRC Press, Boca Ratón, Florida. Acknowledgements First author acknowledges the financial support obtained from Soil Imaging Laboratory (University of Gueplh, Canada) in 2014.
New Occultation Systems and the 2005 July 11 Charon Occultation
NASA Astrophysics Data System (ADS)
Young, L. A.; French, R. G.; Gregory, B.; Olkin, C. B.; Ruhland, C.; Shoemaker, K.; Young, E. F.
2005-08-01
Charon's density is an important input to models of its formation and internal structure. Estimates range from 1.59 to 1.83 g/cm3 (Olkin et al. 2003. Icarus 164, 254), with Charon's radius as the main source of uncertainty. Reported values of Charon's radius from mutual events range from 593±13 (Buie et al. 1992, Icarus 97, 211) to 621±21 km (Young & Binzel 1994, Icarus 108), while an occultation observed from a single site gives a lower limit on the radius of 601.5 km (Walker 1980 MNRAS 192, 47; Elliot & Young 1991, Icarus 89, 244). On 2005 July 11 UT (following this abstract submission date), Charon is predicted to occult the star C313.2. If successful, this event will be the first Charon occultation observed since 1980, and the first giving multiple chords across Charon's disk. This event is expected to measure Charon's radius to 1 km. Our team is observing from three telescopes in Chile, the 4.0-m Blanco and the 0.9-m telescopes at Cerro Tololo and the 4.2-m SOAR telescope at Cerro Pachon. At SOAR, we will be using the camera from our new PHOT systems (Portable High-speed Occultation Telescopes). The PHOT camera is a Princeton Instrument MicroMAX:512BFT from Roper Scientific, a 512×512 frame-transfer CCD with a readnoise of only 3 electrons at the 100 kHz digitization rate. The camera's exposures are triggered by a custom built, compact, stand-alone GPS-based pulse-train generator. A PHOT camera and pulse-train generator were used to observe the occultation of 2MASS 1275723153 by Pluto on 2005 June 15 UT from Sommers-Bausch Observatory in Boulder Colorado; preliminary analysis shows this was at best a grazing occultation from this site and a successful engineering run for the July 11 Charon occultation. The work was supported, in part, by NSF AST-0321338 (EFY) and NASA NNG-05GF05G (LAY).
Humphrey, Neil; Barlow, Alexandra; Wigelsworth, Michael; Lendrum, Ann; Pert, Kirsty; Joyce, Craig; Stephens, Emma; Wo, Lawrence; Squires, Garry; Woods, Kevin; Calam, Rachel; Turner, Alex
2016-10-01
This randomized controlled trial (RCT) evaluated the efficacy of the Promoting Alternative Thinking Strategies curriculum (PATHS; Kusche & Greenberg, 1994) as a means to improve children's social-emotional competence (assessed via the Social Skills Improvement System (SSIS); Gresham & Elliot, 2008) and mental health outcomes (assessed via the Strengths and Difficulties Questionnaire (SDQ); Goodman, 1997). Forty-five schools in Greater Manchester, England, were randomly assigned to treatment and control groups. Allocation was balanced by proportions of children eligible for free school meals and speaking English as an additional language via minimization. Children (N=4516) aged 7-9years at baseline in the participating schools were the target cohort. During the two-year trial period, teachers of this cohort in schools allocated to the intervention group delivered the PATHS curriculum, while their counterparts in the control group continued their usual provision. Teachers in PATHS schools received initial training and on-going support and assistance from trained coaches. Hierarchical linear modeling of outcome data was undertaken to identify both primary (e.g., for all children) and secondary (e.g., for children classified as "at-risk") intervention effects. A primary effect of the PATHS curriculum was found, demonstrating increases in teacher ratings of changes in children's social-emotional competence. Additionally, secondary effects of PATHS were identified, showing reductions in teacher ratings of emotional symptoms and increases in pro-social behavior and child ratings of engagement among children identified as at-risk at baseline. However, our analyses also identified primary effects favoring the usual provision group, showing reductions in teacher ratings of peer problems and emotional symptoms, and secondary effects demonstrating reductions in teacher ratings of conduct problems and child ratings of co-operation among at-risk children. Effect sizes were small in all cases. These mixed findings suggest that social and emotional learning interventions such as PATHS may not be as efficacious when implemented outside their country of origin and evaluated in independent trials. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Where and What Is Pristine Marine Aerosol?
NASA Astrophysics Data System (ADS)
Russell, L. M.; Frossard, A. A.; Long, M. S.; Burrows, S. M.; Elliott, S.; Bates, T. S.; Quinn, P.
2014-12-01
The sources and composition of atmospheric marine aerosol particles have been measured by functional group composition (from Fourier transform infrared spectroscopy) to identify the organic composition of the pristine primary marine (ocean-derived) particles as 65% hydroxyl, 21% alkane, 6% amine, and 7% carboxylic acid functional groups [Frossard et al., 2014a,b]. Pristine but non-primary components from photochemical reactions (likely from biogenic marine vapor emissions) add carboxylic acid groups. Non-pristine contributions include shipping effluent in seawater and ship emissions, which add additional alkane groups (up to 70%), and coastal or continental emissions mix in alkane and carboxylic acid groups. The pristine primary marine (ocean-derived) organic aerosol composition is nearly identical to model generated primary marine aerosol particles from bubbled seawater, indicating that its overall functional group composition is the direct consequence of the organic constituents of the seawater source. While the seawater organic functional group composition was nearly invariant across all three ocean regions studied and the ratio of organic carbon to sodium (OC/Na+) in the generated primary marine aerosol particles remained nearly constant over a broad range of chlorophyll-a concentrations, the generated primary marine aerosol particle alkane group fraction increased with chlorophyll-a concentrations. In addition, the generated primary marine aerosol particles have a hydroxyl group absorption peak location characteristic of monosaccharides and disaccharides, where the seawater hydroxyl group peak location is closer to that of polysaccharides. References Cited Frossard, Amanda A., Lynn M. Russell, Paola Massoli, Timothy S. Bates, and Patricia K. Quinn, "Side-by-Side Comparison of Four Techniques Explains the Apparent Differences in the Organic Composition of Generated and Ambient Marine Aerosol Particles," Aerosol Science and Technology - Aerosol Research Letter, 48:v-x, doi10.1080/02786826.2013.879979, 2014a. Frossard, A.A., L.M. Russell, M.S. Long, S.M. Burrows, S.M. Elliot, T.S. Bates, and P.K. Quinn, "Sources and Composition of Submicron Organic Mass in Marine Aerosol Particles," Journal of Geophysical Research - Atmospheres, submitted 2014b.
NASA Astrophysics Data System (ADS)
Sunaryo, Geni R.; Katsumura, Yosuke; Ishigure, Kenkichi
1995-05-01
The G-values of water decomposition products under the irradiations with γ-rays and fast neutrons up to 250°C have been determined in previous studies. In order to clarify the characteristics of the determined G-values, computer simulations under the simplified conditions in nuclear reactors have been carried out. The recent G-values for γ-radiolysis reported by Elliot, Chenier and Quellete [(1990) Can. J. Chem.68, 712; (1993) J. Chem. Soc. Faraday Trans.89, 1193], Kent and Sims [(1992) Water Chemistry of Nuclear Reactor Systems 6, p. 153. BNES, London], and Sunaryo, Katsumura, Shirai, Hiroishi and Ishigure [(1994) Radiat. Phys. Chem.44, 273] and Sunaryo, Katsumura, Hiroishi and Ishigure [(1995) Radiat. Phys. Chem.45, 131] are almost equivalent from the point of simulations. On the contrary, G-values for fast neutron radiolysis give a significant influence to the result, which arises from the higher molecular yields and smaller radical yields of water decomposition in fast neutron radiolysis, and it has been revealed that the dose evaluation in the reactor is inevitably important. In addition, it was pointed out by the simulations that reverse reactions for H 2+ .OH→ .H+H 2O and e aq-+H +→ .H, be neglected at room temperature, become important at higher temperatures.
Infant color preference for red is not selectively context specific.
Franklin, Anna; Gibbons, Emily; Chittenden, Katie; Alvarez, James; Taylor, Chloe
2012-10-01
It has been proposed that human infants, like nonhuman primates, respond favorably to red in hospitable contexts, yet unfavorably in hostile contexts (Maier, Barchfeld, Elliot, & Pekrun, 2009). Here, we replicate and extend the study (Maier et al., 2009) whose findings have been used to support this hypothesis. As in Maier et al., 1-year-old infants were shown a photograph of a happy or angry face before pairs of colors were presented, yet in the current study, the set of stimuli crucially included two colors that are typically preferred by infants (red and blue). The percentage of times that infants looked first at the colors was analyzed for the two emotional "contexts." Following the happy face, infants looked first at red and blue equally, but significantly more than green. Following the angry face, the pattern of looking preference was the same as following the happy face, but the variation across the three colors was reduced. Contrary to Maier et al.'s hypothesis, there was no evidence that infants are selectively averse to red in angry contexts: following the angry face, "preference" for both red and blue was reduced, but was not significantly below chance. We therefore suggest an alternative account to Maier et al.'s evolutionary hypothesis, which argues that an angry face merely removes infant color preference, potentially due to the perceptual characteristics of the angry face disrupting infants' encoding of color.
Sikora, Joanna; Broncel, Marlena; Mikiciuk-Olasik, Elżbieta
2014-01-01
Purpose. The aim of the study was to analyze the effects of two-month supplementation with chokeberry preparation on the activity of angiotensin I-converting enzyme (ACE) in patients with metabolic syndrome (MS). During the in vitro stage of the study, we determined the concentration of chokeberry extract, which inhibited the activity of ACE by 50% (IC50). Methods. The participants (n = 70) were divided into three groups: I—patients with MS who received chokeberry extract supplements, II—healthy controls, and III—patients with MS treated with ACE inhibitors. Results. After one and two months of the experiment, a decrease in ACE activity corresponded to 25% and 30%, respectively. We documented significant positive correlations between the ACE activity and the systolic (r = 0.459, P = 0.048) and diastolic blood pressure, (r = 0.603, P = 0.005) and CRP. The IC50 of chokeberry extract and captopril amounted to 155.4 ± 12.1 μg/mL and 0.52 ± 0.18 μg/mL, respectively. Conclusions. Our in vitro study revealed that chokeberry extract is a relatively weak ACE inhibitor. However, the results of clinical observations suggest that the favorable hypotensive action of chokeberry polyphenols may be an outcome of both ACE inhibition and other pleotropic effects, for example, antioxidative effect. PMID:25050143
Effects of Aronia melanocarpa Fruit Juice on Isolated Rat Hepatocytes.
Kondeva-Burdina, Magdalena; Valcheva-Kuzmanova, Stefka; Markova, Tsvetelina; Mitcheva, Mitka; Belcheva, Anna
2015-10-01
Aronia melanocarpa (Michx.) Elliot fruits are very rich in polyphenols - procyanidins, flavonoids, and phenolic acids. On rat hepatocytes, isolated by two-stepped collagenase perfusion, we investigated the effect of A. melanocarpa fruit juice (AMFJ) in two models of liver toxicity caused by (i) metabolic bioactivation of carbon tetrachloride (CCl4), and (ii) tert-butyl hydroperoxide (t-BuOOH)-induced oxidative stress. Isolated rat hepatocytes are a suitable model for hepatotoxicity studies. We determined the main parameters of the functional and metabolic status of rat hepatocytes: Cell viability (measured by trypan blue exclusion) and the levels of lactate dehydrogenase (LDH), reduced glutathione (GSH), and malondialdehyde (MDA). These parameters were used to investigate the protective effects of AMFJ in the two toxicity models. The effects of AMFJ were compared with those of silymarin. The cells were treated either with AMFJ or silymarin at increasing concentrations of 5 μg/ml, 10 μg/ml, 30 μg/ml, 50 μg/ml, and 100 μg/ml which were used for measuring of IC50. In both toxicity models - CCl4 and t-BuOOH, AMFJ showed statistically significant cytoprotective and antioxidant activities. AMFJ prevented the loss of cell viability and GSH depletion, decreased LDH leakage and MDA production. The effects of AMFJ at the concentrations of 5, 10, 30, and 50 μg/ml were similar to those of the same concentrations of silymarin, while the effect of the highest AMFJ concentration of 100 μg/ml was higher than that of the same silymarin concentration. The effects were concentration-dependent and more prominent in the t-BuOOH model, compared to those in the CCl4 model. The cytoprotective and antioxidant effects of AMFJ established in this study might be due to its polyphenolic ingredients, which could influence the cytochrome P450-mediated metabolism of the experimental hepatotoxic substances (CCl4 and t-BuOOH) and could act as free radical scavengers. The stronger effects of the highest AMFJ concentration in comparison with that of silymarin were possibly due to the combined presence of different polyphenols in the juice. On rat hepatocytes, isolated by two-stepped collagenase perfusion, we investigated the effect of Aronia melanocarpa fruit juice (AMFJ) in two models of liver toxicity caused by i) metabolic bioactivation of carbon tetrachloride (CCl4), and ii) tert-butyl hydroperoxide (t-BuOOH)-induced oxidative stress. In both toxicity models - CCl4 and t-BuOOH, AMFJ showed statistically significant cytoprotective and antioxidant activities. AMFJ prevented the loss of cell viability and GSH depletion, decreased LDH leakage and MDA production. The effects of AMFJ at the concentrations of 5, 10, 30, and 50 μg/ml were similar to those of the same concentrations of silymarin, while the effect of the highest AMFJ concentration of 100 μg/ml was higher than that of the same silymarin concentration. The effects were concentration-dependent and were more prominent in the t-BuOOH model, compared to those in the CCl4 model.
Effects of Aronia melanocarpa Fruit Juice on Isolated Rat Hepatocytes
Kondeva-Burdina, Magdalena; Valcheva-Kuzmanova, Stefka; Markova, Tsvetelina; Mitcheva, Mitka; Belcheva, Anna
2015-01-01
Background: Aronia melanocarpa (Michx.) Elliot fruits are very rich in polyphenols – procyanidins, flavonoids, and phenolic acids. Objective: On rat hepatocytes, isolated by two-stepped collagenase perfusion, we investigated the effect of A. melanocarpa fruit juice (AMFJ) in two models of liver toxicity caused by (i) metabolic bioactivation of carbon tetrachloride (CCl4), and (ii) tert-butyl hydroperoxide (t-BuOOH)-induced oxidative stress. Materials and Methods: Isolated rat hepatocytes are a suitable model for hepatotoxicity studies. We determined the main parameters of the functional and metabolic status of rat hepatocytes: Cell viability (measured by trypan blue exclusion) and the levels of lactate dehydrogenase (LDH), reduced glutathione (GSH), and malondialdehyde (MDA). These parameters were used to investigate the protective effects of AMFJ in the two toxicity models. The effects of AMFJ were compared with those of silymarin. The cells were treated either with AMFJ or silymarin at increasing concentrations of 5 μg/ml, 10 μg/ml, 30 μg/ml, 50 μg/ml, and 100 μg/ml which were used for measuring of IC50. Results: In both toxicity models – CCl4 and t-BuOOH, AMFJ showed statistically significant cytoprotective and antioxidant activities. AMFJ prevented the loss of cell viability and GSH depletion, decreased LDH leakage and MDA production. The effects of AMFJ at the concentrations of 5, 10, 30, and 50 μg/ml were similar to those of the same concentrations of silymarin, while the effect of the highest AMFJ concentration of 100 μg/ml was higher than that of the same silymarin concentration. The effects were concentration-dependent and more prominent in the t-BuOOH model, compared to those in the CCl4 model. Conclusion: The cytoprotective and antioxidant effects of AMFJ established in this study might be due to its polyphenolic ingredients, which could influence the cytochrome P450-mediated metabolism of the experimental hepatotoxic substances (CCl4 and t-BuOOH) and could act as free radical scavengers. The stronger effects of the highest AMFJ concentration in comparison with that of silymarin were possibly due to the combined presence of different polyphenols in the juice. SUMMARY On rat hepatocytes, isolated by two-stepped collagenase perfusion, we investigated the effect of Aronia melanocarpa fruit juice (AMFJ) in two models of liver toxicity caused by i) metabolic bioactivation of carbon tetrachloride (CCl4), and ii) tert-butyl hydroperoxide (t-BuOOH)-induced oxidative stress. In both toxicity models – CCl4 and t-BuOOH, AMFJ showed statistically significant cytoprotective and antioxidant activities. AMFJ prevented the loss of cell viability and GSH depletion, decreased LDH leakage and MDA production. The effects of AMFJ at the concentrations of 5, 10, 30, and 50 μg/ml were similar to those of the same concentrations of silymarin, while the effect of the highest AMFJ concentration of 100 μg/ml was higher than that of the same silymarin concentration. The effects were concentration-dependent and were more prominent in the t-BuOOH model, compared to those in the CCl4 model. PMID:27013800
Cook, David A; Castillo, Richmond M; Gas, Becca; Artino, Anthony R
2017-10-01
Measurement of motivation and cognitive load has potential value in health professions education. Our objective was to evaluate the validity of scores from Dweck's Implicit Theories of Intelligence Scale (ITIS), Elliot's Achievement Goal Questionnaire-Revised (AGQ-R) and Leppink's cognitive load index (CLI). This was a validity study evaluating internal structure using reliability and factor analysis, and relationships with other variables using the multitrait-multimethod matrix. Two hundred and thirty-two secondary school students participated in a medical simulation-based training activity at an academic medical center. Pre-activity ITIS (implicit theory [mindset] domains: incremental, entity) and AGQ-R (achievement goal domains: mastery-approach, mastery-avoidance, performance-approach, performance-avoidance), post-activity CLI (cognitive load domains: intrinsic, extrinsic, germane) and task persistence (self-directed repetitions on a laparoscopic surgery task) were measured. Internal consistency reliability (Cronbach's alpha) was > 0.70 for all domain scores except AGQ-R performance-avoidance (alpha 0.68) and CLI extrinsic load (alpha 0.64). Confirmatory factor analysis of ITIS and CLI scores demonstrated acceptable model fit. Confirmatory factor analysis of AGQ-R scores demonstrated borderline fit, and exploratory factor analysis suggested a three-domain model for achievement goals (mastery-approach, performance and avoidance). Correlations among scores from conceptually-related domains generally aligned with expectations, as follows: ITIS incremental and entity, r = -0.52; AGQ-R mastery-avoidance and performance-avoidance, r = 0.71; mastery-approach and performance-approach, r = 0.55; performance-approach and performance-avoidance, r = 0.43; mastery-approach and mastery-avoidance, r = 0.36; CLI germane and extrinsic, r = -0.35; ITIS incremental and AGQ-R mastery-approach, r = 0.34; ITIS incremental and CLI germane, r = 0.44; AGQ-R mastery-approach and CLI germane, r = 0.48 (all p < 0.001). We found no correlation between the number of task repetitions (i.e. persistence) and mastery-approach scores, r = -0.01. ITIS and CLI scores had appropriate internal structures and relationships with other variables. AGQ-R scores fit a three-factor (not four-factor) model that collapsed avoidance into one domain, although relationships of other variables with the original four domain scores generally aligned with expectations. Mastery goals are positively correlated with germane cognitive load. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
The Atmospheric Structure of Triton and Pluto
NASA Technical Reports Server (NTRS)
Elliot, James L.
1998-01-01
The goal of this research was to better determine the atmospheric structures of Triton and Pluto through further analysis of three occultation data sets obtained with the Kuiper Airborne Observatory (KAO.) As the research progressed, we concentrated our efforts on the Triton data, as this appeared to be the most fruitful. Three papers have been prepared as a result of this research. The first paper presents new results about Triton's atmospheric structure from the analysis of all ground-based stellar occultation data recorded to date, including one single-chord occultation recorded on 1993 July 10 and nine occultation lightcurves from the double-star event on 1995 August 14. These stellar occultation observations made both in the visible and in the infrared have good spatial coverage of Triton, including the first Triton central-flash observations, and are the first data to probe the altitude level 20-100 km on Triton. The small-planet lightcurve model of J. L. Elliot and L. A. Young was generalized to include stellar flux refracted by the far limb, and then fitted to the data. Values of the pressure, derived from separate immersion and emersion chords, show no significant trends with latitude, indicating that Triton's atmosphere is spherically symmetric at approximately 50 km altitude to within the error of the measurements; however, asymmetry observed in the central flash indicates the atmosphere is not homogenous at the lowest levels probed (approximately 20 km altitude). From the average of the 1995 occultation data, the equivalent isothermal temperature of the atmosphere is 47 plus or minus 1 K and the atmospheric pressure at 1400 km radius (approximately 50 km altitude) is 1.4 plus or minus 0.1 microbar. Both of these are not consistent with a model based on Voyager UVS and RSS observations in 1989. The atmospheric temperature from the occultation is 5 K colder than that predicted by the model and the observed pressure is a factor of 1.8 greater than the model. In our opinion, the disagreement in temperature and pressure is probably due to modeling problems at the microbar level, since measurements at this level have not previously been made. Alternatively, the difference could be due to seasonal change in Triton's atmospheric structure. The second paper reports observations of a recent stellar occultation by Triton which, when combined with earlier results, show that Triton has undergone a period of global warming since 1989. The most conservative estimates of the rate of temperature and surface-pressure increase during this period imply that the atmosphere is doubling in bulk every 10 years -- significantly faster than predicted by published frost model for Triton. Our results suggests that permanent polar caps on Triton play a dominant role in regulating seasonal atmospheric changes. Similar processes should also be active on Pluto. A third paper 'Global Warming on Triton' will appear in a the January 1999 issue of Sky and Telescope.
NASA Astrophysics Data System (ADS)
Elliot, J. L.
2002-09-01
Pluto's tenuous atmosphere -- detected with a widely observed stellar occultation in 1988 (Millis et al., 1993, Icarus 105, 282) -- consists primarily of N2, with trace amounts of CO and CH4. The N2 gas is in vapor-pressure equilibrium with surface ice, which should maintain a uniform temperature for the N2 ice on the surface of the body. Data from the Kuiper Airborne Observatory (KAO) for the 1988 occultation showed Pluto's middle atmosphere to be isothermal at about 105 K for at least a scale height above a radius of about 1215 km (Pluto's surface radius is 1175 +/- 25 km; Tholen & Buie 1997, in Pluto and Charon, 193). This temperature can be explained with radiative-conductive models (e.g. Yelle & Lunine 1989, Nature 339, 288; Strobel et al. 1996, Icarus 120 266), using the spectroscopically measured amount of CH4 (Young et al. 1997, Icarus, 127 258). Below the isothermal region there is an abrupt drop in the KAO occultation light curve, which has been interpreted as being caused either by (i) an absorption layer, or (ii) a sharp thermal gradient. As Pluto recedes from the sun, the diminishing solar flux provides less energy for sublimation, which may lead to a substantial drop in surface pressure. On the other hand, the emissivity change that accompanies the α - β phase transition for N2 ice may leave the surface pressure relatively unchanged from its present value (Stansberry & Yelle 1999, Icarus 141, 299). Stellar occultation observations were successfully carried out in 2002 July and August (Sicardy et al., Buie et al., and Elliot et al., this conference) from a large number of telescopes: the IRTF, UH 2.2 m, UH 0.6 m, UKIRT, CFHT, Lick 3 m, Lowell 1.8 m, Palomar 5 m, as well as 0.35 m and smaller portable telescopes. The wavelengths of these observations ranged from the visible to near IR. These new data give us a snapshot of Pluto's atmospheric structure 14 years after the initial observations and reveal changes in the structure of Pluto's atmosphere. Occultation research at MIT is supported, in part, by NASA (NAG5-10444) and NSF (AST-0073447).
NASA Astrophysics Data System (ADS)
Bosh, A. S.; Olkin, C. B.
1996-06-01
On 21 November 1995, Saturn and its rings occulted the star GSC5249-01240 (Bosh & McDonald 1992, Astron. J. 103, 983). Although the star is relatively faint (V = 11.9), other circumstances conspired to make this an excellent event: (i) the normally-bright rings were dark because the sun was crossing through the ring plane, reducing the amount of ring contribution to the background noise and therefore increasing the observed S/N, (ii) the ring opening angle was small (B ~ 3deg ), enhancing detection of low-optical-depth material, and (iii) the low sky-plane velocity allowed longer integration times without loss of spatial resolution. Thus this occultation was particularly well-suited to produce high S/N detections of low-tau ring material. We observed this atmosphere and ring occultation with the Faint Object Spectrograph (FOS) on the Hubble Space Telescope. Using the FOS in its high-speed mode, we sampled the starlight with the G650L grating, recording the stellar signal as a function of both wavelength and time. For the initial analysis of these data, the spectral information was sacrificed by binning all wavelengths together; this in turn increased the detected S/N. We performed a geometric solution for the event, using the known locations of circular ring features as fiducials (Elliot et al., Astron. J. 106, 2544). The scattered light from Saturn and the rings was modelled and subtracted from the light curves to obtain line-of-sight optical depth as a function of ring-plane radius. With these processed data we have made the first occultation detection of Saturn's innermost and very tenuous D ring. We find a line-of-sight optical depth for the thickest part of this ring of tau_ {obs} ~ 0.02. The location and morphology of this feature will be discussed. Comparison of the observed structure will be made with the previous Voyager imaging detection of this ring (Smith et al. 1981, Science 212, 163; Marley & Porco 1993, Icarus 106, 508).
NASA Technical Reports Server (NTRS)
Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.
2014-01-01
The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc, MHA) of thorium, hafnium, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE- ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Damembrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal- humic acid complexes, indicating that both methods are appropriate for examining metal speciation at conditions lower than neutral pH. We find that tetravalent metals form strong complexes with humic acids, with Kc, MHA several orders of magnitude above REE-humic complexes. Experiments were conducted at a range of dissolved HA concentrations to examine the effect of [HA]/[Th] molar ratio on Kc, MHA. At low metal loading conditions (i.e. elevated [HA]/[Th] ratios) the ThHA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios on constraining the equilibrium of MHA complexation is apparent when our estimated Kc, MHA values attained at very low metal loading conditions are compared to existing literature data. Overall, experimental data suggest that the tetravalent transition metal/-actinide-humic acid complexation is important over a wide range of pH values, including mildly acidic conditions, and thus, these complexes should be included in speciation models.
The Relationship Between KBO Colors and Kuiper-belt Plane Inclination
NASA Astrophysics Data System (ADS)
Kane, J. F.; Gulbis, A. A. S.; Elliot, J. L.
2005-08-01
The colors of Kuiper belt objects (KBOs) can indicate different compositions, environmental conditions, or formation characteristics within the Kuiper belt. Photometric color observations of these objects, combined with dynamical information, can provide insight into their composition, the extent to which space-weathering or impact gardening have played a role in surface modification, and the processes at work during the formation of our solar system. Data from the Deep Ecliptic Survey (DES; Millis et al., 2002, AJ, 123, 2083) have been used to determine the plane of the Kuiper belt, identifying "core" and "halo" populations with respect to this plane (Elliot et al. 2005, AJ, 129, 1117). Gulbis et al. (2005, Icarus, submitted) found the colors of the core KBOs, those having inclinations within approximately 4.6 degrees of the Kuiper-belt plane, to be primarily red, unlike the halo objects. We have combined newly obtained Sloan g', r', and i' observations from the 6.5-m Clay telescope at Las Campanas Observatory of 12 KBOs with previously published data to examine the transition between these populations as a function of color. By comparing the colors of objects as a function of inclination, we can establish trends distinguishing the core and halo populations. For inclination bins containing equal numbers of KBOs, we find that the percentage of red objects (B-R > median B-R of the sample) decreases in a smooth, but nonlinear fashion. This research is partially supported by an MIT fellowship, an NSF GSRF and NSF grant AST0406493.
Lupia, E; Elliot, S J; Lenz, O; Zheng, F; Hattori, M; Striker, G E; Striker, L J
1999-08-01
Nonobese diabetic (NOD) mice develop glomerulosclerosis shortly after the onset of diabetes. We showed that mesangial cells (MCs) from diabetic mice exhibited a stable phenotypic switch, consisting of both increased IGF-1 synthesis and proliferation (Elliot SJ, Striker LJ, Hattori M, Yang CW, He CJ, Peten EP, Striker GE: Mesangial cells from diabetic NOD mice constitutively secrete increased amounts of insulin-like growth factor-I. Endocrinology 133:1783-1788, 1993). Because the extracellular matrix (ECM) accumulation in diabetic glomerulosclerosis may be partly due to decreased degradation, we examined the effect of excess IGF-1 on collagen turnover and the activity of metalloproteinases (MMPs) and tissue inhibitors of metalloproteinase (TIMPs) in diabetic and nondiabetic NOD-MC. Total collagen degradation was reduced by 58 +/- 18% in diabetic NOD-MCs, which correlated with a constitutive decrease in MMP-2 activity and mRNA levels, and nearly undetectable MMP-9 activity and mRNA. TIMP levels were slightly decreased in diabetic NOD-MC. The addition of recombinant IGF-1 to nondiabetic NOD-MC resulted in a decrease in MMP-2 and TIMP activity. Furthermore, treatment of diabetic NOD-MC with a neutralizing antibody against IGF-1 increased the latent form, and restored the active form, of MMP-2. In conclusion, the excessive production of IGF-1 contributes to the altered ECM turnover in diabetic NOD-MC, largely through a reduction of MMP-2 activity. These data suggest that IGF-1 could be a major contributor to the development of diabetic glomerulosclerosis.
How do we know when patients sleep properly or why they do not?
Sjöberg, Folke; Svanborg, Eva
2013-05-15
The importance of adequate sleep for good health and immune system function is well documented as is reduced sleep quality experienced by ICU patients. In the previous issue of Critical Care, Elliot and co-workers present a well done, largest of its kind, single-center study on sleep patterns in critically ill patients. They base their study on the 'gold standard', the polysomnography technique, which is resource demanding to perform and often difficult to evaluate. The results are especially interesting as the authors not only used polysomnography in a large sample but also, in contrast to others, excluded patients with prior sleep problems. They also recorded patients' subjective sleep experiences in the ICU and thereafter in the ward (validated questionnaires) with simultaneous data collection of factors known to affect sleep in the ICU (mainly treatment interventions, light and sound disturbances). Interestingly, but not surprisingly, sleep was both quantitatively and qualitatively poor. Furthermore, there seemed to be little or no improvement over time when compared to earlier studies. This study stresses the magnitude of the sleep problem despite interventions such as earplugs and/or eyeshades. Sound disturbance was found to be the most significant but improvable factor. The study highlights the challenge and the importance of evaluating sleep in the critical care setting and the present need for alternative methods to measure it. All that in conjunction can be used to solve an important problem for this patient group.
2018-01-01
Massospondylus carinatus is a basal sauropodomorph dinosaur from the early Jurassic Elliot Formation of South Africa. It is one of the best-represented fossil dinosaur taxa, known from hundreds of specimens including at least 13 complete or nearly complete skulls. Surprisingly, the internal cranial anatomy of M. carinatus has never been described using computed tomography (CT) methods. Using CT scans and 3D digital representations, we digitally reconstruct the bones of the facial skeleton, braincase, and palate of a complete, undistorted cranium of M. carinatus (BP/1/5241). We describe the anatomical features of the cranial bones, and compare them to other closely related sauropodomorph taxa such as Plateosaurus erlenbergiensis, Lufengosaurus huenei, Sarahsaurus aurifontanalis and Efraasia minor. We identify a suite of character states of the skull and braincase for M. carinatus that sets it apart from other taxa, but these remain tentative due to the lack of comparative sauropodomorph braincase descriptions in the literature. Furthermore, we hypothesize 27 new cranial characters useful for determining relationships in non-sauropodan Sauropodomorpha, delete five pre-existing characters and revise the scores of several existing cranial characters to make more explicit homology statements. All the characters that we hypothesized or revised are illustrated. Using parsimony as an optimality criterion, we then test the relationships of M. carinatus (using BP/1/5241 as a specimen-level exemplar) in our revised phylogenetic data matrix. PMID:29340238
On the Vertical Thermal Structure of Pluto's Atmosphere
NASA Astrophysics Data System (ADS)
Strobel, Darrell F.; Zhu, Xun; Summers, Michael E.; Stevens, Michael H.
1996-04-01
A radiative-conductive model for the vertical thermal structure of Pluto's atmosphere is developed with a non-LTE treatment of solar heating in the CH43.3 μm and 2.3 μm bands, non-LTE radiative exchange and cooling in the CH47.6 μm band, and LTE cooling by CO rotational line emission. The model includes the effects of opacity and vibrational energy transfer in the CH4molecule. Partial thermalization of absorbed solar radiation in the CH43.3 and 2.3 μm bands by rapid vibrational energy transfer from the stretch modes to the bending modes generates high altitude heating at sub-microbar pressures. Heating in the 2.3 μm bands exceeds heating in 3.3 μm bands by approximately a factor of 6 and occurs predominantly at microbar pressures to generate steep temperature gradients ∼10-20 K km-1forp> 2 μbar when the surface or tropopause pressure is ∼3 μbar and the CH4mixing ratio is a constant 3%. This calculated structure may account for the "knee" in the stellar occultation lightcurve. The vertical temperature structure in the first 100 km above the surface is similar for atmospheres with Ar, CO, and N2individually as the major constituent. If a steep temperature gradient ∼20 K km-1is required near the surface or above the tropopause, then the preferred major constituent is Ar with 3% CH4mixing ratio to attain a calculated ratio ofT/M(= 3.5 K amu-1) in agreement with inferred values from stellar occultation data. However, pure Ar and N2ices at the same temperature yield an Ar vapor pressure of only ∼0.04 times the N2vapor pressure. Alternative scenarios are discussed that may yield acceptable fits with N2as the dominant constituent. One possibility is a 3 μbar N2atmosphere with 0.3% CH4that has 106 K isothermal region (T/M= 3.8 K amu-1) and ∼8 K km-1surface/tropopause temperature gradient. Another possibility would be a higher surface pressure ∼10 μbar with a scattering haze forp> 2 μbar. Our model with appropriate adjustments in the CH4density profile to Triton's inferred profile yields a temperature profile consistent with the UVS solar occultation data (Krasnopolsky, V. A., B. R. Sandel, and F. Herbert 1992.J. Geophys. Res.98, 3065-3078.) and ground-based stellar occultation data (Elliot, J. L., E. W. Dunham, and C. B. Olkin 1993.Bull. Am. Astron. Soc.25, 1106.).
NASA Astrophysics Data System (ADS)
de Winter, Niels; Goderis, Steven; van Malderen, Stijn; Vanhaecke, Frank; Claeys, Philippe
2017-04-01
Understanding the Late Cretaceous greenhouse climate is of vital importance for understanding present and future climate change. While a lot of good work has been done to reconstruct climate in this interesting period, most paleoclimatic studies have focused on long-term climate change[1]. Alternatively, multi-proxy records from marine bivalves provide us with a unique opportunity to study past climate on a seasonal scale. However, previous fossil bivalve studies have reported ambiguous results with regard to the interpretation of trace element and stable isotope proxies in marine bivalve shells[2]. One major problem in the interpretation of such records is the bivalve's vital effect and the occurrence of disequilibrium fractionation during bivalve growth. Both these problems are linked to the annual growth cycle of marine bivalves, which introduces internal effects on the incorporation of isotopes and trace elements into the shell[3]. Understanding this growth cycle in extinct bivalves is therefore of great importance for the interpretation of seasonal proxy records in their shells. In this study, three different species of extinct Late Campanian bivalves (two rudist species and one oyster species) that were found in the same stratigraphic interval are studied. Micro-X-Ray Fluorescence line scanning and mapping of trace elements such as Mg, Sr, S and Zn, calibrated by LA-ICP-MS measurements, is combined with microdrilled stable carbon and oxygen isotope analysis on the well-preserved part of the shells. Data of this multi-proxy study is compared with results from a numerical growth model written in the open-source statistics package R[4] and based on annual growth increments observed in the shells and shell thickness. This growth model is used together with proxy data to reconstruct rates of trace element incorporation into the shell and to calculate the mass balance of stable oxygen and carbon isotopes. In order to achieve this goal, 2D mapping of bivalve shell surfaces is combined with high-precision point measurements and linescans to characterize different carbonate facies within the shell and to model changes in proxy data in three dimensions. Comparison of sub-annual variations in growth rate and shell geometry with proxy data sheds light on the degree to which observed seasonal variations in geochemical proxies are dependent on internal mechanisms of shell growth as opposed to external mechanisms such as climatic and environmental change. The use of three different species of bivalve from the same paleoenvironment allows the examination of species-specific responses to environmental change. This study attempts to determine which proxies in which species of bivalve are suitable for paleoenvironmental reconstruction and will aid future paleoseasonality studies in interpreting seasonally resolved multi-proxy records. References 1 DeConto R.M., et al. Cambridge University Press; 2000. 2 Elliot M, et al., PPP 2009. 3 Steuber T. Geology. 1996. 4 R core team, 2004, www.R-project.org
NASA Astrophysics Data System (ADS)
Sun, Yanhong; Liu, Jianguo; Zhang, Xiaoli; Lin, Wei
2008-05-01
Two strains H2-410 and H2-419 were obtained from the chemically mutated survivors of wild Haematococcus pluvialis 2 by using ethyl methanesulphonate (EMS). Strains H2-410 and H2-419 showed a fast cell growth with 13% and 20% increase in biomass compared to wild type, respectively. Then H2-419-4, a fast cell growth and high astaxanthin accumulation strain, was obtained by exposing the strain H2-419 to ultraviolet radiation (UV) further. The total biomass, the astaxanthin content per cell, astaxanthin production of H2-419-4 showed 68%, 28%, and 120% increase compared to wild H. pluvialis 2, respectively. HPLC (High Performance Liquid Chromatography) data showed also an obvious proportional variation of different carotenoid compositions in the extracts of H2-419-4 and the wild type, although no peak of carotenoids appeared or disappeared. Therefore, the main compositions in strain H2-419-4, like its wild one, were free of astaxanthin, monoester, and diester of astaxanthin. The asexual reproduction in survivors after exposed to UV was not synchronous, and different from the normal synchronous asexual reproduction as the mother cells were motile instead of non-motile. Interestingly, some survivors from UV irradiation produced many mini-spores (or gamete?), the spores moved away from the mother cell gradually 4 or 5 days later. This is quite similar to sexual reproduction described by Elliot in 1934. However, whether this was sexual reproduction remains questionable, as no mating process has been observed.
Evaluation of a Music Therapy Social Skills Development Program for Youth with Limited Resources.
Pasiali, Varvara; Clark, Cherie
2018-05-21
Children living in low-resource communities are at risk for poorer socio-emotional development and academic performance. Emerging evidence supports use of group music therapy experiences to support social development through community afterschool programming. To examine the potential benefit of a music therapy social skills development program to improve social skills and academic performance of school-aged children with limited resources in an afterschool program. We used a single-group pre/post-test design, and recruited 20 students (11 females, 9 males), ages 5 to 11 years, from an afterschool program. The music therapy social skills program consisted of eight 50-minute sessions, and we measured social competence and antisocial behavior using the Home & Community Social Behavioral Scale (HCSBS; Merrell & Caldarella, 2008), and social skills, problem behaviors, and academic competence using the Social Skills Improvement System (SSIS; Gresham & Elliot, 2008a, 2008b). Only students who attended a minimum of six sessions (N = 14) were included in data analysis. Results showed no significant change in individual HBSC subscale scores; however, the total number of low-performance/high-risk skills significantly decreased. SSIS teacher results indicated significant improvement in communication, significant decrease of hyperactivity, autistic behavioral tendencies and overall problem behaviors, and marginal decreases in internalization. Parent ratings mirrored, in part, those of the teacher. Results indicated that music therapy has the potential of being an effective intervention for promoting social competence of school-aged children with limited resources, particularly in the areas of communication and low-performance/high-risk behaviors. Teaching skills through song lyrics and improvisation emerged as salient interventions.
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. %there exist high gradients in the solar rotation profile near %the surface and at the base of the convection zone (e.g. Corbard et al. 1998) %in the so-called solar tachocline (Spiegel & Zahn 1992). Therefore, the global smoothness a-priori which tends to smooth out every high gradient in the solution may not be appropriate for the study of a zone like the tachocline which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
Lithium isotope geochemistry and origin of Canadian shield brines.
Bottomley, D J; Chan, L H; Katz, A; Starinsky, A; Clark, I D
2003-01-01
Hypersaline calcium/chloride shield brines are ubiquitous in Canada and areas of northern Europe. The major questions relating to these fluids are the origin of the solutes and the concentration mechanism that led to their extreme salinity. Many chemical and isotopic tracers are used to solve these questions. For example, lithium isotope systematics have been used recently to support a marine origin for the Yellowknife shield brine (Northwest Territories). While having important chemical similarities to the Yellowknife brine, shield brines from the Sudbury/Elliot Lake (Ontario) and Thompson/Snow Lake (Manitoba) regions, which are the focus of this study, exhibit contrasting lithium behavior. Brine from the Sudbury Victor mine has lithium concentrations that closely follow the sea water lithium-bromine concentration trajectory, as well as delta6Li values of approximately -28/1000. This indicates that the lithium in this brine is predominantly marine in origin with a relatively minor component of crustal lithium leached from the host rocks. In contrast, the Thompson/Snow Lake brine has anomalously low lithium concentrations, indicating that it has largely been removed from solution by alteration minerals. Furthermore, brine and nonbrine mine waters at the Thompson mine have large delta6Li variations of approximately 30/1000, which primarily reflects mixing between deep brine with delta6Li of -35 +/- 2/1000 and near surface mine water that has derived higher delta6Li values through interactions with their host rocks. The contrary behavior of lithium in these two brines shows that, in systems where it has behaved conservatively, lithium isotopes can distinguish brines derived from marine sources.
N-15 NMR study of the immobilization of 2,4- and 2,6-dinitrotoluene in aerobic compost
Thorn, K.A.; Pennington, J.C.; Kennedy, K.R.; Cox, L.G.; Hayes, C.A.; Porter, B.E.
2008-01-01
Large-scale aerobic windrow composting has been used to bioremediate washout lagoon soils contaminated with the explosives TNT (2,4,6- trinitrotoluene) and RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine) at several sites within the United States. We previously used 15N NMR to investigate the reduction and binding of T15NT in aerobic bench -scale reactors simulating the conditions of windrow composting. These studies have been extended to 2,4-dinitrotoluene (2,4DNT) and 2,6-dinitrotoluene (2,6DNT), which, as impurities in TNT, are usually present wherever soils have been contaminated with TNT. Liquid-state 15N NMR analyses of laboratory reactions between 4-methyl-3-nitroaniline-15N, the major monoamine reduction product of 2,4DNT, and the Elliot soil humic acid, both in the presence and absence of horseradish peroxidase, indicated that the amine underwent covalent binding with quinone and other carbonyl groups in the soil humic acid to form both heterocyclic and non-heterocyclic condensation products. Liquid-state 15N NMR analyses of the methanol extracts of 20 day aerobic bench-scale composts of 2,4-di-15N-nitrotoluene and 2,6-di-15N-nitrotoluene revealed the presence of nitrite and monoamine, but not diamine, reduction products, indicating the occurrence of both dioxygenase enzyme and reductive degradation pathways. Solid-state CP/MAS 15N NMR analyses of the whole composts, however, suggested that reduction to monoamines followed by covalent binding of the amines to organic matter was the predominant pathway. ?? 2008 American Chemical Society.
NASA Astrophysics Data System (ADS)
Bordy, Emese M.; Segwabe, Tebogo; Makuke, Bonno
2010-08-01
The Mosolotsane Formation (Lebung Group, Karoo Supergroup) in the Kalahari Karoo Basin of Botswana is a scantly exposed, terrestrial red bed succession which is lithologically correlated with the Late Triassic to Early Jurassic Molteno and Elliot Formations (Karoo Supergroup) in South Africa. New evidence derived from field observations and borehole data via sedimentary facies analysis allowed the assessment of the facies characteristics, distribution and thickness variation as well as palaeo-current directions and sediment composition, and resulted in the palaeo-environmental reconstruction of this poorly known unit. Our results show that the Mosolotsane Formation was deposited in a relatively low-sinuosity meandering river system that drained in a possibly semi-arid environment. Sandstone petrography revealed mainly quartz-rich arenites that were derived from a continental block provenance dominated by metamorphic and/or igneous rocks. Palaeo-flow measurements indicate reasonably strong, unidirectional current patterns with mean flow directions from southeast and east-southeast to northwest and west-northwest. Regional thickness and facies distributions as well as palaeo-drainage indicators suggest that the main depocenter of the Mosolotsane Formation was in the central part of the Kalahari Karoo Basin. Separated from this main depocenter by a west-northwest - east-southeast trending elevated area, an additional depocenter was situated in the north-northeast part of the basin and probably formed part of the Mid-Zambezi Karoo Basin. In addition, data also suggests that further northeast-southwest trending uplands probably existed in the northwest and east, the latter separating the main Kalahari Karoo depocenter from the Tuli Basin.
Feuerbacher, Erica N; Wynne, Clive D L
2014-05-01
Previous research has indicated both petting (McIntire & Colley, 1967) and food (Feuerbacher & Wynne, 2012) have reinforcing effects on dog behavior and support social behavior towards humans (food: Elliot & King, 1960; social interaction: Brodbeck, 1954). Which type of interaction dogs prefer and which might produce the most social behavior from a dog has not been investigated. In the current study, we assessed how dogs allocated their responding in a concurrent choice between food and petting. Dogs received five 5-min sessions each. In Session 1, both food and petting were continuously delivered contingent on the dog being near the person providing the respective consequence. Across the next three sessions, we thinned the food schedule to a Fixed Interval (FI) 15-s, FI 1-min, and finally extinction. The fifth session reversed back to the original food contingency. We tested owned dogs in familiar (daycare) and unfamiliar (laboratory room) environments, and with their owner or a stranger as the person providing petting. In general, dogs preferred food to petting when food was readily available and all groups showed sensitivity to the thinning food schedule by decreasing their time allocation to food, although there were group and individual differences in the level of sensitivity. How dogs allocated their time with the petting alternative also varied. We found effects of context, familiarity of the person providing petting, and relative deprivation from social interaction on the amount of time dogs allocated to the petting alternative. © Society for the Experimental Analysis of Behavior.
Low LET radiolysis escape yields for reducing radicals and H2 in pressurized high temperature water
NASA Astrophysics Data System (ADS)
Sterniczuk, Marcin; Yakabuskie, Pamela A.; Wren, J. Clara; Jacob, Jasmine A.; Bartels, David M.
2016-04-01
Low Linear Energy Transfer (LET) radiolysis escape yields (G values) are reported for the sum (G(radH)+G(e-)aq) and for G(H2) in subcritical water up to 350 °C. The scavenger system 1-10 mM acetate/0.001 M hydroxide/0.00048 M N2O was used with simultaneous mass spectroscopic detection of H2 and N2 product. Temperature-dependent measurements were carried out with 2.5 MeV electrons from a van de Graaff accelerator, while room temperature calibration measurements were done with a 60Co gamma source. The concentrations and dose range were carefully chosen so that initial spur chemistry is not perturbed and the N2 product yield corresponds to those reducing radicals that escape recombination in pure water. In comparison with a recent review recommendation of Elliot and Bartels (AECL report 153-127160-450-001, 2009), the measured reducing radical yield is seven percent smaller at room temperature but in fairly good agreement above 150 °C. The H2 escape yield is in good agreement throughout the temperature range with several previous studies that used much larger radical scavenging rates. Previous analysis of earlier high temperature measurements of Gesc(radOH) is shown to be flawed, although the actual G values may be nearly correct. The methodology used in the present report greatly reduces the range of possible error and puts the high temperature escape yields for low-LET radiation on a much firmer quantitative foundation than was previously available.
Badescu, Magda; Badulescu, Oana; Badescu, Laurentiu; Ciocoiu, Manuela
2015-04-01
The fruits of Aronia melanocarpa Elliot (Rosaceae), (black chokeberry), and Sambucus nigra L. (Caprifoliaceae), elderberries are rich in anthocyanins. Many studies have reported that anthocyanins are beneficial in diabetes due to their capacity to stimulate insulin secretion and reduce oxidative stress. The purpose of this study is to prove the biologically active properties of polyphenols extracted from S. nigra and A. melanocarpa fruit. The study also details the influence of plant polyphenols on immune system imbalances within diabetes mellitus. Polyphenolic extract was administered to Wistar rats 0.040 g/kg body every 2 d for 16 weeks. The absorbencies of all the solutions were determined using a V-550 Able Jasco UV-VIS spectrophotometer. The immunomodulatory capacity of vegetal extracts was assessed by studying cytokines TNF-α and IFN-γ through the ELISA method and fibrinogen values. At 48 h, the anti-inflammatory effects of S. nigra and A. melanocarpa substances have been revealed by an increase of the TNF-α and IFN-γ levels in the diabetic group protected by these extracts. Seventy-two hours post-administration of both substances in the diabetic groups, the TNF-α level returns to the values read 24 h after substance administration. The vegetal extracts limit the production of fibrinogen in the diabetic rats under polyphenolic protection, the values being highly significant compared with the diabetic group. Natural polyphenols extracted from S. nigra and A. melanocarpa modulate specific and non-specific immune defenses in insulin-deficiency diabetes and reduce the inflammatory status and self-sustained pancreatic insulitis.
Planetary Rings: a Brief History of Observation and Theory
NASA Astrophysics Data System (ADS)
Nicholson, P. D.
2000-05-01
Over several centuries, and extending down to today, the ring systems encircling Saturn and the other jovian planets have provided an endless source of speculation and theorizing for astronomers, theologians, and physicists. In the past two decades they have also become a testing ground for dynamical models of more distant astrophysical disks, such as those which surround protostars and even the stellar disks of spiral galaxies. I will review some of the early theories, and their sometimes rude confrontation with observational data, starting with Christiaan Huygens and touching on seminal contributions by Laplace, Bessel, Maxwell, Barnard, Russell (of H-R diagram fame) and Jeffreys. In the modern era, observations at infrared and radio wavelengths have revealed Saturn's rings to be composed of large chunks of almost pure water ice, and to have a vertical thickness measured in tens of meters. A renaissance in planetary rings studies occurred in the period 1977--1981, first with the discoveries of the narrow, dark and non-circular rings of Uranus and the tenuous jovian ring system, and capped off by the spectacular images returned during the twin Voyager flybys of Saturn. Along with the completely unsuspected wealth of detail these observations revealed came an unwelcome problem: are the rings ancient or are we privileged to live at a special time in history? The answer to this still-vexing question may lie in the complex gravitational interactions recent studies have revealed between the rings themselves and their retinues of attendant satellites. Between the four known ring systems, we see elegant examples of Lindblad and corotation resonances (first invoked in the galactic context), electromagnetic resonances, many-armed spiral density waves and bending waves, narrow ringlets which exhibit internal modes due to a collective instability, sharp-edged gaps maintained via tidal torques from embedded moonlets, and tenuous dust belts created by meteoroid impact onto parent bodies. I will conclude with a glimpse at what may well be a dynamicist's worst nightmare --- Saturn's multi-stranded, kinky and clumpy F ring, which continues to puzzle 20 years after it was first seen. The author would like to acknowledge many discussions with Joe Burns, Jeff Cuzzi, Luke Dones, Jim Elliot, Dick French, Peter Goldreich, Mark Showalter and Bruno Sicardy, as well as generous support from NASA.
Lee, Yun-Kyung; Hur, Jin
2017-08-01
Knowledge of the heterogeneous distribution of humic substances (HS) reactivities along a continuum of molecular weight (MW) is crucial for the systems where the HS MW is subject to change. In this study, two dimensional correlation spectroscopy combined with size exclusion chromatography (2D-CoSEC) was first utilized to obtain a continuous and heterogeneous presence of copper binding characteristics within bulk HS with respect to MW. HS solutions with varying copper concentrations were directly injected into a size exclusion chromatography (SEC) system with Tris-HCl buffer as a mobile phase. Several validation tests confirmed neither structural disruption of HS nor competition effect of the mobile phase used. Similar to batch systems, fluorescence quenching was observed in the chromatograms over a wide range of HS MW. 2D-CoSEC maps of a soil-derived HS (Elliot soil humic acid) showed the greater fluorescence quenching degrees with respect to the apparent MW on the order of 12500 Da > 10600 Da > 7000 Da > 15800 Da. The binding constants calculated based on modified Stern-Volmer equation were consistent with the 2D-CoSEC results. More heterogeneity of copper binding affinities within bulk HS was found for the soil-derived HS versus an aquatic HS. The traditional fluorescence quenching titration method using ultrafiltered HS size fractions failed to delineate detailed distribution of the copper binding characteristics, exhibiting a much shorter range of the binding constants than those obtained from the 2D-CoSEC. Our proposed technique demonstrated a great potential to describe metal binding characteristics of HS at high MW resolution, providing a clear picture of the size-dependent metal-HS interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reaction times and anticipatory skills of karate athletes.
Mori, Shuji; Ohtani, Yoshio; Imanaka, Kuniyasu
2002-07-01
Two experiments were conducted to investigate the reaction times (RTs) and anticipation of karate athletes. In Experiment 1, choice RTs and simple RTs were measured with two types of stimuli. One was videotaped scenes of opponent's offensive actions, which simulated the athletes' view in real situations, and the other was static filled circles, or dots. In the choice RT task, participants were required to indicate as soon as possible whether the offensive actions would be aimed at the upper or middle level of their body, or the dot was presented either at a higher or a lower position. In the simple RT task, they were required to respond as soon as possible when the offensive action started from a static display of the opponent's ready stance, or a dot appeared on the display. The results showed significant differences between the karate athletes and the novices in the choice RT task, the difference being more marked for the video stimuli than for the dot stimuli. There was no significant difference in simple RT between the two groups of participants, for either type of stimuli. In Experiment 2, the proportions of correct responses (PCRs) were measured for video stimuli which were cut off at the seventh frame from the onset of the opponent's offensive action. The athletes yielded significantly higher PCRs than the novices. Collectively the results of the two experiments demonstrate the superior anticipatory skills of karate athletes regarding the target area of an opponent's attack (Scott, Williams, & Davids, Studies in perception and action II: Posters presented at the VIIth International conference on Event Perception and Action, Erlbaum, Hillsdale, NJ, 1993, p. 217; Wiiliams & Elliot, Journal of Sport & Exercise Psychology 21 (1999) 362), together with their advantage over novices in non-specific sensory functions (e.g., vertical discrimination).
Buckley-Geer, E. J.; Lin, H.; Drabek, E. R.; ...
2011-11-03
We report on the serendipitous discovery in the Blanco Cosmology Survey (BCS) imaging data of a z = 0.9057 galaxy that is being strongly lensed by a massive galaxy cluster at a redshift of z = 0.3838. The lens (BCS J2352-5452) was discovered while examining i- and z-band images being acquired in October 2006 during a BCS observing run. Follow-up spectroscopic observations with the GMOS instrument on the Gemini South 8m telescope confirmed the lensing nature of this system. Using weak plus strong lensing, velocity dispersion, cluster richness N 200, and fitting to an NFW cluster mass density profile, wemore » have made three independent estimates of the mass M 200 which are all very consistent with each other. The combination of the results from the three methods gives M 200 = (5.1 x 1.3) x 10 14 circle_dot, which is fully consistent with the individual measurements. The final NFW concentration c 200 from the combined fit is c 200 = 5.4 -1.1 +1.4. We have compared our measurements of M 200 and c 200 with predictions for (a) clusters from λCDM simulations, (b) lensing selected clusters from simulations, and (c) a real sample of cluster lenses. We find that we are most compatible with the predictions for λCDM simulations for lensing clusters, and we see no evidence based on this one system for an increased concentration compared to λCDM. Finally, using the flux measured from the [OII]3727 line we have determined the star formation rate (SFR) of the source galaxy and find it to be rather modest given the assumed lens magnification.« less
American views of Sir Victor Horsley in the era of Cushing.
Lehner, Kurt R; Schulder, Michael
2018-03-09
Sir Victor Horsley was a pioneering British neurosurgeon known for his numerous neurosurgical, scientific, and sociopolitical contributions. Although word of these surgical and scientific achievements quickly spread throughout Europe and North America in the late 19th century, much of modern neurosurgery's view of Horsley has been colored by a single anecdote from John Fulton's biography of Harvey Cushing. In this account, Cushing observes a frenetic Horsley hastily removing a Gasserian ganglion from a patient in the kitchen of a British mansion. Not long after, Cushing left Britain saying that he had little to learn from British neurosurgery. The authors of this paper examined contemporary views of Horsley to assess what his actual reputation was in the US and Canada. The authors conducted a thorough search of references to Horsley using the following sources: American surgical and neurosurgical textbooks; major biographies; diary entries and letters; PubMed; newspaper articles; and surgical and neurosurgical texts. The positive reception of his work is corroborated by invitations for Horsley to speak in America. Research additionally revealed that Horsley had numerous personal and professional relationships with prominent Americans in medicine, including William Osler, John Wheelock Elliot, Ernest Sachs, and (yes) Harvey Cushing. Horsley's contributions to medicine and science were heavily reported in American newspapers; outside of neurosurgery, his strong opposition to the antivivisectionists and his support for alcohol prohibition were widely reported in popular media. Horsley's contributions to neurosurgery in America are undeniable. Writings from and about prominent Americans reveal that he was viewed favorably by those who had met him. Frequent publication of his views in the American media suggests that medical professionals and the public in the US valued his contributions on scientific as well as social issues. Horsley died too young, but not without the international recognition that was rightly his.
Inskeep, William P.; Jay, Zackary J.; Macur, Richard E.; Clingenpeel, Scott; Tenney, Aaron; Lovalvo, David; Beam, Jacob P.; Kozubal, Mark A.; Shanks, W. C.; Morgan, Lisa A.; Kan, Jinjun; Gorby, Yuri; Yooseph, Shibu; Nealson, Kenneth
2015-01-01
Yellowstone Lake (Yellowstone National Park, WY, USA) is a large high-altitude (2200 m), fresh-water lake, which straddles an extensive caldera and is the center of significant geothermal activity. The primary goal of this interdisciplinary study was to evaluate the microbial populations inhabiting thermal vent communities in Yellowstone Lake using 16S rRNA gene and random metagenome sequencing, and to determine how geochemical attributes of vent waters influence the distribution of specific microorganisms and their metabolic potential. Thermal vent waters and associated microbial biomass were sampled during two field seasons (2007–2008) using a remotely operated vehicle (ROV). Sublacustrine thermal vent waters (circa 50–90°C) contained elevated concentrations of numerous constituents associated with geothermal activity including dissolved hydrogen, sulfide, methane and carbon dioxide. Microorganisms associated with sulfur-rich filamentous “streamer” communities of Inflated Plain and West Thumb (pH range 5–6) were dominated by bacteria from the Aquificales, but also contained thermophilic archaea from the Crenarchaeota and Euryarchaeota. Novel groups of methanogens and members of the Korarchaeota were observed in vents from West Thumb and Elliot's Crater (pH 5–6). Conversely, metagenome sequence from Mary Bay vent sediments did not yield large assemblies, and contained diverse thermophilic and nonthermophilic bacterial relatives. Analysis of functional genes associated with the major vent populations indicated a direct linkage to high concentrations of carbon dioxide, reduced sulfur (sulfide and/or elemental S), hydrogen and methane in the deep thermal ecosystems. Our observations show that sublacustrine thermal vents in Yellowstone Lake support novel thermophilic communities, which contain microorganisms with functional attributes not found to date in terrestrial geothermal systems of YNP. PMID:26579074
Searching for Solar System Wide Binaries with Pan-STARRS-1
NASA Astrophysics Data System (ADS)
Holman, Matthew J.; Protopapas, P.; Tholen, D. J.
2007-10-01
Roughly 60% of the observing time of the Pan-STARRS-1 (PS1) telescope will be dedicated to a "3pi steradian" survey with an observing cadence that is designed for the detection of near-Earth asteroids and slow-moving solar system bodies. Over this course of its 3.5 year cience mission, this unprecedented survey will discover nearly every asteroid, Trojan, Centaur, long-period comet, short-period comet, and trans-neptunian object (TNO) brighter than magnitude R=23. This census will be used to address a large number of questions regarding the physical and dynamical properties of the various small body populations of the solar system. Roughly 1-2% of TNOs are wide binaries with companions at separations greater than 1 arcsec and brightness differences less than 2 magnitudes (Kern & Elliot 2006; Noll et al 2007). These can be readily detected by PS1; we will carry out such a search with PS1 data. To do so, we will modify the Pan-STARRS Moving Object Processing System (MOPS) such that it will associate the components of resolved or marginally resolved binaries, link such pairs of detections obtained at different epochs, and the estimate the relative orbit of the binary. We will also determine the efficiency with which such binaries are detected as a function of the binary's relative orbit and the relative magnitudes of the components. Based on an estimated 7000 TNOs that PS1 will discover, we anticipate finding 70-140 wide binaries. The PS1 data, 60 epochs over three years, is naturally suited to determining the orbits of these objects. Our search will accurately determine the binary fraction for a variety of subclasses of TNOs.
NASA Astrophysics Data System (ADS)
Rogers, Raymond R.; Rogers, Kristina Curry; Munyikwa, Darlington; Terry, Rebecca C.; Singer, Bradley S.
2004-10-01
Karoo-equivalent rocks in the Tuli Basin of Zimbabwe are described, with a focus on the dinosaur-bearing Mpandi Formation, which correlates with the Elliot Formation (Late Triassic-Early Jurassic) in the main Karoo Basin. Isolated exposures of the Mpandi Formation along the banks of the Limpopo River consist of red silty claystones and siltstones that preserve root traces, small carbonate nodules, and hematite-coated prosauropod bones. These fine-grained facies accumulated on an ancient semi-arid floodplain. Widespread exposures of quartz-rich sandstone and siltstone representing the upper Mpandi Formation crop out on Sentinel Ranch. These strata preserve carbonate concretions and silicified root casts, and exhibit cross-bedding indicative of deposition via traction currents, presumably in stream channels. Prosauropod fossils are also preserved in the Sentinel Ranch exposures, with one particularly noteworthy site characterized by a nearly complete and articulated Massospondylus individual. An unconformity caps the Mpandi Formation in the study area, and this stratigraphically significant surface rests on a laterally-continuous zone of pervasive silicification interpreted as a silcrete. Morphologic, petrographic, and geochemical data indicate that the Mpandi silcrete formed by intensive leaching near the ground surface during prolonged hiatus. Chert clasts eroded from the silcrete are intercalated at the base of the overlying Samkoto Formation (equivalent to the Clarens Formation in the main Karoo Basin), which in turn is overlain by the Tuli basalts. These basalts, which are part of the Karoo Igneous Province, yield a new 40Ar/ 39Ar plateau age of 186.3 ± 1.2 Ma.
Bordy, Emese M.; Reid, Mhairi; Abrahams, Miengah
2016-01-01
Footprint morphology (e.g., outline shape, depth of impression) is one of the key diagnostic features used in the interpretation of ancient vertebrate tracks. Over 80 tridactyl tracks, confined to the same bedding surface in the Lower Jurassic Elliot Formation at Mafube (eastern Free State, South Africa), show large shape variability over the length of the study site. These morphological differences are considered here to be mainly due to variations in the substrate rheology as opposed to differences in the trackmaker’s foot anatomy, foot kinematics or recent weathering of the bedding surface. The sedimentary structures (e.g., desiccation cracks, ripple marks) preserved in association with and within some of the Mafube tracks suggest that the imprints were produced essentially contemporaneous and are true dinosaur tracks rather than undertracks or erosional remnants. They are therefore valuable not only for the interpretation of the ancient environment (i.e., seasonally dry river channels) but also for taxonomic assessments as some of them closely resemble the original anatomy of the trackmaker’s foot. The tracks are grouped, based on size, into two morphotypes that can be identified as Eubrontes-like and Grallator-like ichnogenera. The Mafube morphotypes are tentatively attributable to large and small tridactyl theropod trackmakers, possibly to Dracovenator and Coelophysis based on the following criteria: (a) lack of manus impressions indicative of obligate bipeds; (b) long, slender-digits that are asymmetrical and taper; (c) often end in a claw impression or point; and (d) the tracks that are longer than broad. To enable high-resolution preservation, curation and subsequent remote studying of the morphological variations of and the secondary features in the tracks, low viscosity silicone rubber was used to generate casts of the Mafube tracks. PMID:27635310
NASA Astrophysics Data System (ADS)
Sromovsky, L. A.; Fry, P. M.; Baines, K. H.; Dowling, T. E.
2001-02-01
Near-IR groundbased observations coordinated with Wide Field Planetary Camera 2 (WFPC2) HST observations (Sromovsky et al.Icarus149, 416-434, 459-488) provide new insights into the variations of Neptune and Triton over a variety of time scales. From 1996 WFPC2 imaging we find that a broad circumpolar nonaxisymmetric dark band dominates Neptune's lightcurve at 0.467 μm, while three discrete bright features dominate the lightcurve at longer wavelengths, with amplitudes of 0.5% at 0.467 μm and 22% at 0.89 μm, but of opposite phases. The 0.89-μm modulation in 1994, estimated at 39%, is close to the 50% modulation observed during the 1986 "outburst" documented by Hammel et al. (1992, Icarus99, 363-367), suggesting that the unusual 1994 cloud morphology might also have been present in 1986. Lightcurve amplitudes in J-K bands, from August 1996 IRTF observations, are comparable to those observed in 1977 (D. P. Cruikshank 1978, Astrophys. J. Lett.220, 57-59) but significantly larger than the 1981 amplitudes of M. J. S. Belton et al. (1981, Icarus45, 263-273). The 1996 disk-integrated albedos of Neptune at H-K wavelengths are 2-7 times smaller than the 1977 values of U. Fink and S. Larson (1979, Astrophys. J.233, 1021-1040), which can be explained with about 1/2-1/4 of the upper level cloud opacity being present in 1996. A simplified three-layer model of cloud structure applied to CCD wavelengths implies ˜7% reflectivity at 1.3 bars (at λ=0.55 μm, decreasing as λ -0.94) and ˜1% at 100-150 mbars. To fit the WFPC2 observations and those of E. Karkoschka (1994, Icarus111, 174-192), the putative H 2S cloud between 3.8 and 7-9 bars must have a strong decrease in reflectivity between 0.5 and 0.7 μm, as previously determined by K. H. Baines and W. H. Smith (1990, Icarus85, 65-108). To match our 1996 IRTF results, this cloud must have another substantial drop in reflectivity at near-IR wavelengths, to a level of 0-5%, corresponding to single-scattering albedos of ˜0-0.3. The model that fits our near-IR observations on 13 August 1996 can reproduce the magnitudes of the dramatic 1976 "outburst" (R. R. Joyce et al. 1977, Astrophys. J.214, 657-662) by increasing the upper cloud fraction to 6% (from ˜1%) and lowering its effective pressure to ˜90 mbars (from 151 mbars). Triton's disk-integrated albedo from HST imagery at 11 wavelengths from 0.25 to 0.9 μm are consistent with previous groundbased and Voyager measurements, thus providing no evidence for the albedo decrease suggested by Triton's recent warming (J. L. Elliot et al. 1998 Nature393, 765-767). Triton's lightcurve inferred from 1994-1996 WFPC2 observations has about twice the amplitude inferred from 1989 Voyager models for the UV to long visible range (J. Hillier et al. 1991, J. Geophys. Res.96, 19,211-19,215).
NASA Astrophysics Data System (ADS)
Burberry, C. M.; Cannon, D. L.; Engelder, T.; Cosgrove, J. W.
2010-12-01
The Sawtooth Range forms part of the Montana Disturbed Belt in the Front Ranges of the Rocky Mountains, along strike from the Alberta Syncline in the Canadian Rockies. The belt developed in the footwall to the Lewis Thrust during the Sevier orogeny and is similar in deformation style to the Canadian Foothills, with a series of stacked thrust sheets carrying Palaeozoic carbonates. The Sawtooth Range can be divided into an inner and outer deformed belt, separated by exposed fold structures in the overlying clastic sequence. Structures in the deformed belts plunge into the culmination of the NE-trending Scapegoat-Bannatyne trend, part of the Great Falls Tectonic Zone (GFTZ). Other mapped faults, including the Pendroy fault zone to the north, parallel this trend. A number of mechanisms have been proposed for the development of primary arcs in fold-thrust belts, including linkage of two thrust belts with different strikes, differential transport of segments of the belt, the geometry of the indentor, local plate heterogeneity and pre-existing basement configuration. Arcuate belts may also develop as a result of later bending of an initially straight orogen. In the Swift Dam area, part of the outer belt of the Sawtooth Range, the strike of the belt changes from 165 to 150. This apparent change in strike is accommodated by a sinistral lateral ramp in the Swift Dam Thrust. In addition, this outer belt becomes broader to the north in the Swift Dam region. However, the outer belt becomes extremely narrow in the Teton Canyon region to the south, and the deformation front is characterised by an intercutaneous wedge structure, rather than the trailing-edge imbricate fan seen to the north. A similar imbricate fan structure is seen to the south, in the Sun River Canyon region, corresponding well to the classic model of a deformation belt governed by a dominant thrust sheet, after Boyer & Elliot. The Sawtooth Range can be described as an active-roof duplex in the footwall to the dominant Lewis thrust slab. Analysis of the transport directions of the thrust sheets in the Range implies that the inner arcuate belt is a secondary arc, but that the later, outer arcuate belt formed by divergent transport. This two-stage development model is strongly influenced by the basement configuration. The deformation front of the outer arc is governed by NNW-striking Proterozoic normal fault structures. The entire Sawtooth Range duplex is uplifted over an earlier, NE-trending basement structure (the GFTZ), forming a termination in the Lewis slab. The interaction of these two fault trends allows the development of a linear deformation front in the foreland Jurassic-Cretaceous sequence, but an arcuate belt in the Palaeozoic carbonate sheets. Thus, the width and style of the outer arcuate belt also varies along the strike of the belt.
Results from the 2010 Feb 14 and July 4 Pluto Occultations
NASA Astrophysics Data System (ADS)
Young, Leslie; Sicardy, B.; Widemann, T.; Brucker, M. J.; Buie, M. W.; Fraser, B.; Van Heerden, H.; Howell, R. R.; Lonergan, K.; Olkin, C. B.; Reitsema, H. J.; Richter, A.; Sepersky, T.; Wasserman, L. H.; Young, E. F.
2010-10-01
The Portable High-speed Occultation Telescope (PHOT) group observed two occultations by Pluto in 2010. The first, of a I=9.3 magnitudue star on 2010 Feb 14, was organized by the Meudon occultation group, with the PHOT group as collaborators. For this bright but low-elevation event, we deployed to three sites in Europe: Obs. Haute Provence, France (0.8-m; L. Young, H. Reitsema), Leopold Figl, Austria (1.5-m; E. Young), and Apline Astrovillage, Lu, Switzerland (0.36-m; C. Olkin, L. Wasserman). We obtained a lightcurve at Lu under clear conditions, which will be combined with two other lightcurves from the Meudon group, from Sisteron and Pic du Midi, France. We observed the second Pluto occultation, of a I=13.2 star on 2010 July 4 UT, from four sites in South Africa: with our portable telescope near Upington (0.36-m; M. Buie, L. Wasserman), the Boyden telescope in Bloemfontein (1.5-m; L. Young, M. Brucker), the Innes telescope in Johannesburg (0.67-m; T. Sepersky, B. Fraser), and the telescope at Aloe Ridge north of Johannesburg (0.62-m; R. Howell, K. Lonergan, A. Richter). Upington was cloudy, Boyden had heavy scattered clouds, and Innes suffered from haze and telescope mechanical problems. A lightcurve was obtained from Aloe Ridge under clear conditions. Data was also obtained by Karl-Ludwig Bath & Thomas Sauer at Hakos, Namibia and by Berto Monard of ASSA near Pretoria, South Africa. The length of the Aloe Ridge chord suggests it is nearly central. These observations give us four contiguous years in which we observed one or more Pluto occultations, providing constraints on the seasonal evolution of Pluto's atmosphere. Thanks are due to Marcelo Assafin and Jim Elliot for sharing predictions prior to the July event. This work was supported, in part, by NASA PAST NNX08A062G.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley-Geer, E. J.; Lin, H.; Drabek, E. R.
2011-11-20
We report on the serendipitous discovery in the Blanco Cosmology Survey (BCS) imaging data of a z = 0.9057 galaxy that is being strongly lensed by a massive galaxy cluster at a redshift of z = 0.3838. The lens (BCS J2352-5452) was discovered while examining i- and z-band images being acquired in 2006 October during a BCS observing run. Follow-up spectroscopic observations with the Gemini Multi-Object Spectrograph instrument on the Gemini-South 8 m telescope confirmed the lensing nature of this system. Using weak-plus-strong lensing, velocity dispersion, cluster richness N{sub 200}, and fitting to a Navarro-Frenk-White (NFW) cluster mass density profile,more » we have made three independent estimates of the mass M{sub 200} which are all very consistent with each other. The combination of the results from the three methods gives M{sub 200} = (5.1 {+-} 1.3) Multiplication-Sign 10{sup 14} M{sub Sun }, which is fully consistent with the individual measurements. The final NFW concentration c{sub 200} from the combined fit is c{sub 200} = 5.4{sup +1.4}{sub -1.1}. We have compared our measurements of M{sub 200} and c{sub 200} with predictions for (1) clusters from {Lambda}CDM simulations, (2) lensing-selected clusters from simulations, and (3) a real sample of cluster lenses. We find that we are most compatible with the predictions for {Lambda}CDM simulations for lensing clusters, and we see no evidence based on this one system for an increased concentration compared to {Lambda}CDM. Finally, using the flux measured from the [O II]3727 line we have determined the star formation rate of the source galaxy and find it to be rather modest given the assumed lens magnification.« less
New insights on multiplicity and clustering in Taurus.
NASA Astrophysics Data System (ADS)
Joncour, Isabelle; Duchene, Gaspard; Moraux, Estelle; Mundy, Lee
2018-01-01
Multiplicity and clustering of young stars are critical clues to constraint star formation process. The Taurus molecular complex is the archetype of a quiescent star forming region that may retain primeval signature of star formation.Using statistical and clustering tools such as nearest neighbor statistics, correlation functions and the density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, this work reveals new spatial substructures in Taurus.We have identified unexpected ultra wide pairs (UWPs) candidates of high order multiplicity in Taurus in the 5-60 kAU separation range (Joncour et al 2017), beyond the separation assessed for wide pairs (Kraus & Hillenbrand 2009).Our work reveals 20 local stellar substructures, the Nested Elementary Structures (NESTs). These NESTs contain nearly half the stars of Taurus and 75% of the Class 0/I objects probing that they are the preferred sites of star formation (Joncour et al, sub.). The NESTs size ranges from few kAU up to 80 kAU making a length scale bridge between wide pairs and loose group (few hundreds kAU, Kirk & Myers, 2011). The NESTs mass ranges from 0.5-10 solar mass. The balance between Class I, II and III in NESTs suggests that they may be ordered as an evolutionary temporal scheme, some of them got infertile, while other shelter stars in infancy.The UWPs and the NESTs may be pristine imprints of their spatial configuration at birth. The UWPs population may result from a cascade fragmentation scenario of the natal molecular core. They could be the older counterparts, to the 0.5 Myr prestellar cores/Class 0 multiple objects observed at radio/millimeter wavelengths (Tobin et al 2010, 2016) and the precursors of the large number of UWPs (10–100 kAU) recently identified in older moving groups (Floriano-Alonso et al, 2015 ; Elliot et al 2016). The NESTs may result from the gravitational collapse of a gas clump that fragments to give a tight collection of stars within few millions years.This project has been partly supported by the StarFormMapper project funded by the European Union's Horizon 2020 Research and Innovation Action (RIA) program under grant agreement number 687528.
de Lusignan, Simon; Correa, Ana; Pebody, Richard; Yonova, Ivelina; Smith, Gillian; Byford, Rachel; Pathirannehelage, Sameera Rankiri; McGee, Christopher; Elliot, Alex J; Hriskova, Mariya; Ferreira, Filipa Im; Rafi, Imran; Jones, Simon
2018-04-30
The Royal College of General Practitioners Research and Surveillance Centre comprises more than 150 general practices, with a combined population of more than 1.5 million, contributing to UK and European public health surveillance and research. The aim of this paper was to report gender differences in the presentation of infectious and respiratory conditions in children and young adults. Disease incidence data were used to test the hypothesis that boys up to puberty present more with lower respiratory tract infection (LRTI) and asthma. Incidence rates were reported for infectious conditions in children and young adults by gender. We controlled for ethnicity, deprivation, and consultation rates. We report odds ratios (OR) with 95% CI, P values, and probability of presenting. Boys presented more with LRTI, largely due to acute bronchitis. The OR of males consulting was greater across the youngest 3 age bands (OR 1.59, 95% CI 1.35-1.87; OR 1.13, 95% CI 1.05-1.21; OR 1.20, 95% CI 1.09-1.32). Allergic rhinitis and asthma had a higher OR of presenting in boys aged 5 to 14 years (OR 1.52, 95% CI 1.37-1.68; OR 1.31, 95% CI 1.17-1.48). Upper respiratory tract infection (URTI) and urinary tract infection (UTI) had lower odds of presenting in boys, especially those older than 15 years. The probability of presenting showed different patterns for LRTI, URTI, and atopic conditions. Boys younger than 15 years have greater odds of presenting with LRTI and atopic conditions, whereas girls may present more with URTI and UTI. These differences may provide insights into disease mechanisms and for health service planning. ©Simon de Lusignan, Ana Correa, Richard Pebody, Ivelina Yonova, Gillian Smith, Rachel Byford, Sameera Rankiri Pathirannehelage, Christopher McGee, Alex J. Elliot, Mariya Hriskova, Filipa IM Ferreira, Imran Rafi, Simon Jones. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 30.04.2018.
Population trends of Alaskan seabirds
Hatch, Scott A.
1993-01-01
Ornithology in Alaska formally began with the observations of Georg Wilhelm Steller during Vitus Bering's voyage of discovery in 1741. Steller's journal makes brief mention of various seabird species he encountered during his travels in the Gulf of Alaska and Aleutian Islands (Frost and Engel 1988). For more than 100 years following Steller, the Russian-American Company was active in commercial fur harvesting throughout southern coastal Alaska, but this period saw little contribution to a scientific understanding of the region's avifauna. With the purchase of Alaska by the United States in 1867, a period of American exploration began that included significant work by pioneering naturalists such as Dall (1873, 1874), Elliot (1881), Nelson (1883), and Turner (1885, 1886). While this activity established a comprehensive list and general knowledge of the distribution of seabird species occurring in Alaska, early observers provided no quantitative estimates of abundance for any colony or region.The observations of Heath (1915) and Willett (1912, 1915, 1917) at two locations in southeastern Alaska are notable for including the first numerical estimates of any seabird populations for comparison with recent data. Willett's (1912, 1915) estimates of 13 species are given in Table 1 with results from a 1976 Survey at Forrester Island (DeGange et al. 1977) and a 1981 survey at St. Lazaria Island (Nelson et al 1982). In the aggregate, seabird numbers appeared to increase dramatically at both sites, but the differences may be largely artificial. Because Willett (1915) did not employ rigorous sampling methods, DeGange et al. (1977) surmised that he grossly underestimated the populations of burrowing species such as storm-petrels, Cassin's Auklets. and Rhinoceros Aulkets. Nelson et al. (1982) offered a similar interpretation of total storm-petrel numbers at St. Lazaria, but felt that a shift in the species ratio of Leach's and Fork-tailed Storm-Petrels had likely occurred. It seems reasonably certain that real changes in some of the open-nesters like Common Murres (down at Forrester Island, up at St. Lazaria) and Glaucous-winged and Herring Gulls (absent or down at both sites) have occurred since Willet's time.
Di Paolo, Marco; Bugelli, Valentina; Di Luca, Alessandro; Turillazzi, Emanuela
2014-11-20
Irrigation or washouts of the bladder are usually performed in various clinical settings. In the 1980s Elliot and colleagues argued that urothelial damage could occur after washouts and irrigations of the bladder. The exact mechanism underlying urothelial damage has not yet been discovered. To our knowledge, this is the first report of fatal fluid overload and pulmonary edema, due to urothelium disruption occurring during bladder irrigation, approached performing complete histological and immunohistochemical investigation on bladder specimens. The exposed case deserves attention since it demonstrates that, although very rarely, irrigation or washouts of the bladder may have unexpected serious clinical consequences. An 85 year-old Caucasian man, unable to eat independently and whose fluid intake was controlled, underwent continuous bladder irrigation with a 3-way catheter due to a severe episode of macrohematuria. During the third day of hospitalization, while still undergoing bladder irrigation, he suddenly experienced extreme shortness of breath, breathing difficulties, and cough with frothy sputum. His attending nurse immediately noted that there was no return of the fluid (5 liters) introduced through bladder irrigation. He was treated urgently with hemodialysis. At the beginning of the dialysis treatment, the patient had gained 7.4 kg since the previous measurement (24 hours prior) without any clear explanation. Although a significant weight loss (from 81 to 76 kg) due to the dialysis procedure, the patient died shortly after the final treatment. The autopsy revealed that the brain and the lungs were heavily edematous. Microscopic examination of bladder specimens revealed interstitial and mucosal swelling, and loss of the superficial cell layer. Intermediate and basal urothelial cells were preserved. Altogether the above mentioned findings were suggestive of a diffuse disruption of the urothelium. In conclusion the death of the man was attributed to an acute severe pulmonary edema due to massive fluid absorption. Our case demonstrates that urothelium disruption may occur during irrigation and washouts of the bladder, also in the absence of other well-known predisposing conditions. Inappropriate use of bladder irrigation should be avoided and a close attention is required of the fluid balance is mandatory when irrigating the bladder.
Design of a vehicle based system to prevent ozone loss
NASA Technical Reports Server (NTRS)
Lynn, Sean R.; Bunker, Deborah; Hesbach, Thomas D., Jr.; Howerton, Everett B.; Hreinsson, G.; Mistr, E. Kirk; Palmer, Matthew E.; Rogers, Claiborne; Tischler, Dayna S.; Wrona, Daniel J.
1993-01-01
Reduced quantities of ozone in the atmosphere allow greater levels of ultraviolet light (UV) radiation to reach the earth's surface. This is known to cause skin cancer and mutations. Chlorine liberated from Chlorofluorocarbons (CFC's) and natural sources initiate the destruction of stratospheric ozone through a free radical chain reaction. The project goals are to understand the processes which contribute to stratospheric ozone loss, examine ways to prevent ozone loss, and design a vehicle-based system to carry out the prevention scheme. The 1992/1993 design objectives were to accomplish the first two goals and define the requirements for an implementation vehicle to be designed in detail starting next year. Many different ozone intervention schemes have been proposed though few have been researched and none have been tested. A scheme proposed by R.J. Cicerone, Scott Elliot and R.P.Turco late in 1991 was selected because of its research support and economic feasibility. This scheme uses hydrocarbon injected into the Antarctic ozone hole to form stable compounds with free chlorine, thus reducing ozone depletion. Because most polar ozone depletion takes place during a 3-4 week period each year, the hydrocarbon must be injected during this time window. A study of the hydrocarbon injection requirements determined that 100 aircraft traveling Mach 2.4 at a maximum altitude of 66,000 ft. would provide the most economic approach to preventing ozone loss. Each aircraft would require an 8,000 nm. range and be able to carry 35,000 lbs. of propane. The propane would be stored in a three-tank high pressure system. Missions would be based from airport regions located in South America and Australia. To best provide the requirements of mission analysis, an aircraft with L/D(sub cruise) = 10.5, SFC = 0.65 (the faculty advisor suggested that this number is too low) and a 250,000 lb TOGW was selected as a baseline. Modularity and multi-role functionality were selected to be key design features. Modularity provides ease of turnaround for the down-time critical mission. Multi-role functionality allows the aircraft to be used beyond its design mission, perhaps as an High Speed Civil Transport (HSCT) or for high altitude research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Steven A.; Lipinski, Ronald J.; Pandya, Tara
2005-02-06
Heat Pipe Reactors (HPR) for space power conversion systems offer a number of advantages not easily provided by other systems. They require no pumping, their design easily deals with freezing and thawing of the liquid metal, and they can provide substantial levels of redundancy. Nevertheless, no reactor has ever been operated and cooled with heat pipes, and the startup and other operational characteristics of these systems remain largely unknown. Signification deviations from normal reactor heat removal mechanisms exist, because the heat pipes have fundamental heat removal limits due to sonic flow issues at low temperatures. This paper proposes an earlymore » prototypic test of a Heat Pipe Reactor (using existing 20% enriched nuclear fuel pins) to determine the operational characteristics of the HPR. The proposed design is similar in design to the HOMER and SAFE-300 HPR designs (Elliot, Lipinski, and Poston, 2003; Houts, et. al, 2003). However, this reactor uses existing UZrH fuel pins that are coupled to potassium heat pipes modules. The prototype reactor would be located in the Sandia Annular Core Research Reactor Facility where the fuel pins currently reside. The proposed reactor would use the heat pipes to transport the heat from the UZrH fuel pins to a water pool above the core, and the heat transport to the water pool would be controlled by adjusting the pressure and gas type within a small annulus around each heat pipe. The reactor would operate as a self-critical assembly at power levels up to 200 kWth. Because the nuclear heated HPR test uses existing fuel and because it would be performed in an existing facility with the appropriate safety authorization basis, the test could be performed rapidly and inexpensively. This approach makes it possible to validate the operation of a HPR and also measure the feedback mechanisms for a typical HPR design. A test of this nature would be the world's first operating Heat Pipe Reactor. This reactor is therefore called 'HPR-1'.« less
Hot Electron Injection into Uniaxially Strained Silicon
NASA Astrophysics Data System (ADS)
Kim, Hyun Soo
In semiconductor spintronics, silicon attracts great attention due to the long electron spin lifetime. Silicon is also one of the most commonly used semiconductor in microelectronics industry. The spin relaxation process of diamond crystal structure such as silicon is dominant by Elliot-Yafet mechanism. Yafet shows that intravalley scattering process is dominant. The conduction electron spin lifetime measured by electron spin resonance measurement and electronic measurement using ballistic hot electron method well agrees with Yafet's theory. However, the recent theory predicts a strong contribution of intervalley scattering process such as f-process in silicon. The conduction band minimum is close the Brillouin zone edge, X point which causes strong spin mixing at the conduction band. A recent experiment of electric field-induced hot electron spin relaxation also shows the strong effect of f-process in silicon. In uniaxially strained silicon along crystal axis [100], the suppression of f-process is predicted which leads to enhance electron spin lifetime. By inducing a change in crystal structure due to uniaxial strain, the six fold degeneracy becomes two fold degeneracy, which is valley splitting. As the valley splitting increases, intervalley scattering is reduced. A recent theory predicts 4 times longer electron spin lifetime in 0.5% uniaxially strained silicon. In this thesis, we demonstrate ballistic hot electron injection into silicon under various uniaxial strain. Spin polarized hot electron injection under strain is experimentally one of the most challenging part to measure conduction electron spin lifetime in silicon. Hot electron injection adopts tunnel junction which is a thin oxide layer between two conducting materials. Tunnel barrier, which is an oxide layer, is only 4 ˜ 5 nm thick. Also, two conducting materials are only tens of nanometer. Therefore, under high pressure to apply 0.5% strain on silicon, thin films on silicon substrate can be easily destroyed. In order to confirm the performance of tunnel junction, we use tunnel magnetoresistance(TMR). TMR consists of two kinds of ferromagnetic materials and an oxide layer as tunnel barrier in order to measure spin valve effect. Using silicon as a collector with Schottky barrier interface between metal and silicon, ballistic hot spin polarized electron injection into silicon is demonstrated. We also observed change of coercive field and magnetoresistance due to modification of local states in ferromagnetic materials and surface states at the interface between metal and silicon due to strain.
Ice Mass Fluctuations and Earthquake Hazard
NASA Technical Reports Server (NTRS)
Sauber, J.
2006-01-01
In south central Alaska, tectonic strain rates are high in a region that includes large glaciers undergoing ice wastage over the last 100-150 years [Sauber et al., 2000; Sauber and Molnia, 2004]. In this study we focus on the region referred to as the Yakataga segment of the Pacific-North American plate boundary zone in Alaska. In this region, the Bering and Malaspina glacier ablation zones have average ice elevation decreases from 1-3 meters/year (see summary and references in Molnia, 2005). The elastic response of the solid Earth to this ice mass decrease alone would cause several mm/yr of horizontal motion and uplift rates of up to 10-12 mm/yr. In this same region observed horizontal rates of tectonic deformation range from 10 to 40 mm/yr to the north-northwest and the predicted tectonic uplift rates range from -2 mm/year near the Gulf of Alaska coast to 12mm/year further inland [Savage and Lisowski, 1988; Ma et al, 1990; Sauber et al., 1997, 2000, 2004; Elliot et al., 2005]. The large ice mass changes associated with glacial wastage and surges perturb the tectonic rate of deformation at a variety of temporal and spatial scales. The associated incremental stress change may enhance or inhibit earthquake occurrence. We report recent (seasonal to decadal) ice elevation changes derived from data from NASA's ICESat satellite laser altimeter combined with earlier DEM's as a reference surface to illustrate the characteristics of short-term ice elevation changes [Sauber et al., 2005, Muskett et al., 2005]. Since we are interested in evaluating the effect of ice changes on faulting potential, we calculated the predicted surface displacement changes and incremental stresses over a specified time interval and calculated the change in the fault stability margin using the approach given by Wu and Hasegawa [1996]. Additionally, we explored the possibility that these ice mass fluctuations altered the seismic rate of background seismicity. Although we primarily focus on evaluating the influence of ice mass changes since the end of the little Ice Age, the study is partially motivated by paleoseismic evidence from Yakataga and Kodiak regions which suggests that earlier glacier retreat may be associated with large earthquakes [Sauber et al., 2000; Carver et al., 2003].
Todkill, Dan; Loveridge, Paul; Elliot, Alex J; Morbey, Roger A; Edeghere, Obaghe; Rayment-Bishop, Tracy; Rayment-Bishop, Chris; Thornes, John E; Smith, Gillian
2017-12-01
Introduction The Public Health England (PHE; United Kingdom) Real-Time Syndromic Surveillance Team (ReSST) currently operates four national syndromic surveillance systems, including an emergency department system. A system based on ambulance data might provide an additional measure of the "severe" end of the clinical disease spectrum. This report describes the findings and lessons learned from the development and preliminary assessment of a pilot syndromic surveillance system using ambulance data from the West Midlands (WM) region in England. Hypothesis/Problem Is an Ambulance Data Syndromic Surveillance System (ADSSS) feasible and of utility in enhancing the existing suite of PHE syndromic surveillance systems? An ADSSS was designed, implemented, and a pilot conducted from September 1, 2015 through March 1, 2016. Surveillance cases were defined as calls to the West Midlands Ambulance Service (WMAS) regarding patients who were assigned any of 11 specified chief presenting complaints (CPCs) during the pilot period. The WMAS collected anonymized data on cases and transferred the dataset daily to ReSST, which contained anonymized information on patients' demographics, partial postcode of patients' location, and CPC. The 11 CPCs covered a broad range of syndromes. The dataset was analyzed descriptively each week to determine trends and key epidemiological characteristics of patients, and an automated statistical algorithm was employed daily to detect higher than expected number of calls. A preliminary assessment was undertaken to assess the feasibility, utility (including quality of key indicators), and timeliness of the system for syndromic surveillance purposes. Lessons learned and challenges were identified and recorded during the design and implementation of the system. The pilot ADSSS collected 207,331 records of individual ambulance calls (daily mean=1,133; range=923-1,350). The ADSSS was found to be timely in detecting seasonal changes in patterns of respiratory infections and increases in case numbers during seasonal events. Further validation is necessary; however, the findings from the assessment of the pilot ADSSS suggest that selected, but not all, ambulance indicators appear to have some utility for syndromic surveillance purposes in England. There are certain challenges that need to be addressed when designing and implementing similar systems. Todkill D , Loveridge P , Elliot AJ , Morbey RA , Edeghere O , Rayment-Bishop T , Rayment-Bishop C , Thornes JE , Smith G . Utility of ambulance data for real-time syndromic surveillance: a pilot in the West Midlands region, United Kingdom. Prehosp Disaster Med. 2017;32(6):667-672.
Naidu, Avantika; Brown, David; Roth, Elliot
2018-05-03
Body weight support treadmill training protocols in conjunction with other modalities are commonly used to improve poststroke balance and walking function. However, typical body weight support paradigms tend to use consistently stable balance conditions, often with handrail support and or manual assistance. In this paper, we describe our study protocol, which involved 2 unique body weight support treadmill training paradigms of similar training intensity that integrated dynamic balance challenges to help improve ambulatory function post stroke. The first paradigm emphasized walking without any handrails or manual assistance, that is, hands-free walking, and served as the control group, whereas the second paradigm incorporated practicing 9 essential challenging mobility skills, akin to environmental barriers encountered during community ambulation along with hands-free walking (ie hands-free + challenge walking). We recruited individuals with chronic poststroke hemiparesis and randomized them to either group. Participants trained for 6 weeks on a self-driven, robotic treadmill interface that provided body weight support and a safe gait-training environment. We assessed participants at pre-, mid- and post 6 weeks of intervention-training, with a 6-month follow-up. We hypothesized greater walking improvements in the hands-free + challenge walking group following training because of increased practice opportunity of essential mobility skills along with hands-free walking. We assessed 77 individuals with chronic hemiparesis, and enrolled and randomized 30 individuals poststroke for our study (hands-free group=19 and hands-free + challenge walking group=20) from June 2012 to January 2015. Data collection along with 6-month follow-up continued until January 2016. Our primary outcome measure is change in comfortable walking speed from pre to post intervention for each group. We will also assess feasibility, adherence, postintervention efficacy, and changes in various exploratory secondary outcome measures. Additionally, we will also assess participant responses to a study survey, conducted at the end of training week, to gauge each group's training experiences. Our treadmill training paradigms, and study protocol represent advances in standardized approaches to selecting body weight support levels without the necessity for using handrails or manual assistance, while progressively providing dynamic challenges for improving poststroke ambulatory function during rehabilitation. ClinicalTrials.gov NCT02787759; https://clinicaltrials.gov/ct2/show/NCT02787759 (Archived by Webcite at http://www.webcitation.org/6yJZCrIea). ©Avantika Naidu, David Brown, Elliot Roth. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 03.05.2018.
NASA Astrophysics Data System (ADS)
Bigu, J.; Elliott, J.
1994-05-01
An instrument has been developed for continuous monitoring of 220Rn. The method of data analysis is based on delayed coincidences between 220Rn and 216Po. The instrument basically consists of a scaler equipped with a photomultiplier tube (PMT) to which a scintillation cell (SC) of the flow through type is optically coupled. The scaler is equipped with a pulse output (P/O) port which provides a TTL pulse, +5 V in amplitude and 5 to 10 μs duration for each nuclear event recorded by the SC and its associated electronic circuitry. The P/O port is connected to a 32 bit counter/timer unit operating at 1 MHz which records and stores the time of arrival of pulses. For laboratory use, the counter/timer is connected to the serial port of a laptop PC. However, for field applications, where space and weight pose severe practical limitations, the PC is substituted by an expanded counter/timer unit which incorporates a muprocessor for data analysis, a LCD for data display, and a keypad to key in function instructions. Furthermore, some additional hardware permits the measurement of 220Rn flux density, J( 220Rn) , from soils and other materials. Because total α-particle count, as well as delayed (α - α) coincidence rates are recorded in two separate channels, the method permits the measurement of 222Rn in addition to 220Rn. The method is particularly useful for low concentration levels. The sensitivity of the method primarily depends on the volume of the SC. For a low volume SC (˜0.16 l), a sensitivity of 0.2 h -1/Bq m -3 for 220Rn and 1.4 h -1/Bq m -3 for 222Rn are readily attainable. For a large volume (1.5 l) SC (external PMT used), the sensitivity for 220Rn is ≥ 1.5 h -1/Bq m -3, depending on the SC design and the operating sampling flowrate. (Note: h -1 stands for counts per hour). The above instrument has been used extensively at the National Radon/Thoron Test Facility (NRTTF) of the Elliot Lake Laboratory for routine monitoring of 220Rn levels since 1992. It has also been used for outdoor and indoor 220Rn measurements, as well as for the determination of J( 220Rn) from earthen materials and the like.
NASA Astrophysics Data System (ADS)
Bourdon, B. P.; Turner, S. P.
2001-12-01
In this study, we have analyzed U-series in lavas from the Azores islands and the nearby Mid-Atlantic Ridge (FAZAR cruise) in an attempt to assess the relative importance of melting processes versus source variations in the context of ridge-hotpsot interaction. The lavas were analyzed for 238U-230Th (Turner et al. 1997, Bourdon et al. 1996) 226Ra-230Th and 235U-231Pa disequilibria by thermal ionisation mass spectrometry. Our results for the historic lavas from the Azores islands show that the 231Pa excess are at the low end of the trend found for other OIB (Pickett et al. 1997 and Bourdon et al. 1998) and fall on a positive correlation in a 231Pa/235U versus 230Th/238U diagram. In contrast, lavas from the nearby Mid-Atlantic ridge are characterized by larger (231Pa/235U) activity ratios for similar and greater (230Th/238U) ratios. There is also a weak correlation between 226Ra/230Th and 231Pa/235U. These data do not indicate a simple mixing trend between an N-MORB and an enriched component in the 231Pa/235U versus 230Th/238U diagram since the MORBs which do not have the most radiogenic isotope signatures compared with the Azores island basalts have some of the largest (230Th/238U) and 231Pa/235U. Clearly, the dynamics of melting must have played a role in generating larger 230Th and 231Pa excesses beneath the Mid-Atlantic ridge. We infer that this must be due to the absence of a lithospheric lid as larger excesses of 230Th and 231Pa can be generated for longer melting columns. Thus, ridge-hotspot interaction cannot imply a simple transfer of melt from the hotspot to the ridge. The 230Th/238U and 226Ra/230Th data across the Azores plateau shows a maximum for the island of Terceira and mimics the depth anomaly which is thought to result from the hotspot. This trend is also consistent with observations of rare gases (M. Moreira pers. comm.) and suggests that it must be related to the presence of deep material. The U-series trend is the reverse of the trend found in Hawaii by Sims et al. (2000) which was attributed to variations in upwelling rates across the rising plume. This observation can be rationalized in the context of an equilibrium melt transport model (Spiegelman and Elliott, 1993) where U-series disequilibria are sensitive to upwelling rates. For slow upwelling rates such as below the Azores, larger 230Th excesses are predicted in the center of the plume. This suggests that the upwelling rate beneath the center of the plume must be of the order of a few cm per year which is an order of magnitude lower than values estimated for Hawaii. Turner et al. 1997, Chem. Geol. 139, 145-164. Bourdon et al. 1996, Earth Planet. Sci. Lett. 142, 175-189. Pickett et al. 1997, Earth Planet. Sci. Lett. 148, 259-271. Sims et al. 1999, Geochim. Cosmochim. Acta. 63, 4119-4138. Spiegelman and Elliot, 1993, Earth Planet. Sci. Lett., 118, 1-20.
NASA Astrophysics Data System (ADS)
Reineman, Benjamin D.
I present the development of instrumentation and methods for the measurement of coastal processes, ocean surface phenomena, and air-sea interaction in two parts. In the first, I discuss the development of a portable scanning lidar (light detection and ranging) system for manned aircraft and demonstrate its functionality for oceanographic and coastal measurements. Measurements of the Southern California coastline and nearshore surface wave fields from seventeen research flights between August 2007 and December 2008 are analyzed and discussed. The October 2007 landslide on Mt. Soledad in La Jolla, California was documented by two of the flights. The topography, lagoon, reef, and surrounding wave field of Lady Elliot Island in Australia's Great Barrier Reef were measured with the airborne scanning lidar system on eight research flights in April 2008. Applications of the system, including coastal topographic surveys, wave measurements, ship wake studies, and coral reef research, are presented and discussed. In the second part, I detail the development of instrumentation packages for small (18 -- 28 kg) unmanned aerial vehicles (UAVs) to measure momentum fluxes and latent, sensible, and radiative heat fluxes in the atmospheric boundary layer (ABL), and the surface topography. Fast-response turbulence, hygrometer, and temperature probes permit turbulent momentum and heat flux measurements, and short- and long-wave radiometers allow the determination of net radiation, surface temperature, and albedo. Careful design and testing of an accurate turbulence probe, as demonstrated in this thesis, are essential for the ability to measure momentum and scalar fluxes. The low altitude required for accurate flux measurements (typically assumed to be 30 m) is below the typical safety limit of manned research aircraft; however, it is now within the capability of small UAV platforms. Flight tests of two instrumented BAE Manta UAVs over land were conducted in January 2011 at McMillan Airfield (Camp Roberts, CA), and flight tests of similarly instrumented Boeing-Insitu ScanEagle UAVs were conducted in April 2012 at the Naval Surface Warfare Center, Dahlgren Division (Dahlgren, VA), where the first known direct flux measurements were made from low-altitude (down to 30 m) UAV flights over water (Potomac River). During the October 2012 Equatorial Mixing Experiment in the central Pacific aboard the R/V Roger Revelle, ship-launched and recovered ScanEagles were deployed in an effort to characterize the marine atmospheric boundary layer structure and dynamics. I present a description of the instrumentation, summarize results from flight tests, present preliminary analysis from UAV flights off of the Revelle, and discuss potential applications of these UAVs for marine atmospheric boundary layer studies.
Simulation of wind wave growth with reference source functions
NASA Astrophysics Data System (ADS)
Badulin, Sergei I.; Zakharov, Vladimir E.; Pushkarev, Andrei N.
2013-04-01
We present results of extensive simulations of wind wave growth with the so-called reference source function in the right-hand side of the Hasselmann equation written as follows First, we use Webb's algorithm [8] for calculating the exact nonlinear transfer function Snl. Second, we consider a family of wind input functions in accordance with recent consideration [9] ( )s S = ?(k)N , ?(k) = ? ? ?- f (?). in k 0 ?0 in (2) Function fin(?) describes dependence on angle ?. Parameters in (2) are tunable and determine magnitude (parameters ?0, ?0) and wave growth rate s [9]. Exponent s plays a key role in this study being responsible for reference scenarios of wave growth: s = 4-3 gives linear growth of wave momentum, s = 2 - linear growth of wave energy and s = 8-3 - constant rate of wave action growth. Note, the values are close to ones of conventional parameterizations of wave growth rates (e.g. s = 1 for [7] and s = 2 for [5]). Dissipation function Sdiss is chosen as one providing the Phillips spectrum E(?) ~ ?5 at high frequency range [3] (parameter ?diss fixes a dissipation scale of wind waves) Sdiss = Cdissμ4w?N (k)θ(? - ?diss) (3) Here frequency-dependent wave steepness μ2w = E(?,?)?5-g2 makes this function to be heavily nonlinear and provides a remarkable property of stationary solutions at high frequencies: the dissipation coefficient Cdiss should keep certain value to provide the observed power-law tails close to the Phillips spectrum E(?) ~ ?-5. Our recent estimates [3] give Cdiss ? 2.0. The Hasselmann equation (1) with the new functions Sin, Sdiss (2,3) has a family of self-similar solutions of the same form as previously studied models [1,3,9] and proposes a solid basis for further theoretical and numerical study of wave evolution under action of all the physical mechanisms: wind input, wave dissipation and nonlinear transfer. Simulations of duration- and fetch-limited wind wave growth have been carried out within the above model setup to check its conformity with theoretical predictions, previous simulations [2,6,9], experimental parameterizations of wave spectra [1,4] and to specify tunable parameters of terms (2,3). These simulations showed realistic spatio-temporal scales of wave evolution and spectral shaping close to conventional parameterizations [e.g. 4]. An additional important feature of the numerical solutions is a saturation of frequency-dependent wave steepness μw in short-frequency range. The work was supported by the Russian government contract No.11.934.31.0035, Russian Foundation for Basic Research grant 11-05-01114-a and ONR grant N00014-10-1-0991. References [1] S. I. Badulin, A. V. Babanin, D. Resio, and V. Zakharov. Weakly turbulent laws of wind-wave growth. J. Fluid Mech., 591:339-378, 2007. [2] S. I. Badulin, A. N. Pushkarev, D. Resio, and V. E. Zakharov. Self-similarity of wind-driven seas. Nonl. Proc. Geophys., 12:891-946, 2005. [3] S. I. Badulin and V. E. Zakharov. New dissipation function for weakly turbulent wind-driven seas. ArXiv e-prints, (1212.0963), December 2012. [4] M. A. Donelan, J. Hamilton, and W. H. Hui. Directional spectra of wind-generated waves. Phil. Trans. Roy. Soc. Lond. A, 315:509-562, 1985. [5] M. A. Donelan and W. J. Pierson-jr. Radar scattering and equilibrium ranges in wind-generated waves with application to scatterometry. J. Geophys. Res., 92(C5):4971-5029, 1987. [6] E. Gagnaire-Renou, M. Benoit, and S. I. Badulin. On weakly turbulent scaling of wind sea in simulations of fetch-limited growth. J. Fluid Mech., 669:178-213, 2011. [7] R. L. Snyder, F. W. Dobson, J. A. Elliot, and R. B. Long. Array measurements of atmospheric pressure fluctuations above surface gravity waves. J. Fluid Mech., 102:1-59, 1981. [8] D. J. Webb. Non-linear transfers between sea waves. Deep Sea Res., 25:279-298, 1978. [9] V. E. Zakharov, D. Resio, and A. N. Pushkarev. New wind input term consistent with experimental, theoretical and numerical considerations. ArXiv e-prints, (1212.1069), December 2012.
Uranium provinces of North America; their definition, distribution, and models
Finch, Warren Irvin
1996-01-01
Uranium resources in North America are principally in unconformity-related, quartz-pebble conglomerate, sandstone, volcanic, and phosphorite types of uranium deposits. Most are concentrated in separate, well-defined metallogenic provinces. Proterozoic quartz-pebble conglomerate and unconformity-related deposits are, respectively, in the Blind River–Elliot Lake (BRELUP) and the Athabasca Basin (ABUP) Uranium Provinces in Canada. Sandstone uranium deposits are of two principal subtypes, tabular and roll-front. Tabular sandstone uranium deposits are mainly in upper Paleozoic and Mesozoic rocks in the Colorado Plateau Uranium Province (CPUP). Roll-front sandstone uranium deposits are in Tertiary rocks of the Rocky Mountain and Intermontane Basins Uranium Province (RMIBUP), and in a narrow belt of Tertiary rocks that form the Gulf Coastal Uranium Province (GCUP) in south Texas and adjacent Mexico. Volcanic uranium deposits are concentrated in the Basin and Range Uranium Province (BRUP) stretching from the McDermitt caldera at the Oregon-Nevada border through the Marysvale district of Utah and Date Creek Basin in Arizona and south into the Sierra de Peña Blanca District, Chihuahua, Mexico. Uraniferous phosphorite occurs in Tertiary sediments in Florida, Georgia, and North and South Carolina and in the Lower Permian Phosphoria Formation in Idaho and adjacent States, but only in Florida has economic recovery been successful. The Florida Phosphorite Uranium Province (FPUP) has yielded large quantities of uranium as a byproduct of the production of phosphoric acid fertilizer. Economically recoverable quantities of copper, gold, molybdenum, nickel, silver, thorium, and vanadium occur with the uranium deposits in some provinces.Many major epochs of uranium mineralization occurred in North America. In the BRELUP, uranium minerals were concentrated in placers during the Early Proterozoic (2,500–2,250 Ma). In the ABUP, the unconformity-related deposits were most likely formed initially by hot saline formational water related to diagenesis (»1,400 to 1,330 Ma) and later reconcentrated by hydrothermal events at »1,280–»1,000, »575, and »225 Ma. Subsequently in North America, only minor uranium mineralization occurred until after continental collision in Permian time (255 Ma). Three principal epochs of uranium mineralization occurred in the CPUP: (1) » 210–200 Ma, shortly after Late Triassic sedimentation; (2) »155–150 Ma, in Late Jurassic time; and (3) » 135 Ma, after sedimentation of the Upper Jurassic Morrison Formation. The most likely source of the uranium was silicic volcaniclastics for the three epochs derived from a volcanic island arc at the west edge of the North American continent. Uranium mineralization occurred during Eocene, Miocene, and Pliocene times in the RMIBUP, GCUP, and BRUP. Volcanic activity took place near the west edge of the continent during and shortly after sedimentation of the host rocks in these three provinces. Some volcanic centers in the Sierra de Peña Blanca district within the BRUP may have provided uranium-rich ash to host rocks in the GCUP.Most of the uranium provinces in North America appear to have a common theme of close associations to volcanic activity related to the development of the western margin of the North American plate. The south and west margin of the Canadian Shield formed the leading edge of the progress of uranium source development and mineralization from the Proterozoic to the present. The development of favorable hosts and sources of uranium is related to various tectonic elements developed over time. Periods of major uranium mineralization in North America were Early Proterozoic, Middle Proterozoic, Late Triassic–Early Jurassic, Early Cretaceous, Oligocene, and Miocene. Tertiary mineralization was the most pervasive, covering most of Western and Southern North America.
Changes in redox properties of humic acids upon sorption to alumina
NASA Astrophysics Data System (ADS)
Subdiaga, Edisson; Orsetti, Silvia; Jindal, Sharmishta; Haderlein, Stefan B.
2016-04-01
1. Introduction A prominent role of Natural Organic Matter (NOM) in biogeochemical processes is its ability to act as an electron shuttle, accelerating rates between a bulk electron donor and an acceptor. The underlying processes are reversible redox reactions of quinone moieties.1 This shuttling effect has been studied in two major areas: transformation of redox active pollutants and microbial respiration.2-3 Previous studies primarily compared effects in the presence or absence of NOM without addressing the redox properties of NOM nor its speciation. The interaction between humic acids (HA) and minerals might change properties and reactivity of organic matter. Specifically, we investigate whether changes in the redox properties of a HA occur upon sorption to redox inactive minerals. Since fractionation and conformational rearrangements of NOM moieties upon sorption are likely to happen, the redox properties of the NOM fractions upon sorption might differ as well. 2. Materials and methods Elliot Soil Humic Acid (ESHA), Pahokee Peat Humic Acid (PPHA) and Suwannee River Humic Acid (SRHA) were used as received from IHSS. Aluminum oxide (Al2O3) was suspended in 0.1M KCl. Sorption was studied at pH 7.0 in duplicate batch experiments for several HA/Al2O3 ratios. For the suspension (mineral + sorbed HA, plus dissolved HA), the filtrate (0.45μm) and the HA stock solution, the electron donating and accepting capacities (EDC and EAC) were determined following established procedures.4 3. Results All studied HA-Al2O3 systems showed similar behavior with regard to changes in redox properties. There was a significant increase in the EDC of the whole suspension compared to the stock solutions and the non-sorbed HA in the filtrate (up to 300% for PPHA). This effect was more pronounced with increasing amounts of sorbed HA in the suspension. Although ESHA had the highest sorption capacity on Al2O3 (~ 6 times higher than PPHA & SRHA), it showed the smallest changes in redox properties upon sorption. Considering the total electron exchange capacities, significant changes were found mainly at higher amounts of sorbed PPHA and SRHA. 4. Conclusions Overall, our results suggest a change in the redox properties of sorbed HA but not for the dissolved fraction. The sorbed fraction showed a higher redox capacity than the stock samples. Given the absence of redox transfer between the HA and the redox inert aluminum oxide, such changes might be due to conformational changes in the humic substances. 5. References [1] Scott D., Mcknight, D., Blunt-Harris, E., Kolesar, S., Lovley, A. Environ. Sci. Technol. 1998, 32, 2984-2989. [2] Dunnivant, F. Schwarzenbach, R., Macalady, D. Environ. Sci. Technol. 1992, 26(11), 2133-2141. [3] Jiang, J. & Kappler, A. Environ. Sci. Technol. 2008, 42(10), 3562-3569. [4] Aeschbacher, M., Sander M., Schwarzenbach, R. Environ. Sci. Technol. 2010, 44(1), 87-93.
de Lusignan, Simon; Shinneman, Stacy; Yonova, Ivelina; van Vlymen, Jeremy; Elliot, Alex J; Bolton, Frederick; Smith, Gillian E; O'Brien, Sarah
2017-09-28
Infectious intestinal disease (IID) has considerable health impact; there are 2 billion cases worldwide resulting in 1 million deaths and 78.7 million disability-adjusted life years lost. Reported IID incidence rates vary and this is partly because terms such as "diarrheal disease" and "acute infectious gastroenteritis" are used interchangeably. Ontologies provide a method of transparently comparing case definitions and disease incidence rates. This study sought to show how differences in case definition in part account for variation in incidence estimates for IID and how an ontological approach provides greater transparency to IID case finding. We compared three IID case definitions: (1) Royal College of General Practitioners Research and Surveillance Centre (RCGP RSC) definition based on mapping to the Ninth International Classification of Disease (ICD-9), (2) newer ICD-10 definition, and (3) ontological case definition. We calculated incidence rates and examined the contribution of four supporting concepts related to IID: symptoms, investigations, process of care (eg, notification to public health authorities), and therapies. We created a formal ontology using ontology Web language. The ontological approach identified 5712 more cases of IID than the ICD-10 definition and 4482 more than the RCGP RSC definition from an initial cohort of 1,120,490. Weekly incidence using the ontological definition was 17.93/100,000 (95% CI 15.63-20.41), whereas for the ICD-10 definition the rate was 8.13/100,000 (95% CI 6.70-9.87), and for the RSC definition the rate was 10.24/100,000 (95% CI 8.55-12.12). Codes from the four supporting concepts were generally consistent across our three IID case definitions: 37.38% (3905/10,448) (95% CI 36.16-38.5) for the ontological definition, 38.33% (2287/5966) (95% CI 36.79-39.93) for the RSC definition, and 40.82% (1933/4736) (95% CI 39.03-42.66) for the ICD-10 definition. The proportion of laboratory results associated with a positive test result was 19.68% (546/2775). The standard RCGP RSC definition of IID, and its mapping to ICD-10, underestimates disease incidence. The ontological approach identified a larger proportion of new IID cases; the ontology divides contributory elements and enables transparency and comparison of rates. Results illustrate how improved diagnostic coding of IID combined with an ontological approach to case definition would provide a clearer picture of IID in the community, better inform GPs and public health services about circulating disease, and empower them to respond. We need to improve the Pathology Bounded Code List (PBCL) currently used by laboratories to electronically report results. Given advances in stool microbiology testing with a move to nonculture, PCR-based methods, the way microbiology results are reported and coded via PBCL needs to be reviewed and modernized. ©Simon de Lusignan, Stacy Shinneman, Ivelina Yonova, Jeremy van Vlymen, Alex J Elliot, Frederick Bolton, Gillian E Smith, Sarah O'Brien. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 28.09.2017.
Dutkiewicz, Jacek; Sroka, Jacek; Zając, Violetta; Wasiński, Bernard; Cisak, Ewa; Sawczyn, Anna; Kloc, Anna; Wójcik-Fatla, Angelina
2017-12-23
Streptococcus suis (ex Elliot 1966, Kilpper-Bälz & Schleifer 1987) is a facultatively anaerobic Gram-positive ovoid or coccal bacterium surrounded by a polysaccharide capsule. Based on the antigenic diversity of the capsule, S. suis strains are classified serologically into 35 serotypes. Streptococcus suis is a commensal of pigs, commonly colonizing their tonsils and nasal cavities, mostly in weaning piglets between 4-10 weeks of age. This species occurs also in cattle and other mammals, in birds and in humans. Some strains, mostly those belonging to serotype 2, are also pathogenic for pigs, as well as for other animals and humans. Meningitis is the primary disease syndrome caused by S. suis, both in pigs and in humans. It is estimated that meningitis accounted for 68.0% of all cases of human disease reported until the end of 2012, followed by septicaemia (including life-threatening condition described as 'streptococcal toxic shock-like syndrome' - STSLS), arthritis, endocarditis, and endophthalmitis. Hearing loss and/or ves tibular dysfunction are the most common sequelae after recovery from meningitis caused by S. suis, occurring in more than 50% of patients. In the last two decades, the number of reported human cases due to S. suis has dramatically increased, mostly due to epidemics recorded in China in 1998 and 2005, and the fulminant increase in morbidity in the countries of south-eastern Asia, mostly Vietnam and Thailand. Out of 1,642 cases of S. suis infections identified between 2002-2013 worldwide in humans, 90.2% occurred in Asia, 8.5% in Europe and 1.3% in other parts of the globe. The human disease has mostly a zoonotic and occupational origin and occurs in pig breeders, abattoir workers, butchers and workers of meat processing facilities, veterinarians and meat inspectors. Bacteria are transmitted to workers by close contact with pigs or pig products, usually through contamination of minor cuts or abrasions on skin of hands and/or arms, or by pig bite. A different epidemiologic situation occurs in the Southeast Asian countries where most people become infected by habitual consumption of raw or undercooked pork, blood and offal products in the form of traditional dishes. Prevention of S. suis infections in pigs includes vaccination, improvement in pig-raising conditions, disinfection and/or fumigation of animal houses, and isolation of sick animals at the outbreak of disease. Prevention of human infections comprises: protection of skin from pig bite or injury with sharp tools by people occupationally exposed to pigs and pig products, prompt disinfection and dressing of wounds and abrasions at work, protection of the respiratory tract by wearing appropriate masks or repirators, consulting a doctor in the case of febrile illness after exposure to pigs or pork meat, avoidance of occupations associated with exposure to pigs and pork by immunocompomised people, avoidance of consumption of raw pork or pig blood, adequate cooking of pork, and health education.
A review of stratigraphy and sedimentary environments of the Karoo Basin of South Africa
NASA Astrophysics Data System (ADS)
Smith, R. M. H.
The Karoo Supergroup covers almost two thirds of the present land surface of southern Africa. Its strata record an almost continuous sequence of continental sedimentation that began in the Permo-Carboniferous (280 Ma) and terminated in the early Jurassic 100 million years later. The glacio-marine to terrestrial sequence accumulated in a variety of tectonically controlled depositories under progressively more arid climatic conditions. Numerous vertebrate fossils are preserved in these rocks, including fish, amphibians, primitive aquatic reptiles, primitive land reptiles, more advanced mammal-like reptiles, dinosaurs and even the earliest mammals. Palaeoenvironmental analysis of the major stratigraphic units of the Karoo sequence demonstrates the effects of more localised tectonic basins in influencing depositional style. These are superimposed on a basinwide trend of progressive aridification attributed to the gradual northward migration of southwestern Gondwanaland out of polar climes and accentuated by the meteoric drying effect of the surrounding land masses. Combined with progressive climatic drying was a gradual shrinking of the basin brought about by the northward migration of the subducting palaeo-Pacific margin to the south. Following deposition of the Cape Supergroup in the pre-Karoo basin there was a period of uplift and erosion. At the same time the southern part of Gondwana migrated over the South Pole resulting in a major ice-sheet over the early Karoo basin and surrounding highlands. Glacial sedimentation in both upland valley and shelf depositories resulted in the basal Karoo Dwyka Formation. After glaciation, an extensive shallow sea remained over the gently subsiding shelf fed by large volumes of meltwater. Black clays and muds accumulated under relatively cool climatic conditions (Lower Ecca) with perhaps a warmer "interglacial" during which the distinctive Mesosaurus-bearing, carbonaceous shales of the Whitehill Formation were deposited. Deformation of the southern rim of the basin, caused by the subducting palaeo-Pacific plate, resulted in mountain ranges far to the south. Material derived from this source, as well as granitic uplands to the west and morth-east, was deposited on large deltas that built out into the Ecca sea (Upper Ecca). The relatively cool climate and lowland setting promoted thick accumulations of peat on the coastal and delta plains and which now constitute the major coal reserves of southern Africa. Later the prograding deltas coalesced to fill most of the basin after which fluvial sedimentation of the Beaufort Group dominated. The climate by this time (Late Permian) had warmed to become semi-arid with highly seasonal rainfall. The central parts of the basin were for the most part drained by fine-grained meanderbelts and semi-permanent lakes. Significant stratabound uranium reserves have been delimited in the channel sandstones of the Beaufort Group in the southwestern parts of the basin. Pulses of uplift in the southern source areas combined with a possible orogenic effect resulted in two coarser-grained alluvial fans prograding into the more central parts of the basin (Katberg Sandstone Member and Molteno Formation). In the upper Karoo sequence progressive aridification dominated depositional style with playa lake and wadi-type environments (Elliot Formation) that finally gave way to a dune sand dominated system (Clarens Formation). Basinwide volcanic activity of the early Jurassic Drakensberg Group brought deposition in the Karoo Basin to a close.
NASA Astrophysics Data System (ADS)
Smith, R. M. H.; Eriksson, P. G.; Botha, W. J.
1993-02-01
The Karoo Basin of South Africa was one of several contemporaneous intracratonic basins in southwestern Gondwana that became active in the Permo-Carboniferous (280 Ma) and continued to accumulate sediments until the earliest Jurassic, 100 million years later. At their maximum areal extent, during the early Permian, these basins covered some 4.5 million km 2. The present outcrop area of Karoo rocks in southern Africa is about 300 000 km 2 with a maximum thickness of some 8000 m. The economic importance of these sediments lies in the vast reserves of coal within the Ecca Group rocks of northern and eastern Transvaal and Natal, South Africa. Large reserves of sandstone-hosted uranium and molybdenum have been proven within the Beaufort Group rocks of the southern Karoo trough, although they are not mineable in the present market conditions. Palaeoenvironmental analysis of the major stratigraphic units of the Karoo succession in South Africa demonstrates the changes in depositional style caused by regional and localized tectonism within the basin. These depocentres were influenced by a progressive aridification of climate which was primarily caused by the northward drift of southwestern Gondwana out of a polar climate and accentuated by the meteoric drying effect of the surrounding land masses. Changing palaeoenvironments clearly influenced the rate and direction of vertebrate evolution in southern Gondwana as evidenced by the numerous reptile fossils, including dinosaurs, which are found in the Karoo strata of South Africa, Lesotho, Namibia and Zimbabwe. During the Late Carboniferous the southern part of Gondwana migrated over the South Pole resulting in a major ice sheet over the early Karoo basin and surrounding highlands. Glacial sedimentation in upland valleys and on the lowland shelf resulted in the Dwyka Formation at the base of the Karoo Sequence. After glaciation, an extensive shallow sea covered the gently subsiding shelf, fed by large volumes of meltwater. Marine clays and muds accumulated under cool climatic conditions (Lower Ecca Group) including the distinctive Mesosaurus-bearing carbonaceous shales of the Whitehill Formation. Subduction of the palaeo-Pacific plate reslted in an extensive chain of mountains which deformed and later truncated the southern rim of the main Karoo Basin. Material derived from these "Gondwanide" mountains as well as from the granitic uplands to the north-east, accumulated in large deltas that prograded into the Ecca sea (Upper Ecca Group). The relatively cool and humid climate promoted thick accumulations of peat on the fluvial and delta plains which now constitute the major coal reserves of southern Africa. As the prograding deltas coalesced, fluvio-lacustrine sediments of the Beaufort Group were laid down on broad gently subsiding alluvial plains. The climate by this time (Late Permian) had warmed to become semi-arid with highly seasonal rainfall. Vegetation alongside the meander belts and semi-permanent lakes supported a diverse reptilian fauna dominated by therapsids or "mammal-like reptiles". Pulses of uplift in the southern source areas combined with possible orographic effects resulted in the progadation of two coarse-grained alluvial fans into the central parts of the basin (Katberg Sandstone Member and Molteno Formation). In the upper Karoo Sequence, progressive aridification and tectonic deformation of the basin through the late Triassic and early Jurassic led to the accumulation, in four separate depositories, of "redbeds" which are interpreted as fluvial and flood-fan, playa and dune complexes (Elliot Formation). This eventually gave way to westerly wind-dominated sedimentation that choked the remaining depositories with fine-grained dune sand. The interdune areas were damp and occasionally flooded and provided a habitat for small dinosaurs and the earliest mammals. During this time (Early Jurassic), basinwide volcanic activity began as a precursor to the break-up of Gondwana in the late Jurassic and continued until the early Cretaceous. This extrusion of extensive flood basalts (Drakensberg Group) onto the Clarens landscape eventually brought Karoo sedimentation to a close.
EDITORIAL Proceedings of the XIV International Conference on Small-Angle Scattering, SAS-2009
NASA Astrophysics Data System (ADS)
Ungar, Goran; Heenan, Richard
2010-10-01
There are 52 papers in these Proceedings. The papers are divided into 10 thematic sections and a section for invited papers and reviews. The sections and the respective section editors are given below. Section Editor(s) Invited Papers and Reviews Peter Griffiths, Wim Bras, Rudolf Winter Beamlines and Instrumentation Elliot Gilbert, Wim Bras, Nigel Rhodes Theory, Data processing and Modelling Jan Skov Pedersen, Carlo Knupp Biological Systems and Membranes Richard Heenan, Cameron Neylon Ceramics, Glasses and Porous Materials Rudolf Winter Colloids and Solutions Peter Griffiths Hierarchical Structures and Fibres Steve Eichhorn, Karen Edler Metallic and Magnetic Systems Armin Hoell Polymers Patrick Fairclough Time resolved Diffraction, Kinetic and Dynamical Studies João Cabral, Christoph Rau We are grateful to all section editors and the many anonymous referees for their invaluable effort which made the publication of the Proceedings possible. The refereeing process was strict and thorough, some papers were rejected and most were improved. The resulting compendium gives a good overview of recent developments in small-angle X-ray and neutron scattering theory, application, methods of analysis and instrumentation. Thus it should be a useful source of reference for a number of years to come. The papers are a good reflection of the material presented at the meeting. Because of the general high quality of the articles, it was difficult to decide which to highlight and be fair to all contributors. The following in particular have caught the attention of the editors. Highlighted papers A statistical survey of publications reporting the application of SAXS and SANS by Aldo Craievich (paper 012003) is recommended reading for anyone needing convincing about the vibrancy of this scientific field and the ever expanding use of these techniques. Two aspects of coherent X-ray scattering, made available by the advent of the 3rd generation synchrotron sources, are discussed in the papers by Felisa Berenguer and the Diamond/UCL team (012004) and by Birgit Fischer and the DESY/Rostock team (012026). The former describe their effort to reconstruct the image of wet collagen tissue from the speckled pattern of a narrow transmitted beam and complement the results of imaging by scanning SAXS. Fischer et al. utilize the coherence of the X-ray beam in the photon correlation spectroscopy mode to determine the dynamic structure factor, from which the q-dependence of the relaxation times was determined for solutions of charged colloidal particles. The nature of particle motion was thus determined, i.e. ballistic at lower concentrations vs. restricted by 'caging' at higher concentrations. The use of scanning microbeam diffraction was also described in the paper by the group of Peter Fratzl (012031), who combined it with tilting the sample in order to obtain what is referred to as 3D SAXS. In this way the orientation distribution of crystallites in callus tissue during bone fracture healing can be explored. Naoto Yagi et al. (012024) used microbeam SAXS and WAXS to study dental enamel crystals in a caries lesion. Anomalous, or resonant, X-ray scattering at low angles (ASAXS) was employed by Guenter Goerigk and Norbert Mattern (Jülich/Dresden) (012022) to study phase separation in the ternary alloy Ni-Nb-Y. Experiments were performed near the K-absorption edges of the three elements and the authors describe how they determined quantitatively the chemical composition of the different separate phases as well as short-range concentration fluctuations during spinodal decomposition. For organic materials, where heavy atoms are not present, typically sulphur would be replaced by Se, so ASAXS could be performed near the Se K-edge. However, Masashi Handa et al. from the University of Tokyo report in paper 012006 a feasibility study of using sulphur itself as the label. They deal with the challenging task of using low-energy X-rays at the sulphur K-edge. Alexandra Vasilieva and teams from St Petersburg and Moscow describe in paper 012029 a study of a FCC/HCP photonic crystal with a unit cell of 650 nm using the DUBBLE beamline at ESRF, where the incident beam is focused with a compound Berilium lens to give divergence in the microradian range. Defects, correlation lengths, mosaic spread and twinning could be studied in a thin film. The use of grazing incidence small- and intermediate-angle X-ray scattering (GISAXS) was illustrated on several examples of complex liquid crystal phases and nanoparticle systems by Xiangbing Zeng and the team from Sheffield in paper 012032. Apart from the intrinsic interest in the structure of thin films of these materials, the close proximity of the surface imposes usually a very high degree of preferred orientation, facilitating structural analysis. From the papers describing the use of neutron scattering, we highlight the study by Léon van Heijkamp and the Delft group, paper 012016, which describes proof-of-principle experiments using spin-echo SANS (SESANS) for the study of D2O-labeled liposomes and their destruction with a view of providing a nondestructive technique for monitoring targeted drug delivery for the destruction of tumor tissue. This technique is based on monitoring the decay of spin polarization of scattered neutrons, and it can probe correlation lengths from nanometres to tens of microns. In paper 012002 Thomas Zemb discusses the various ways in which SAXS and SANS data can be combined to investigate microemulsions of surfactants lacking long-range order. Thus, for example, it is described how form and structure factors can be separated, or how the degree of connectivity can be determined between rod-like entities forming 'molten cubic' or 'sponge' phases. In the Theory, Data Processing and Modelling section we highlight two papers. Paper 012012, by Carlos Cabrillo and co-workers from Madrid, presents a real-space model for powder samples of relatively highly ordered colloidal particles based on the radial pair distance distribution function. The model takes account of different types of disorder, i.e. of both 1st ('thermal') and 2nd ('paracrystalline') kind, as well as finite size, stacking and orientational disorder. In paper 012014 Salvino Ciccariello works out the 3D correlation function of plane objects, specifically a triangle of a general shape. This paves the way to analysing morphologies that can be approximated by cylinders of different cross-sections. Papers 012051 (Polte et al.) and 012047 (Bras et al.) report the use of SAS in combination with X-ray spectroscopy. Ristic et al. describe a real-time study of crystal nucleation induced by ultrasound (012049), while Marianne Imperor-Clerc et al. present an investigation of lyotropic liquid crystals under shear (012052). In the Polymers section Geoff Mitchell describes modelling of SANS data on electrospun fibres (012042), while Cabral et al. describes a study of nanoparticle aggregation in bulk and thin polymer films (012046). Work on phospholipid membranes was reported by Spinozzi et al. and by Onai et al. in papers 012019 and 012018, respectively. The latter deals with the effect of osmotic pressure on model 'lipid rafts', these being considered as having an important role in the function of the mammalian cell membrane. Readers interested in biological systems will find a number of other interesting papers describing the use of either SAXS or SANS in the Biological Systems and Membranes section. A number of descriptions of recent designs of SAXS (synchrotron) and SANS beamlines, as well as detector developments can be found in the section on Beamlines and Instrumentation. Goran Ungar, Editor-in-Chief Richard Heenan, Deputy Editor-in-Chief
Double Planet Meets Triple Star
NASA Astrophysics Data System (ADS)
2002-08-01
High-Resolution VLT Image of Pluto Event on July 20, 2002 A rare celestial phenomenon involving the distant planet Pluto has occurred twice within the past month. Seen from the Earth, this planet moved in front of two different stars on July 20 and August 21, respectively, providing observers at various observatories in South America and in the Pacific area with a long awaited and most welcome opportunity to learn more about the tenuous atmosphere of that cold planet. On the first date, a series of very sharp images of a small sky field with Pluto and the star was obtained with the NAOS-CONICA (NACO) adaptive optics (AO) camera mounted on the ESO VLT 8.2-m YEPUN telescope at the Paranal Observatory. With a diameter of about 2300 km, Pluto is about six times smaller than the Earth. Like our own planet, it possesses a relatively large moon, Charon , measuring 1200 km across and circling Pluto at a distance of about 19,600 km once every 6.4 days. In fact, because of the similarity of the two bodies, the Pluto-Charon system is often referred to as a double planet . At the current distance of nearly 4,500 million km from the Earth, Pluto's disk subtends a very small angle in the sky, 0.107 arcsec. It is therefore very seldom that Pluto - during its orbital motion - passes exactly in front of a comparatively bright star. Such events are known as "occultations" , and it is difficult to predict exactly when and where on the Earth's surface they are visible. Stellar occultations When Pluto moves in front of a star, it casts a "shadow" on the Earth's surface within which an observer cannot see the star, much like the Earth's Moon hides the Sun during a total solar eclipse. During the occultation event, Pluto's "shadow" also moves across the Earth's surface. The width of this shadow is equal to Pluto's diameter, i.e. about 2300 km. One such occultation event was observed in 1988, and two others were expected to occur in 2002, according to predictions published in 2000 by American astronomers Steve W. McDonald and James L. Elliot (Massachussetts Institute of Technology [MIT], Cambridge, USA). Further refinements provided by other observers later showed that the first event would be visible from South America on July 20, 2002 , while a second one on August 21 was expected to be observable in the Pacific basin, from the western coast of North America down to Hawaii and New Zealand. A stellar occultation provides a useful opportunity to study the planetary atmosphere, by means of accurate photometric measurements of the dimming of the stellar light, as the planet moves in front of the star. The observed variation of the light intensity and colour provides crucial information about the structure (atmospheric layers) and composition of different gases and aerosols. More information is available in the Appendix below. The July 20 occultation ESO PR Photo 21a/02 ESO PR Photo 21a/02 [Preview - JPEG: 400 x 477 pix - 65k] [Normal - JPEG: 800 x 953 pix - 224k] Caption : PR Photo 21c/02 shows the path of Pluto's shadow (grey region) during the July 20, 2002 occultation. The shadow has a diameter of about 2300 km and moves from right to left; the timings along the central line are indicated in one-minute intervals (Universal Time - UT). The width of the gray area corresponds to the regions where more than 50% of the light from the star P126 A was attenuated by Pluto's atmosphere. The dotted lines indicate where the stellar flux was attenuated by more than 10%. The maximum duration of the occultation (for observers located at the middle of the shadow track) was about 3 min. The plot is based on astrometric measurements posted at the MIT site. Once completely analyzed, the VLT NACO images will provide significantly better accuracy on the location of this track and therefore a solid basis for the interpretation of the photometric observations obtained with other telescopes. In order to profit from the rare opportunity to learn more about Pluto and its atmosphere, a large campaign involving more than twenty scientists and engineers from the Paris Observatory and associated institutions [1] was organized to observe the July 20, 2002, event involving an occultation of a star of visual magnitude 11 (i.e., about 100 times fainter than what can be perceived with then unaided eye), referred to as "P126" in McDonald and Elliot's catalogue. In May 2002, preparatory observations showed that star to be double, with the brighter component of the system ( "P126 A" ) being likely to be occulted by Pluto, as seen from South America. However, because of the duplicity, the predictions of exactly where the shadow of Pluto would sweep the ground were uncertain by about 0.1 arcsec in the sky, corresponding to more than 2000 km on the ground. The NACO images ESO PR Photo 21b/02 ESO PR Photo 21b/02 [Preview - JPEG: 400 x 469 pix - 47k] [Normal - JPEG: 800 x 937 pix - 208k] ESO PR Photo 21c/02 ESO PR Photo 21c/02 [Preview - JPEG: 400 x 467 pix - 53k] [Normal - JPEG: 800 x 933 pix - 232k] Caption : PR Photo 21b/02 shows one of the images obtained with the NAOS-CONICA (NACO) adaptive optics (AO) camera mounted on the ESO VLT 8.2-m YEPUN telescope at the Paranal Observatory in connection with a stellar occultation by Pluto on July 20, 2002. The star was found to be triple - the three components (A, B and C), as well as Pluto and its moon, Charon, are indicated in PR Photo 21c/02 for easy orientation. The images are based on data available from the NACO data webpage. See the text for details. In the end, the close approach (an "appulse" in astronomical terminology) of Pluto and P126 A was indeed observed from various sites in South America, with several mobile telescopes and also including major facilities at the ESO La Silla and Paranal Observatories. In particular, unique and very sharp images were obtained with the NAOS-CONICA (NACO) adaptive optics (AO) camera mounted on the ESO VLT 8.2-m YEPUN telescope . One of the NACO images is shown in PR Photos 21b-c/02 . These images were made during the final adjustments of the NACO instrument and in anticipation of the upcoming science verification observations. All frames are now publicly available from the NACO data webpage on the ESO site. The NACO image shown was obtained in infrared light (in the K-band at wavelength 2.2 µm) on July 20, 2002, some 45 min before Pluto's shadow passed north of Paranal ( Photo 21a/02) . The orientation is such that North is up, and East is left. The small sky field measures about 7 x 7 arcsec 2. The pixel size is 0.027 arcsec, and the achieved image sharpness corresponds to the theoretical limit (the diffraction limit) with a telescope of this size and at this wavelength (0.07 arcsec). The object at the centre is the star P126 A of K-magnitude 9.5 (see also Photo 21c/02 where the objects are identified), while the brighter object at the right is the companion star P126 B , 2.25 arcsec away. As P126 B is very red (of stellar type M), it appears brighter than P126 A at this infrared wavelength - the opposite is true in visible light. The intensity of the left part of the image has been multiplied by a factor of approximately 35 in order to better display Pluto and its moon Charon , located some 0.5 arcsec to the lower left (SE) of the planet. Note also the faint star "P126 C" , at this moment very close to Pluto; it is probably a (physical) member of the P126 system. A closer inspection of the original image shows that the disk of Pluto (with a diameter of 0.107 arscec and covering 16 NACO pixels) is (barely) resolved. Many images were taken by NACO prior to the occultation. They will allow to retrace the precise motion of Pluto relative to P126 A, thereby improving the mapping of the motion of Pluto's shadow on the ground, cf. Photo 21a/02 . This is important in order to apply the correct geometrical circumstances for the interpretation of the photometric observations. A first evaluation of the NACO data indicates that the Paranal site "missed" the upper layers of Pluto's atmosphere by a mere 200 km or so - this is equivalent to no more than one hundredth of an arcsec as projected on the sky. More information A full report on the NACO observations and other results by the present group of astronomers, also from the subsequent occultation of another star on August 21, 2002, that was extensively observed with the Canada-France-Hawaii Telescope (CFHT) on Mauna Kea (Hawaii, USA), is available at this URL: http://despa.obspm.fr/~sicardy/pluton/results.html Other sharp NACO images have been published recently, e.g. ESO PR 25/01 , ESO PR Photos 04a-c/02 and ESO PR Photos 19a-c/02. Note [1]: The group from the Observatoire de Paris and other observatories is lead by Bruno Sicardy and also includes François Colas, Thomas Widemann, Françoise Roques, Christian Veillet, Jean-Charles Cuillandre, Wolfgang Beisker, Cyril Birnbaum, Kate Brooks, Audrey Delsanti, Pierre Drossart, Agnès Fienga, Eric Gendron, Mike Kretlow, Anne-Marie Lagrange, Jean Lecacheux, Emmanuel Lellouch, Cédric Leyrat, Alain Maury, Elisabeth Raynaud, Michel Rapaport, Stefan Renner and Mathias Schultheis . From ESO participated Nancy Ageorges, Olivier Hainaut, Chris Lidman and Jason Spyromilio . Contact Bruno Sicardy LESIA - Observatoire de Paris France Phone: +33-1-45 07 71 15 email: bruno.sicardy@obspm.fr Appendix: Stellar occultations and Pluto's atmosphere Stellar occultations are presently the only way to probe Pluto's tenuous atmosphere . When the star moves behind the planet, the stellar rays suffer minute deviations as they are refracted (i.e., bent and defocussed) by the planet's atmospheric layers. This effect, together with the large distance to the planet, manifests itself as a gradual decline of observed intensity of the stellar light, rather than an abrupt drop as this would be the case if the planet had no atmosphere. Pluto's atmosphere was first detected on August 19, 1985, during a stellar occultation observed from Israel and then studied in more detail from Australia and from the Kuiper Airborne Observatory (KAO) during another such event on July 9, 1988. However, Pluto's atmosphere is still not well understood. It appears to be mostly composed of a dominant gas of atomic weight 28, probably molecular nitrogen (N 2 ). Near-IR solar reflection spectra have since shown a small presence of methane (CH 4 ), probably at a level of about 1% relative to nitrogen. The 1988 occultation clearly revealed two different layers in Pluto's atmosphere, a rather smooth and isothermal outer part, and a more complex one near the planet's surface, with the possible presence of an inversion layer (in which the temperature increases with altitude) or possibly haze of photochemical origin. The present observations aimed at discriminating between the current theoretical models of Pluto's atmosphere by means of detailed measurements of the changing intensity and colour of the stellar light, as the star is seen through progressively lower layers of the planet's atmosphere. Another important issue is the question of whether Pluto's atmosphere has changed since 1988. In the intervening 14 years, the planet has moved away from the Sun in its elliptic orbit, whereby there has been a change in the insolation (solar flux) of about 6%. This effect might possibly have caused changes in the surface temperature and in the overall atmospheric structure of Pluto. However, any firm conclusions will have to await a complete and careful evaluation of all available observations. ESO PR Photos 21a-c/02 may be reproduced, if credit is given to the European Southern Observatory (ESO).
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiatio Environment; 2) Effects of Space Environments on Systems; 3) Space Radiatio Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developmemts: Solar Proton Model; 18) Comparison of ESP, JPL91, KIng/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of model of near infrared spectra qualitative analysis were studied. Method of separate modeling can significantly improve the stability and adaptability of model; but its ability of improving adaptability of model is limited. Method of joint modeling can not only improve the adaptability of the model, but also the stability of model, at the same time, compared to separate modeling, the method can shorten the modeling time, reduce the modeling workload; extend the term of validity of model, and improve the modeling efficiency. The experiment of model adaptability shows that, the correct recognition rate of separate modeling method is relatively low, which can not meet the requirements of application, and joint modeling method can reach the correct recognition rate of 90%, and significantly enhances the recognition effect. The experiment of model stability shows that, the identification results of model by joint modeling are better than the model by separate modeling, and has good application value.
1992-12-01
suspect :mat, -n2 extent predict:.on cas jas ccsiziveiv crrei:=e amonc e v:arious models, :he fandom *.;aik, learn ha r ur e, i;<ea- variable and Bemis...Functions, Production Rate Adjustment Model, Learning Curve Model. Random Walk Model. Bemis Model. Evaluating Model Bias, Cost Prediction Bias. Cost...of four cost progress models--a random walk model, the tradiuonai learning curve model, a production rate model Ifixed-variable model). and a model
Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.
NASA Technical Reports Server (NTRS)
Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.
1995-01-01
This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.
ERIC Educational Resources Information Center
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects
ERIC Educational Resources Information Center
Easley, J. A., Jr.
1977-01-01
The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper mainly studies the multi-person parallel modeling method based on the integrated model persistence storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. And multi-person parallel modeling refers to the need for multi-person collaboration, the role of separation, and even real-time remote synchronization modeling.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particularly understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
ERIC Educational Resources Information Center
Thelen, Mark H.; And Others
1977-01-01
Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide as state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These different models vary in complexity with software structure. Also, we will describe how these complexities were overcome through using this approach and results of model benchmarks within LIS.
Literature review of models on tire-pavement interaction noise
NASA Astrophysics Data System (ADS)
Li, Tan; Burdisso, Ricardo; Sandu, Corina
2018-04-01
Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniquesmore » affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.« less
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper Illustrating a Model- Game -Model Paradigm for Using Human Wargames in Analysis Paul K. Davis RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model- game -model or what might be better called model-exercise-model in some cases) for...to involve stakehold- ers in model development from the outset. The model- game -model paradigm was illustrated in an application to crisis planning
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the top most levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of database solution. For this reason, value oriented business process modeling which utilizes ER diagram to express the modeling entities and relationships between them are used. However, ER diagrams form the logical level of database schema. To extend possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER-diagrams, and derives conceptual model utilizing ORM modeling approach. Conceptual model extends possibilities for value modeling to other business modeling approaches.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Drager, Andreas; ...
2015-10-17
In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scalemore » metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.« less
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions the language includes also formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either the historical observations, e.g. the persistence model, or the instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply the least-square regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model to propose a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the accuracy of the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this linkmore » makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is a an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.« less
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict themore » reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.« less
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Airborne Wireless Communication Modeling and Analysis with MATLAB
2014-03-27
research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to...predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...7 2.7. Propagation Modeling : Statistical Models ............................................................8 2.8. Antenna Modeling
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.
Jenness, Samuel M; Goodreau, Steven M; Morris, Martina
2018-04-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel , designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel , designed to facilitate the exploration of novel research questions for advanced modelers.
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks
Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina
2018-01-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, the Sparlart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon models are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-w/k-epsilon and algorithmic improvements to reduce CPU time of the two-equation model.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
NASA Astrophysics Data System (ADS)
Justi, Rosária S.; Gilbert, John K.
2002-04-01
In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.
Aspinall, Richard
2004-08-01
This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
2006-03-01
models, the thesis applies a biological model, the Lotka - Volterra predator- prey model, to a highly suggestive case study, that of the Irish Republican...Model, Irish Republican Army, Sinn Féin, Lotka - Volterra Predator Prey Model, Recruitment, British Army 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka - Volterra predator-prey model, to a highly suggestive
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared wery well with the experimental data, and performed better than the Thomas model near the walls.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of the dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT and 3-D models were formed after the calculation and reconstruction of software. Then, computerized composite models (RP models) were produced by rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with SPSS17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences(P<0.05) among the 3 models, but the dental arch widths were on the contrary. Measurements on 3-D models were significantly smaller than those on other two models(P<0.05). Compared with 3-D models, RP models had more numbers which were not significantly different from those on plaster models(P>0.05). The regression coefficient among three models were significantly different(P<0.01), ranging from 0.8 to 0.9. But between RP and plaster models was bigger than that between 3-D and plaster models. There is high consistency within 3 models, while some differences were accepted in clinic. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA) a new model construction environment. As the user constructs a model using SIGMA the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a. demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Evolution of computational models in BioModels Database and the Physiome Model Repository.
Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
2018-04-12
A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for a distributed hydrological model in the application. There are some challenges including the model's spatial resolution effect, model performance and accuracy and so on. To cope with the challenge of the model's spatial resolution effect, different model resolution including 1000m*1000m, 600m*600m, 500m*500m, 400m*400m, 200m*200m were used to build the distributed hydrological model—Liuxihe model respectively. The purpose is to find which one is the best resolution for Liuxihe model in Large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data digital elevation model (DEM), soil type and land use type are downloaded from the website freely. The model parameters are optimized by using an improved Particle Swarm Optimization(PSO) algorithm; And parameter optimization could reduce the parameter uncertainty that exists for physically deriving model parameters. The different model resolution (200m*200m—1000m*1000m ) are proposed for modeling the Liujiang River basin flood with the Liuxihe model in this study. The best model's spatial resolution effect for flood simulation and forecasting is 200m*200m.And with the model's spatial resolution reduction, the model performance and accuracy also become worse and worse. When the model resolution is 1000m*1000m, the flood simulation and forecasting result is the worst, also the river channel divided based on this resolution is differs from the actual one. To keep the model with an acceptable performance, minimum model spatial resolution is needed. The suggested threshold model spatial resolution for modeling the Liujiang River basin flood is a 500m*500m grid cell, but the model spatial resolution with a 200m*200m grid cell is recommended in this study to keep the model at a best performance.
Computational Models for Calcium-Mediated Astrocyte Functions.
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Computational Models for Calcium-Mediated Astrocyte Functions
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes. PMID:29670517
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the ground for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of models. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. ?? 2008 Elsevier Ltd. All rights reserved.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.
Energy modeling. Volume 2: Inventory and details of state energy models
NASA Astrophysics Data System (ADS)
Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.
1981-05-01
An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
[A review on research of land surface water and heat fluxes].
Sun, Rui; Liu, Changming
2003-03-01
Many field experiments were done, and soil-vegetation-atmosphere transfer(SVAT) models were stablished to estimate land surface heat fluxes. In this paper, the processes of experimental research on land surface water and heat fluxes are reviewed, and three kinds of SVAT model(single layer model, two layer model and multi-layer model) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and energy balance equation, different models such as simplified model, single layer model, extra resistance model, crop water stress index model and two source resistance model are developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972 is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970more » produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF mode, such models will not always provide a reliable basis for transportation planning and investment decisions.« less
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Understanding and Predicting Urban Propagation Losses
2009-09-01
6. Extended Hata Model ..........................22 7. Modified Hata Model ..........................22 8. Walfisch – Ikegami Model...39 4. COST (Extended) Hata Model ...................40 5. Modified Hata Model ..........................41 6. Walfisch- Ikegami Model...47 1. Scenario One – Walfisch- Ikegami Model ........51 2. Scenario Two – Modified Hata Model ...........52 3. Scenario Three – Urban Hata
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
A framework for sharing and integrating remote sensing and GIS models based on Web service.
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e,more » MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.« less
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Comparison of dark energy models after Planck 2015
NASA Astrophysics Data System (ADS)
Xu, Yue-Yao; Zhang, Xin
2016-11-01
We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevalliear-Polarski-Linder model can still fit the current observations well, but from an economically feasible perspective, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
Weibull regression model is one of the most popular forms of parametric regression model that it provides estimate of baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, Weibull regression model is seldom used in medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with Weibull regression model, this article introduces some basic knowledge on Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinical relevant statistics such as hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variable. The eha package provides an alternative method to model Weibull regression model. The check.dist() function helps to assess goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of Weibull regression model after model development is interesting that it provides another way to report your findings. PMID:28149846
Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.
2013-12-01
Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu hat was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during summer maize growing season in 2013-2015 were monitored with eddy covariance system, and the applicability of two operational models (FAO-PM model and KP-PM model) based on the Penman-Monteith model were analyzed. Firstly, the key parameters in the two models were calibrated with the measured data in 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and KP-PM model was compared to the observed ET, respectively. Finally, the coefficients in the KP-PM model were further revised with the coefficients calculated according to the different growth stages, and the performance of the revised KP-PM model was also evaluated. These statistical parameters indicated that the calculated daily ET for 2015 by the FAO-PM model was closer to the observed ET than that by the KP-PM model. The daily ET calculated from the revised KP-PM model for daily ET was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so the calibration was necessary before using the models to predict the ET. The above results could provide some guidelines on predicting ET with the two models.
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is a movement of air on small scale in the atmosphere that caused by instabilities of pressure and temperature distribution. Turbulence model is integrated into flight mechanical model as an atmospheric disturbance. Common turbulence model used in flight mechanical model are Dryden and Von Karman model. In this minor research, only Dryden continuous turbulence model were made. Dryden continuous turbulence model has been implemented, it refers to the military specification MIL-HDBK-1797. The model was implemented into Matlab Simulink. The model will be integrated with flight mechanical model to observe response of the aircraft when it is flight through turbulence field. The turbulence model is characterized by multiplying the filter which are generated from power spectral density with band-limited Gaussian white noise input. In order to ensure that the model provide a good result, model verification has been done by comparing the implemented model with the similar model that is provided in aerospace blockset. The result shows that there are some difference for 2 linear velocities (vg and wg), and 3 angular rate (pg, qg and rg). The difference is instantly caused by different determination of turbulence scale length which is used in aerospace blockset. With the adjustment of turbulence length in the implemented model, both model result the similar output.
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.
Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R
2016-07-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
NASA Technical Reports Server (NTRS)
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.;
2016-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users.The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theurich, Gerhard; DeLuca, C.; Campbell, T.
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the Area Under the Receiver Operating Characteristic (AUROC), which showed the highest value, belonged to boosted regression trees (0.975) and the lowest value was recorded for generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, nevertheless, variability among the prediction of individual models was considerable. Therefore, to reduce uncertainty, creating more generalizable, more stable, and less sensitive models, ensemble forecasting approaches and in particular the EMmedian is recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model for the aim of reducing computation burden, is the key of corresponding researches. However, previous researches are generally based on a stand-alone surrogate model, and rarely make efforts to improve the approximation accuracy of the surrogate model to the simulation model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build ensemble surrogate (ES) model, and conducted a comparative research to select a better ES modeling pattern for the SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model is assembling RBFANN model, SVR model, and Kriging model using set pair weights according their performance, and the other is assembling several Kriging (the best surrogate modeling method of three) models built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed the residuals of the outputs between the best ES model and simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of simulation-optimization process and maintaining high computation accuracy simultaneously. Copyright © 2017 Elsevier B.V. All rights reserved.
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of its usages and users. In response to a growing need by the user community, NSSDC began to develop web-interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the Modelweb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 to 10,000 accesses for either system; in February 2002 13,872 accesses were recorded to the Modelsweb and 7092 accesses to the models archive.
NASA Astrophysics Data System (ADS)
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a certain arrangement of spatial and temporal dynamics consisting of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio‐economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user‐friendliness, and excellent user‐support. Forty‐one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user‐support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user‐friendly forms, increasing user‐support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Hedenstierna, Sofia; Halldin, Peter
2008-04-15
A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...
2018-04-06
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
Replicating Health Economic Models: Firm Foundations or a House of Cards?
Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee
2017-11-01
Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
NASA Astrophysics Data System (ADS)
Oursland, Mark David
This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use equation to solve the problem. The researcher recommends the following practice for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties about squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling. The uses of computer microworlds such as Interactive Physics in conjunction with cooperative groups are a viable method of modeling instruction.
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Models in Science Education: Applications of Models in Learning and Teaching Science
ERIC Educational Resources Information Center
Ornek, Funda
2008-01-01
In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct
approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons-summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a combination report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian-trapped radiation models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP. LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Analysis of terahertz dielectric properties of pork tissue
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Xie, Qiaoling; Sun, Ping
2017-10-01
Seeing that about 70% component of fresh biological tissues is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are Debye model, Double Debye model and Cole-Cole model. This work aims to determine a suitable model by comparing three models above with experimental data. These models are applied to fresh pork tissue. By means of least square method, the parameters of different models are fitted with the experimental data. Comparing different models on both dielectric function, the Cole-Cole model is verified the best to describe the experiments of pork tissue. The correction factor α of the Cole-Cole model is an important modification for biological tissues. So Cole-Cole model is supposed to be a priority selection to describe the dielectric properties for biological tissues in the terahertz range.
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model into mathematical model. Voice of customer data in QFD was collected using questionnaire and the questionnaire was developed based on Kano’s model. Then the operational research methodology was applied to build the objective function and constraints in the mathematical model. The relationship between voice of customer and engineering characteristics was modelled using linier regression model. Output of the mathematical model would be detail of engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. Result of this model is 62% .The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of shoe cabinet.
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-06-01
The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers
NASA Astrophysics Data System (ADS)
Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.
2017-12-01
Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Harrington
2004-10-25
The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through themore » Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada and Number of Waste Packages Hit by Igneous Intrusion''. This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.« less
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity for estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, in a Bayesian joint probability with error models are presented to investigate the seasonal dependency of prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and account for no seasonal variations. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies some distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posterior (MAP) estimation of a hierarchical error model. Three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the result of the hierarchical error model shows that most of model parameters are not seasonal variant except for error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu
2016-11-18
The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on para-llel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by Leuning model and Medlyn model, while Jarvis model was the last in the performance rating. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m -2 ·s -1 , the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R 2 ) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, Jarvis model performed slightly better than Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m -2 ·s -1 , NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R 2 was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. The path analysis was conducted to identify effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by vapour pressure saturation deficit for both potato and oil sunflower. The model evaluation suggested that the stomatal conductance models for oil sunflower are to be improved in further research.
Evaluation of chiller modeling approaches and their usability for fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models, and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first-principles and the third is empirical in nature. The first-principles models are themore » Gordon and Ng Universal Chiller model (2nd generation), and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors become obstructions to scientists wanting to couple, extend - or simply run - existing models. For scientists whose main focus may not be computer science these barriers become even larger and become significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Modeling Interface (BMI). It contains: Tools necessary for coupling models of disparate time and space scales (including grid mappers) Time-steppers that coordinate the sequencing of coupled models Exchange of data between BMI-enabled models Wrappers that automatically load BMI-enabled models into the PyMT framework Utilities that support open-source interfaces (UGRID, SGRID,CSDMS Standard Names, etc.) A collection of community-submitted models, written in a variety of programminglanguages, from a variety of process domains - but all usable from within the Python programming language A plug-in framework for adding additional BMI-enabled models to the framework In this presentation we intoduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will be first tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1) :161-177.
Downscaling GISS ModelE Boreal Summer Climate over Africa
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew
2015-01-01
The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June- September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, it eliminates the ModelE double ITCZ over the Atlantic and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advance SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to user’s input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side by side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions Model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners. And, for more advanced purposes, users will able to access and employ models of the BioModels Database as well. PMID:24006914
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. Actually, a possible middle ground between detailed and simplified models may be parsimonious models that represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on data available for correct model application. When there is inadequate data, it is mandatory to focus on a simple river water quality model rather than detailed ones. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH(4), and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input or parameters was done based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be over determined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
The number of risk prediction models has been increasingly developed, for estimating about breast cancer in individual women. However, those model performances are questionable. We therefore have conducted a study with the aim to systematically review previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance were extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. Gail and Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observe ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition the concordance statistic itself is insensitive to measure the improvement of discrimination. Therefore, the new method such as net reclassification index should be considered to evaluate the improvement of the performance of a new develop model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the referencemore » biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).« less
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the heavy precipitation processes will be presented.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle
2011-06-01
Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); IBD outpatient office for physician consultations (Model B); general outpatient office for nurse consultations (Model C); both, Model B and Model C (Model D); and IBD Unit (Model E) when the hospital has a Comprehensive Care Unit for IBD with telephone helpline, computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is the Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly- Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of the both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhoods, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models inmore » which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.« less
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which ismore » empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modelling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principal.
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA - type model. We produced one-stepahead forecasts from the fitted models and again provided comparisons of the model's performance.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can signicantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averagingmore » methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that converting precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.« less
Comparison of childbirth care models in public hospitals, Brazil.
Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos
2014-04-01
To compare collaborative and traditional childbirth care models. Cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers neither with the use of spinal or epidural analgesia. The results suggest that collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both watermore » tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.« less
Modeling approaches in avian conservation and the role of field biologists
Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.
2006-01-01
This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
NASA Astrophysics Data System (ADS)
Rossman, Nathan R.; Zlotnik, Vitaly A.
2013-09-01
Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39 %), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55 %), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7 %), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative concentration pathway (RCP). In this study, we analysed model uncertainty for the Regional Hydrological Models, Global Hydrological Models, Malaria models and Crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty are originating from the GCM and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes on low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for Malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use ensemble modeling approach for climate change impact studies throughout the whole modelling chain.
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling for transistor are also discussed. The proposed DC model is based on the Fager model since it compensates between the number of model's parameters and accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages. Also, the objective is to extend the improved DC model to account for soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used in order to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is modelled using multi-bias s-parameter measurements. The way that the complete model is performed is by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and local minimum search (multivariable Newton's method) for parasitic elements extraction. Finally, the results of DC modelling and multi-bias s-parameters modelling are presented, and three-device modelling recommendations are discussed.
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
ERIC Educational Resources Information Center
Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.
2011-01-01
The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, and existing knowledge-based scientific software design tool.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace-Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone.
A decision support model for investment on P2P lending platform
Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace—Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone. PMID:28877234
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem, Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models. variable-convergence models. or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zones (TAZs) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects) are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not consider the observed spatial effects. Across the models with spatial spillover effects, the dual-state models especially zero-inflated negative binomial model offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod
CSR Model Implementation from School Stakeholder Perspectives
ERIC Educational Resources Information Center
Herrmann, Suzannah
2006-01-01
Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10 3 to 10 6 K/s) at finalmore » temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).« less
[Bone remodeling and modeling/mini-modeling.
Hasegawa, Tomoka; Amizuka, Norio
Modeling, adapting structures to loading by changing bone size and shapes, often takes place in bone of the fetal and developmental stages, while bone remodeling-replacement of old bone into new bone-is predominant in the adult stage. Modeling can be divided into macro-modeling(macroscopic modeling)and mini-modeling(microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces will become active form of osteoblasts, and then, deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol(a vitamin D3 analog)and teriparatide(human PTH[1-34])could show mini-modeling based bone formation. Histologically, mature, active form of osteoblasts are localized on the new bone induced by mini-modeling, however, only a few cell layer of preosteoblasts are formed over the newly-formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, histological characteristics of bone remodeling and modeling including mini-modeling will be introduced.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Kharkov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Kharkov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Kharkov modeling: what are the capabilities and limitations of Kharkov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Kharkov modeling techniques? These questions and others will be addressed in this tutorial.
The cerebro-cerebellum: Could it be loci of forward models?
Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji
2016-03-01
It is widely accepted that the cerebellum acquires and maintain internal models for motor control. An internal model simulates mapping between a set of causes and effects. There are two candidates of cerebellar internal models, forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence that suggests the existence in the cerebro-cerebellum of a forward model for limb movement. We will also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Mechanical model development of rolling bearing-rotor systems: A review
NASA Astrophysics Data System (ADS)
Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng
2018-03-01
The rolling bearing rotor (RBR) system is the kernel of many rotating machines, which affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review on RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development of RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model are summarized. Then, the coupled modelling between bearing models and various rotor models including De Laval/Jeffcott rotor, rigid rotor, transfer matrix method (TMM) models and FE models are presented. Finally, the paper discusses the key challenges of previous works and provides new insights into understanding of RBR systems for their advanced future engineering applications.
NASA Astrophysics Data System (ADS)
Gouvea, Julia; Passmore, Cynthia
2017-03-01
The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Modeling of near-wall turbulence
NASA Technical Reports Server (NTRS)
Shih, T. H.; Mansour, N. N.
1990-01-01
An improved k-epsilon model and a second order closure model is presented for low Reynolds number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having correct asymptotic near wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second order closure model, the existing models are modified for the Reynolds stress equations to have proper near wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second order closure model, perform well in predicting the behavior of the near wall turbulence. Significant improvements over previous models are obtained.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes besides verifying internal validity comparison with other models (external validity) and ideally validation of its predictive properties. The existing uncertainty with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than without this additional information.
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating it with linear cues retrieved from single imagery. A cue integration between two datasets is facilitated by creating new topological features connecting between the initial model and image lines, with which new model hypotheses (variances to the initial model) are produced. We adopt Minimum Description Length (MDL) principle for competing the model candidates and selecting the optimal model by considering the balanced trade-off between the model closeness and the model complexity. Our preliminary results, combined with the Vaihingen data provided by ISPRS WGIII/4 demonstrate the image-driven modelling cues can compensate the limitations posed by LiDAR data in rooftop modelling.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
An Immuno-epidemiological Model of Paratuberculosis
NASA Astrophysics Data System (ADS)
Martcheva, M.
2011-11-01
The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the mo'del: a FORTRAN-Callable library, a stand-alone model, and a Web-based model.
The NASA Marshall engineering thermosphere model
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MFSC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MFSC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MFSC/J70 model are discussed.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is evaluated, consisting of: wind speed model, mechanical and electrical model of generator and tower oscillation model. Model of the whole system is linearized around of a nominal point. By using the linear model with uncertainties is synthesized a uncertain model. By using the uncertain model is developed a H∞ controller, which provide mode of stabilizing the rotor frequency and damping the tower oscillations. Finally is simulated work of nonlinear system and H∞ controller.
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) Students create models and use simulations ; and 2) Researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data is crucial to create effective model. Pittsburgh Science of Learning Center: Resources for modeling, authoring, experimentation. Repository of data and theory. Examples of advanced modeling efforts: SimStudent learns rule-based model. Help-seeking model: Tutors metacognition. Scooter uses machine learning detectors of student engagement.