Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2016-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and small…
Large-Scale Assessments and Educational Policies in Italy
ERIC Educational Resources Information Center
Damiani, Valeria
2016-01-01
Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…
Energy transfers in large-scale and small-scale dynamos
NASA Astrophysics Data System (ADS)
Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra
2015-11-01
We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in the small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024^3 grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves toward lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.
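The shell decomposition underlying such transfer diagnostics can be sketched in a few lines: each Fourier mode contributes its energy to the wavenumber shell containing |k|. This is a minimal illustration, not the authors' setup; the 2D field, shell edges, and normalization are illustrative choices.

```python
import numpy as np

def shell_energies(field, n_shells=8):
    """Bin the spectral energy of a real periodic 2D field into wavenumber
    shells -- the bookkeeping step behind shell-to-shell transfer studies."""
    fk = np.fft.fftn(field) / field.size            # normalized Fourier coefficients
    kx = np.fft.fftfreq(field.shape[0], d=1.0 / field.shape[0])
    ky = np.fft.fftfreq(field.shape[1], d=1.0 / field.shape[1])
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    edges = np.linspace(0.0, kmag.max(), n_shells + 1)
    # assign each mode to a shell; clip keeps the maximum-|k| mode in the last shell
    idx = np.clip(np.digitize(kmag.ravel(), edges) - 1, 0, n_shells - 1)
    energy = 0.5 * np.abs(fk.ravel()) ** 2          # spectral energy per mode
    return np.bincount(idx, weights=energy, minlength=n_shells)

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
E = shell_energies(u)
# By Parseval's theorem the shells partition the total energy 0.5 * <u^2>
```

In actual dynamo diagnostics this binning is applied to the velocity and magnetic fields separately, and the transfer function couples triads of shells; the sketch shows only the shell bookkeeping.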
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-17
... Integrated Circuit Semiconductor Chips and Products Containing the Same; Notice of a Commission Determination... certain large scale integrated circuit semiconductor chips and products containing same by reason of... existence of a domestic industry. The Commission's notice of investigation named several respondents...
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... Integrated Circuit Semiconductor Chips and Products Containing Same; Notice of Investigation AGENCY: U.S... of certain large scale integrated circuit semiconductor chips and products containing same by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...
ERIC Educational Resources Information Center
Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen
2018-01-01
The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…
Penders, Bart; Vos, Rein; Horstman, Klasien
2009-11-01
Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.
ERIC Educational Resources Information Center
Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell
2017-01-01
Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…
ERIC Educational Resources Information Center
Tsiouris, John A.; Kim, Soh-Yule; Brown, W. Ted; Pettinger, Jill; Cohen, Ira L.
2013-01-01
The use of psychotropics by categories and the reason for their prescription was investigated in a large scale study of 4,069 adults with ID, including those with autism spectrum disorder, in New York State. Similar to other studies it was found that 58 % (2,361/4,069) received one or more psychotropics. Six percent received typical, 6 % received…
Skyscape Archaeology: an emerging interdiscipline for archaeoastronomers and archaeologists
NASA Astrophysics Data System (ADS)
Henty, Liz
2016-02-01
For historical reasons archaeoastronomy and archaeology differ in their approach to prehistoric monuments, and this has created a divide between the disciplines, which adopt seemingly incompatible methodologies. The reasons behind the impasse will be explored to show how these different approaches gave rise to their respective methods. Archaeological investigations tend to concentrate on single-site analysis, whereas archaeoastronomical surveys tend to be data driven, based on the examination of a large number of similar sites. A comparison will be made between traditional archaeoastronomical data gathering and an emerging methodology which looks at sites on a small scale and combines archaeology and astronomy. Silva's recent research in Portugal and this author's survey in Scotland have explored this methodology and termed it skyscape archaeology. This paper argues that this type of phenomenological skyscape archaeology offers an alternative to large-scale statistical studies which analyse astronomical data obtained from a large number of superficially similar archaeological sites.
ERIC Educational Resources Information Center
Ding, Lin; Wei, Xin; Liu, Xiufeng
2016-01-01
This study investigates three aspects--university major, year, and institution type--in relation to student scientific reasoning. Students from three majors (science, engineering, and education), four year levels (years 1 through 4), and two tiers of Chinese universities (tiers 1 and 2) participated in the study. A large-scale written assessment…
Male group size, female distribution and changes in sexual segregation by Roosevelt elk
Peterson, Leah M.
2017-01-01
Sexual segregation, or the differential use of space by males and females, is hypothesized to be a function of body size dimorphism. Sexual segregation can also manifest at small (social segregation) and large (habitat segregation) spatial scales for a variety of reasons. Furthermore, the connection between small- and large-scale sexual segregation has rarely been addressed. We studied a population of Roosevelt elk (Cervus elaphus roosevelti) across 21 years in north coastal California, USA, to assess small- and large-scale sexual segregation in winter. We hypothesized that male group size would associate with small-scale segregation and that a change in female distribution would associate with large-scale segregation. Variation in forage biomass might also be coupled to small- and large-scale sexual segregation. Our findings were consistent with male group size associating with small-scale segregation and a change in female distribution associating with large-scale segregation. Females appeared to avoid large groups composed of socially dominant males. Males appeared to occupy a habitat vacated by females because of a wider forage niche, greater tolerance to lethal risks, and, perhaps, to reduce encounters with other elk. Sexual segregation at both spatial scales was a poor predictor of forage biomass. Size dimorphism was coupled to change in sexual segregation at small and large spatial scales. Small-scale segregation can seemingly manifest when all forage habitat is occupied by females, and large-scale segregation might happen when some forage habitat is not occupied by females. PMID:29121076
ERIC Educational Resources Information Center
Hodkowski, Nicola M.; Gardner, Amber; Jorgensen, Cody; Hornbein, Peter; Johnson, Heather L.; Tzur, Ron
2016-01-01
In this paper we examine the application of Tzur's (2007) fine-grained assessment to the design of an assessment measure of a particular multiplicative scheme so that non-interview, good enough data can be obtained (on a large scale) to infer into elementary students' reasoning. We outline three design principles that surfaced through our recent…
Schulz, S; Romacker, M; Hahn, U
1998-01-01
The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics.
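The SEP idea, encoding part-whole relations so that a standard subsumption classifier answers transitive part-of queries, can be illustrated with a minimal sketch. Plain Python reachability stands in for a description-logic classifier here; the node-naming scheme (`S-`, `E-`, `P-` prefixes) is illustrative, not the paper's concrete syntax.

```python
# Each concept C gets an SEP triplet: a structure node "S-C" with two
# subclasses, the entity node "E-C" (C itself) and the part node "P-C"
# (anything that is a part of C).  Declaring X a part of C becomes plain
# subsumption: "S-X" is-a "P-C".  Transitive part-whole queries then
# reduce to ordinary is-a reachability, which any classifier can answer.
subsumes = {}  # child -> set of direct parents

def is_a(child, parent):
    subsumes.setdefault(child, set()).add(parent)

def add_concept(c):
    is_a(f"E-{c}", f"S-{c}")
    is_a(f"P-{c}", f"S-{c}")

def part_of(part, whole):
    is_a(f"S-{part}", f"P-{whole}")

def subsumed_by(node, target):
    """Is `target` reachable from `node` along is-a links?"""
    stack, seen = [node], set()
    while stack:
        n = stack.pop()
        if n == target:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(subsumes.get(n, ()))
    return False

for c in ["arm", "hand", "finger"]:
    add_concept(c)
part_of("hand", "arm")
part_of("finger", "hand")
# transitivity falls out of pure is-a reasoning:
assert subsumed_by("S-finger", "P-arm")
```

The design choice is that no special part-whole inference rule is needed: the triplet layout makes the classifier's existing subsumption reasoning do the transitive work.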
The Quality of Textbooks: A Basis for European Collaboration.
ERIC Educational Resources Information Center
Hooghoff, Hans
The enhancement of the European dimension in the national curriculum is a large scale educational innovation that affects many European countries. This report puts forward the proposition that broad scale educational innovation has more success if the aims and objectives find their way into textbooks. The reasons why a European dimension in…
Map Scale, Proportion, and Google[TM] Earth
ERIC Educational Resources Information Center
Roberge, Martin C.; Cooper, Linda L.
2010-01-01
Aerial imagery has a great capacity to engage and maintain student interest while providing a contextual setting to strengthen their ability to reason proportionally. Free, on-demand, high-resolution, large-scale aerial photography provides both a bird's eye view of the world and a new perspective on one's own community. This article presents an…
III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.
Davis-Kean, Pamela E; Jager, Justin
2017-06-01
For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.
Bifurcations in models of a society of reasonable contrarians and conformists
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Rechtman, Raúl
2015-10-01
We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
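The mean-field scenario described above can be reproduced with a toy iterated map. The abstract does not give the exact update rule, so the rule below is a hypothetical one with the stated ingredients: conformists follow the mean opinion, contrarians oppose it but revert to conformity past an "overwhelming majority" threshold; the coupling strength and threshold values are illustrative.

```python
import numpy as np

def step(m, f_conf, coupling=2.0, majority=0.8):
    """One mean-field update of the average opinion m in [-1, 1].

    Conformists push m toward the current majority; contrarians push the
    other way, unless |m| already exceeds the majority threshold, in which
    case they conform too.  (Toy rule; the paper's exact map may differ.)
    """
    follow = np.tanh(coupling * m)
    oppose = follow if abs(m) > majority else -follow
    return f_conf * follow + (1.0 - f_conf) * oppose

def iterate(f_conf, m0=0.1, n=300):
    m, traj = m0, []
    for _ in range(n):
        m = step(m, f_conf)
        traj.append(m)
    return traj

polar = iterate(f_conf=0.9)  # conformist majority: settles at a nonzero m (polarization)
osc = iterate(f_conf=0.1)    # contrarian majority: settles into a period-2 oscillation
```

With 90% conformists the map has a stable nonzero fixed point (the pitchfork side of the story); with 90% contrarians the sign of m flips every step, the coherent-oscillation side. The full bifurcation cascade in the paper requires its actual parameterization.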
Early executive function predicts reasoning development.
Richland, Lindsey E; Burchinal, Margaret R
2013-01-01
Analogical reasoning is a core cognitive skill that distinguishes humans from all other species and contributes to general fluid intelligence, creativity, and adaptive learning capacities. Yet its origins are not well understood. In the study reported here, we analyzed large-scale longitudinal data from the Study of Early Child Care and Youth Development to test predictors of growth in analogical-reasoning skill from third grade to adolescence. Our results suggest an integrative resolution to the theoretical debate regarding contributory factors arising from smaller-scale, cross-sectional experiments on analogy development. Children with greater executive-function skills (both composite and inhibitory control) and vocabulary knowledge in early elementary school displayed higher scores on a verbal analogies task at age 15 years, even after adjusting for key covariates. We posit that knowledge is a prerequisite to analogy performance, but strong executive-functioning resources during early childhood are related to long-term gains in fundamental reasoning skills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, Alan J.
2016-04-29
While the stated reason for asking this question is “to understand better our ability to warn policy makers in the unlikely event of an unanticipated SRM geoengineering deployment or large-scale field experiment”, my colleagues and I felt that motives would be important context because the scale of any meaningful SRM deployment would be so large that covert deployment seems impossible. However, several motives emerged that suggest a less-than-global effort might be important.
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems will be managed and avoided, a process long established in nature.
A new energy transfer model for turbulent free shear flow
NASA Technical Reports Server (NTRS)
Liou, William W.-W.
1992-01-01
A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer, and reasonably good agreement is achieved. Finally, it is shown using the Liapunov function method that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.
ERIC Educational Resources Information Center
Smith, Carol L.; Wiser, Marianne; Anderson, Charles W.; Krajcik, Joseph
2006-01-01
The purpose of this article is to suggest ways of using research on children's reasoning and learning to elaborate on existing national standards and to improve large-scale and classroom assessments. The authors suggest that "learning progressions"--descriptions of successively more sophisticated ways of reasoning within a content domain based on…
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
KA-SB: from data integration to large scale reasoning
Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F
2009-01-01
Background The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires having access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for final users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information can then be further analyzed (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily: it shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main-memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates the UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402
Are Psychotic Experiences Related to Poorer Reflective Reasoning?
Mækelæ, Martin J.; Moritz, Steffen; Pfuhl, Gerit
2018-01-01
Background: Cognitive biases play an important role in the formation and maintenance of delusions. These biases are indicators of a weak reflective mind, or reduced engagement in reflective and deliberate reasoning. In three experiments, we tested whether a bias to accept nonsense statements as profound, treat metaphorical statements as literal, and suppress intuitive responses is related to psychotic-like experiences. Methods: We tested deliberate reasoning and psychotic-like experiences in the general population and in patients with a former psychotic episode. Deliberate reasoning was assessed with the bullshit receptivity scale, the ontological confabulation scale and the cognitive reflection test (CRT). We also measured algorithmic performance with the Berlin numeracy test and the wordsum test. Psychotic-like experiences were measured with the Community Assessment of Psychic Experience (CAPE-42) scale. Results: Psychotic-like experiences were positively correlated with a larger receptivity toward bullshit, more ontological confabulations, and a lower score on the CRT, but not with algorithmic task performance. In the patient group, higher psychotic-like experiences significantly correlated with higher bullshit receptivity. Conclusion: Reduced deliberate reasoning may contribute to the formation of delusions and be a general thinking bias largely independent of a person's general intelligence. Acceptance of bullshit may become easier the more positive symptoms a patient has, contributing to the maintenance of delusions. PMID:29483886
The Classroom Sandbox: A Physical Model for Scientific Inquiry
ERIC Educational Resources Information Center
Feldman, Allan; Cooke, Michele L.; Ellsworth, Mary S.
2010-01-01
For scientists, the sandbox serves as an analog for faulting in Earth's crust. Here, the large, slow processes within the crust can be scaled to the size of a table, and time scales are directly observable. This makes it a useful tool for demonstrating the role of inquiry in science. For this reason, the sandbox is also helpful for learning…
Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Wang, John T.; Lotts, Christine G.; Sleight, David W.
1999-01-01
This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with a 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions show reasonably good correlation with the test result.
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
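The building block of such an approach, a spatial autocorrelation function, is cheap to compute via the Wiener-Khinchin theorem (the ACF is the inverse transform of the power spectrum). A minimal 1D sketch follows; the random field, grid size, and smoothing width are illustrative, not the authors' cloud fields.

```python
import numpy as np

def autocorrelation(field):
    """Circular spatial autocorrelation of a 1D field via FFT
    (Wiener-Khinchin), normalized so that lag 0 has correlation 1."""
    f = field - field.mean()
    power = np.abs(np.fft.fft(f)) ** 2     # power spectrum of the fluctuations
    acf = np.fft.ifft(power).real / f.size # inverse transform -> covariance per lag
    return acf / acf[0]

# A smoothed random field decorrelates over longer distances than white noise,
# which is the kind of structure the subgrid parameterization needs to capture.
rng = np.random.default_rng(1)
noise = rng.standard_normal(512)
smooth = np.convolve(noise, np.ones(16) / 16, mode="same")
r_noise, r_smooth = autocorrelation(noise), autocorrelation(smooth)
```

The FFT route costs O(N log N) versus O(N^2) for direct lag sums, which is part of why autocorrelation-based closures can be orders of magnitude cheaper than Monte Carlo 3D radiative transfer.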
NASA Astrophysics Data System (ADS)
Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.
2015-12-01
Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. 
Where model skill in reproducing these patterns is high, it can be inferred that extremes are being simulated for plausible physical reasons, boosting confidence in future projections of temperature extremes. Conversely, where model skill is identified to be lower, caution should be exercised in interpreting future projections.
Energy spectrum of tearing mode turbulence in sheared background field
NASA Astrophysics Data System (ADS)
Hu, Di; Bhattacharjee, Amitava; Huang, Yi-Min
2018-06-01
The energy spectrum of tearing mode turbulence in a sheared background magnetic field is studied in this work. We consider the scenario where the nonlinear interaction of overlapping large-scale modes excites a broad spectrum of small-scale modes, generating tearing mode turbulence. The spectrum of such turbulence is of interest since it is relevant to the small-scale back-reaction on the large-scale field. The turbulence we discuss here differs from traditional MHD turbulence mainly in two aspects. One is the existence of many linearly stable small-scale modes which cause an effective damping during the energy cascade. The other is the scale-independent anisotropy induced by the large-scale modes tilting the sheared background field, as opposed to the scale-dependent anisotropy frequently encountered in traditional critically balanced turbulence theories. Due to these two differences, the energy spectrum deviates from a simple power law and takes the form of a power law multiplied by an exponential falloff. Numerical simulations are carried out using visco-resistive MHD equations to verify our theoretical predictions, and a reasonable agreement is found between the numerical results and our model.
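The spectral shape described here, a power law multiplied by an exponential falloff, can be written generically as below; the exponent and cutoff wavenumber are placeholders, since the abstract does not give their values:

```latex
E(k) \;\propto\; k^{-\alpha}\, e^{-k/k_c}
```

Here \(\alpha\) is the inertial-range power-law index and \(k_c\) the wavenumber at which the damping by linearly stable small-scale modes makes the exponential cutoff take over.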
Defaults, context, and knowledge: alternatives for OWL-indexed knowledge bases.
Rector, A
2004-01-01
The new Web Ontology Language (OWL) and its Description Logic compatible sublanguage (OWL-DL) explicitly exclude defaults and exceptions, as do all logic based formalisms for ontologies. However, many biomedical applications appear to require default reasoning, at least if they are to be engineered in a maintainable way. Default reasoning has always been one of the great strengths of Frame systems such as Protégé. Resolving this conflict requires analysis of the different uses for defaults and exceptions. In some cases, alternatives can be provided within the OWL framework; in others, it appears that hybrid reasoning about a knowledge base of contingent facts built around the core ontology is necessary. Trade-offs include both human factors and the scaling of computational performance. The analysis presented here is based on the OpenGALEN experience with large scale ontologies using a formalism, GRAIL, which explicitly incorporates constructs for hybrid reasoning, numerous experiments with OWL, and initial work on combining OWL and Protégé.
A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet
ERIC Educational Resources Information Center
Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.
2006-01-01
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…
NASA Astrophysics Data System (ADS)
Thacker, Beth
2017-01-01
Large-scale assessment data from Texas Tech University yielded evidence that most students taught traditionally in large lecture classes with online homework and predominantly multiple choice question exams, when asked to answer free-response (FR) questions, did not support their answers with logical arguments grounded in physics concepts. In addition to a lack of conceptual understanding, incorrect and partially correct answers lacked evidence of the ability to apply even lower level reasoning skills in order to solve a problem. Correct answers, however, did show evidence of at least lower level thinking skills as coded using a rubric based on Bloom's taxonomy. With the introduction of evidence-based instruction into the labs and recitations of the large courses and in a small, completely laboratory-based, hands-on course, the percentage of correct answers with correct explanations increased. The FR format, unlike other assessment formats, allowed assessment of both conceptual understanding and the application of thinking skills, clearly pointing out weaknesses not revealed by other assessment instruments, and providing data on skills beyond conceptual understanding for course and program assessment. Supported by National Institutes of Health (NIH) Challenge grant #1RC1GM090897-01.
Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael
2003-01-01
We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.
Is the negative IOD during 2016 the reason for monsoon failure over southwest peninsular India?
NASA Astrophysics Data System (ADS)
Sreelekha, P. N.; Babu, C. A.
2018-01-01
The study investigates the mechanism responsible for the deficit rainfall over southwest peninsular India during the 2016 monsoon season. Analysis shows that the large-scale variation in circulation pattern due to the strong, negative Indian Ocean Dipole phenomenon was the reason for the deficit rainfall. Significant reduction in the number of northward-propagating monsoon-organized convections together with fast propagation over the southwest peninsular India resulted in reduction in rainfall. On the other hand, their persistence for longer time over the central part of India resulted in normal rainfall. It was found that the strong convection over the eastern equatorial Indian Ocean creates strong convergence over that region. The combined effect of the sinking due to the well-developed Walker circulation originated over the eastern equatorial Indian Ocean and the descending limb of the monsoon Hadley cell caused strong subsidence over the western equatorial Indian Ocean. The tail of this large-scale sinking extended up to the southern parts of India. This hinders formation of monsoon-organized convections leading to a large deficiency of rainfall during monsoon 2016 over the southwest peninsular India.
The salience network causally influences default mode network activity during moral reasoning
Wilson, Stephen M.; D’Esposito, Mark; Kayser, Andrew S.; Grossman, Scott N.; Poorzand, Pardis; Seeley, William W.; Miller, Bruce L.; Rankin, Katherine P.
2013-01-01
Large-scale brain networks are integral to the coordination of human behaviour, and their anatomy provides insights into the clinical presentation and progression of neurodegenerative illnesses such as Alzheimer’s disease, which targets the default mode network, and behavioural variant frontotemporal dementia, which targets a more anterior salience network. Although the default mode network is recruited when healthy subjects deliberate about ‘personal’ moral dilemmas, patients with Alzheimer’s disease give normal responses to these dilemmas whereas patients with behavioural variant frontotemporal dementia give abnormal responses to these dilemmas. We hypothesized that this apparent discrepancy between activation- and patient-based studies of moral reasoning might reflect a modulatory role for the salience network in regulating default mode network activation. Using functional magnetic resonance imaging to characterize network activity of patients with behavioural variant frontotemporal dementia and healthy control subjects, we present four converging lines of evidence supporting a causal influence from the salience network to the default mode network during moral reasoning. First, as previously reported, the default mode network is recruited when healthy subjects deliberate about ‘personal’ moral dilemmas, but patients with behavioural variant frontotemporal dementia producing atrophy in the salience network give abnormally utilitarian responses to these dilemmas. Second, patients with behavioural variant frontotemporal dementia have reduced recruitment of the default mode network compared with healthy control subjects when deliberating about these dilemmas. Third, a Granger causality analysis of functional neuroimaging data from healthy control subjects demonstrates directed functional connectivity from nodes of the salience network to nodes of the default mode network during moral reasoning. 
Fourth, this Granger causal influence is diminished in patients with behavioural variant frontotemporal dementia. These findings are consistent with a broader model in which the salience network modulates the activity of other large-scale networks, and suggest a revision to a previously proposed ‘dual-process’ account of moral reasoning. These findings also characterize network interactions underlying abnormal moral reasoning in frontotemporal dementia, which may serve as a model for the aberrant judgement and interpersonal behaviour observed in this disease and in other disorders of social function. More broadly, these findings link recent work on the dynamic interrelationships between large-scale brain networks to observable impairments in dementia syndromes, which may shed light on how diseases that target one network also alter the function of interrelated networks. PMID:23576128
Daza, Juan D.; Köhler, Jörn; Vences, Miguel; Glaw, Frank
2017-01-01
The gecko genus Geckolepis, endemic to Madagascar and the Comoro archipelago, is taxonomically challenging. One reason is its members' ability to autotomize a large portion of their scales when grasped or touched, most likely to escape predation. Based on an integrative taxonomic approach including external morphology, morphometrics, genetics, pholidosis, and osteology, we here describe the first new species from this genus in 75 years: Geckolepis megalepis sp. nov. from the limestone karst of Ankarana in northern Madagascar. The new species has the largest known body scales of any gecko (both relatively and absolutely), which come off with exceptional ease. We provide a detailed description of the skeleton of the genus Geckolepis based on micro-Computed Tomography (micro-CT) analysis of the new species, the holotype of G. maculata, the recently resurrected G. humbloti, and a specimen belonging to an operational taxonomic unit (OTU) recently suggested to represent G. maculata. Geckolepis is characterized by highly mineralized, imbricated scales, paired frontals, and unfused subolfactory processes of the frontals, among other features. We identify diagnostic characters in the osteology of these geckos that help define our new species and show that the OTU assigned to G. maculata is probably not conspecific with it, leaving the taxonomic identity of this species unclear. We discuss possible reasons for the extremely enlarged scales of G. megalepis in the context of an anti-predator defence mechanism, and the future of Geckolepis taxonomy. PMID:28194313
NASA Technical Reports Server (NTRS)
Stevens, Joseph E.
1955-01-01
Free-flight tests of two rocket-propelled 1/20-scale models of the Bell MX-776 missile have been conducted to obtain measurements of the aileron deflection required to counteract the induced rolling moments caused by combined angles of attack and sideslip and thus to determine whether the ailerons provided were capable of controlling the model at the attitudes produced by the test conditions. Inability to obtain reasonably steady-state conditions and superimposed high-frequency oscillations in the data precluded any detailed analysis of the results obtained from the tests. For these reasons, the data presented are limited largely to qualitative results.
Using AberOWL for fast and scalable reasoning over BioPortal ontologies.
Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert
2016-08-08
Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.
NASA Astrophysics Data System (ADS)
Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.
2009-10-01
One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed, with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regimes' behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread has two causes: (1) we are dealing with different models, and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
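The spatial autocorrelation function at the heart of this approach can be computed efficiently via the Wiener-Khinchin theorem. The sketch below is a generic illustration, not the authors' code; the Gaussian-smoothed random field is a hypothetical stand-in for a subgrid cloud field:

```python
import numpy as np

def spatial_autocorrelation(field):
    """Normalized spatial autocorrelation of a 2D field, computed as the
    inverse FFT of its power spectrum (Wiener-Khinchin theorem)."""
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acf = np.fft.ifft2(power).real
    return acf / acf[0, 0]  # normalize so the zero-lag value is 1

# Hypothetical cloud-like field: Gaussian-smoothed random noise
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, noise)
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, smooth)

acf = spatial_autocorrelation(smooth)
print(acf[0, 0])  # 1.0 by construction
```

The decay scale of the resulting autocorrelation summarizes exactly the subgrid spatial structure that a PDF-only parameterization discards.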
Buck, D; Jacoby, A; Baker, G A; Ley, H; Steen, N
1999-12-01
To examine between-country differences in health-related quality of life (HRQOL) of adults with epilepsy across a large number of European countries. Self-completion postal questionnaire sent to large sample of adults with epilepsy, recruited from epilepsy support groups or epilepsy outpatient clinics. The questionnaire was developed in English and translated. Back-translations from each language were checked for accuracy. The questionnaire sought information on clinical and socio-demographic details, and contained a number of previously validated scales of psychosocial well-being (the SF-36, the perceived impact of epilepsy scale, and a feelings of stigma scale). Controlling for socio-demographic and clinical characteristics, significant between-country differences were found in scores on the perceived impact of epilepsy scale, on seven of the eight SF-36 domains, and on the feelings of stigma scale. Respondents in Spain and the Netherlands fared consistently better, whilst those in France fared poorest, compared to those in other countries in terms of the various HRQOL measures used. Several possible reasons for the cross-cultural differences in HRQOL are proposed. Clearly, there is no single explanation and there may also be reasons which we have overlooked. This study emphasises the need for further comprehensive research in order that the position of people with epilepsy in different countries be more thoroughly understood in the social context.
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.
Biotechnology: herbicide-resistant crops
USDA-ARS?s Scientific Manuscript database
Transgenic, herbicide-resistant (HR) crops are planted on about 80% of the land covered by transgenic crops. More than 90% of HR crops are glyphosate-resistant (GR) crops, the others being resistant to glufosinate. The wide-scale adoption of HR crops, largely for economic reasons, has been the mos...
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the gaming-enhancement hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
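The Bayesian comparison described above can be illustrated with the BIC approximation to the Bayes factor. This is a generic sketch, not the study's analysis; the simulated predictor (gaming) and outcome (reasoning) are hypothetical, with the null hypothesis true by construction:

```python
import numpy as np

def bic_linear(y, X):
    """BIC of an ordinary least-squares fit of y on X (X includes the intercept)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(42)
n = 1847  # sample size reported in the abstract
gaming = rng.normal(size=n)     # hypothetical gaming-experience scores
reasoning = rng.normal(size=n)  # reasoning scores with no true gaming effect

X_null = np.ones((n, 1))                       # intercept-only model (H0)
X_alt = np.column_stack([np.ones(n), gaming])  # model with a gaming effect (H1)

# exp(deltaBIC / 2) approximates the Bayes factor in favour of the null
bf01 = np.exp((bic_linear(reasoning, X_alt) - bic_linear(reasoning, X_null)) / 2)
print(f"BF01 = {bf01:.2f}")  # values > 1 favour the null hypothesis
```

Unlike a p-value, the Bayes factor can quantify evidence *for* the null, which is what allowed the authors to grade their results from equivocal to very strong.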
Efficient design of clinical trials and epidemiological research: is it possible?
Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail
2017-08-01
Randomized clinical trials and large-scale, cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, the increasing concern is that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on reasons for the increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including those models that focus on simplicity and leveraging of digital resources. We present some examples of innovative approaches by which some investigators have successfully conducted large-scale, clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.
Study on reasonable curtailment rate of large scale renewable energy
NASA Astrophysics Data System (ADS)
Li, Nan; Yuan, Bo; Zhang, Fuqiang
2018-02-01
The energy curtailment rate of renewable generation is an important indicator of renewable energy consumption; it is also an important parameter in determining the arrangement of other power sources and grids at the planning stage. In general, consuming the spike power of renewable energy, which accounts for only a small proportion of output, requires dispatching a large number of peaking resources, which reduces the safety and stability of the system. At the planning stage, if a certain amount of renewable energy is allowed to be curtailed, the overall peaking demand of the system is reduced and construction of peak power supply can be deferred, avoiding the expensive cost of marginal absorption. In this paper, we introduce a reasonable energy curtailment rate into power system planning and, using the GESP power planning software, conclude that the reasonable energy curtailment rate for China's regional grids in 2020 is 3%-10%.
Calibration of the Test of Relational Reasoning.
Dumas, Denis; Alexander, Patricia A
2016-10-01
Relational reasoning, or the ability to discern meaningful patterns within a stream of information, is a critical cognitive ability associated with academic and professional success. Importantly, relational reasoning has been described as taking multiple forms, depending on the type of higher order relations being drawn between and among concepts. However, the reliable and valid measurement of such a multidimensional construct of relational reasoning has been elusive. The Test of Relational Reasoning (TORR) was designed to tap 4 forms of relational reasoning (i.e., analogy, anomaly, antinomy, and antithesis). In this investigation, the TORR was calibrated and scored using multidimensional item response theory in a large, representative undergraduate sample. The bifactor model was identified as the best-fitting model, and used to estimate item parameters and construct reliability. To improve the usefulness of the TORR to educators, scaled scores were also calculated and presented. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Scaling A Moment-Rate Function For Small To Large Magnitude Events
NASA Astrophysics Data System (ADS)
Archuleta, Ralph; Ji, Chen
2017-04-01
Since the 1980s seismologists have recognized that peak ground acceleration (PGA) and peak ground velocity (PGV) scale differently with magnitude for large and moderate earthquakes. In a recent paper (Archuleta and Ji, GRL 2016) we introduced an apparent moment-rate function (aMRF) that accurately predicts the scaling with magnitude of PGA, PGV, PWA (Wood-Anderson displacement) and the ratio PGA/2πPGV (dominant frequency) for earthquakes 3.3 ≤ M ≤ 5.3. This apparent moment-rate function is controlled by two temporal parameters, tp and td, which are related to the time for the moment-rate function to reach its peak amplitude and the total duration of the earthquake, respectively. These two temporal parameters lead to a Fourier amplitude spectrum (FAS) of displacement that has two corners, between which the spectral amplitudes decay as 1/f, where f denotes frequency. At higher or lower frequencies, the FAS of the aMRF looks like a single-corner Aki-Brune omega-squared spectrum. However, in the presence of attenuation the higher corner is almost certainly masked. Attempting to correct the spectrum to an Aki-Brune omega-squared spectrum will produce an "apparent" corner frequency that falls between the two corner frequencies of the aMRF. We argue that these two corners are the reason seismologists deduce a stress drop (e.g., Allmann and Shearer, JGR 2009) that is generally much smaller than the stress parameter used to produce ground motions from stochastic simulations (e.g., Boore, 2003 Pageoph.). The presence of two corners for smaller-magnitude earthquakes leads to several questions. Can deconvolution be successfully used to determine scaling from small to large earthquakes? Equivalently, will large earthquakes have a double corner? If large earthquakes are the sum of many smaller-magnitude earthquakes, what should the displacement FAS look like for a large-magnitude earthquake? Can a combination of such a double-corner spectrum and random vibration theory explain the PGA and PGV scaling relationships for larger magnitudes?
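The double-corner displacement spectrum implied by the aMRF can be sketched numerically. The functional form and corner frequencies below are illustrative assumptions, not the parameters of Archuleta and Ji (2016): the spectrum is flat below the first corner, decays as 1/f between the corners, and rolls off as 1/f^2 above the second, recovering the omega-squared high-frequency behavior.

```python
import numpy as np

def double_corner_fas(f, omega0, f1, f2):
    """Illustrative double-corner displacement spectrum: flat below f1,
    ~1/f between f1 and f2, ~1/f^2 above f2."""
    return omega0 / (np.sqrt(1 + (f / f1) ** 2) * np.sqrt(1 + (f / f2) ** 2))

def loglog_slope(fa, fb, f1=0.1, f2=10.0):
    """Spectral decay rate between frequencies fa and fb on a log-log plot."""
    ya, yb = double_corner_fas(np.array([fa, fb]), 1.0, f1, f2)
    return (np.log(yb) - np.log(ya)) / (np.log(fb) - np.log(fa))

print(round(loglog_slope(0.5, 2.0), 2))       # ~ -1 between the corners
print(round(loglog_slope(100.0, 1000.0), 2))  # ~ -2 above the upper corner
```

Multiplying a frequency-dependent attenuation factor onto such a spectrum masks the upper corner, producing the single "apparent" corner discussed above.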
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
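For reference, the static Smagorinsky closure benchmarked above models the subgrid-scale stresses through an eddy viscosity; the standard form (with Δ the filter width and C_s the constant set to 0.1 in these runs) is:

```latex
\nu_t = (C_s \Delta)^2\,|\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j}
  + \frac{\partial \bar{u}_j}{\partial x_i}\right)
```

The dynamic model instead computes the coefficient locally from the resolved field rather than fixing it, which is one reason the two closures respond differently as the fluctuations become anisotropic.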
Secondary Students' Stable and Unstable Optics Conceptions Using Contextualized Questions
ERIC Educational Resources Information Center
Chu, Hye-Eun; Treagust, David F.
2014-01-01
This study focuses on elucidating and explaining reasons for the stability of and interrelationships between students' conceptions about "Light Propagation" and "Visibility of Objects" using contextualized questions across 3 years of secondary schooling from Years 7 to 9. In a large-scale quantitative study involving 1,233…
Astronomy Demonstrations and Models.
ERIC Educational Resources Information Center
Eckroth, Charles A.
Demonstrations in astronomy classes seem to be more necessary than in physics classes for three reasons. First, many of the events are very large scale and impossibly remote from human senses. Secondly, while physics courses use discussions of one- and two-dimensional motion, three-dimensional motion is the normal situation in astronomy; thus,…
Achievement against the Odds: The Female Secondary Headteachers in England and Wales.
ERIC Educational Resources Information Center
Coleman, Marianne
2001-01-01
Examines reasons for the paucity of female secondary headteachers, employing a large-scale survey of all English and Welsh secondary principals that achieved a 70 percent response rate. Considers demographic characteristics, work constraints associated with domestic commitments, and overt and covert discrimination factors. High discrimination…
ERIC Educational Resources Information Center
Friedler, Y.; And Others
This study identified students' conceptual difficulties in understanding concepts and processes associated with cell water relationships (osmosis), determined possible reasons for these difficulties, and pilot-tested instruments and research strategies for a large scale comprehensive study. Research strategies used included content analysis of…
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids; they show the safety of hybrid rockets to audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
2010-03-01
are turning out to be counterproductive because they are culturally anathema (Wollman, 2009). A consideration of psychological tenets by Sigmund Freud suggests a principal aspect of dysfunction in collaboration. He reasoned that an Ego governed by social convention and a Superego governed by
The Factors and Impacts of Large-Scale Digital Content Accreditations
ERIC Educational Resources Information Center
Kuo, Tony C. T.; Chen, Hong-Ren; Hwang, Wu-Yuin; Chen, Nian-Shing
2015-01-01
E-learning is an important and widespread contemporary trend in education. Because its success depends on the quality of digital materials, the mechanism by which such materials are accredited has received considerable attention and has influenced the design and implementation of digital courseware. For this reason, this study examined the…
Whatever Happened to the New English?
ERIC Educational Resources Information Center
Tibbetts, Arn; Tibbetts, Charlene
1977-01-01
Late in the summer of 1966, the so-called Dartmouth Conference--the first large-scale international conference on English teaching--was convened at Dartmouth College. Here is an outline of the educational changes that took place during the development of the New English and a few reasons why it did not receive the broad approval that many…
ERIC Educational Resources Information Center
Wilcox, Bethany R.; Pollock, Steven J.
2014-01-01
Free-response research-based assessments, like the Colorado Upper-division Electrostatics Diagnostic (CUE), provide rich, fine-grained information about students' reasoning. However, because of the difficulties inherent in scoring these assessments, the majority of the large-scale conceptual assessments in physics are multiple choice. To increase…
School Finance Reform: Do Equalized Expenditures Imply Equalized Teacher Salaries?
ERIC Educational Resources Information Center
Streams, Meg; Butler, J. S.; Cowen, Joshua; Fowles, Jacob; Toma, Eugenia F.
2011-01-01
Kentucky is a poor, relatively rural state that contrasts greatly with the relatively urban and wealthy states typically the subject of education studies employing large-scale administrative data. For this reason, Kentucky's experience of major school finance and curricular reform is highly salient for understanding teacher labor market dynamics.…
Working with Large-Scale Climate Surveys: Reducing Data Complexity to Gain New Insights
ERIC Educational Resources Information Center
Chatman, Steve
2010-01-01
Although there is agreement that graduating students should be able to function effectively in an increasingly diverse society, there is reasonable difference of opinion regarding how that goal should be accomplished and how progress should be measured. The most pervasive and appealing conventional wisdom is that positive attitudes and behaviors…
Purposes and Effects of Lying.
ERIC Educational Resources Information Center
Hample, Dale
Three exploratory studies were aimed at describing the purposes of lies and the consequences of lying. Data were collected through a partly open-ended questionnaire, a content analysis of several tape-recorded interviews, and a large-scale survey. The results showed that two of every three lies were told for selfish reasons, while three of every…
The Way Deans Run Their Faculties in Indonesian Universities
ERIC Educational Resources Information Center
Ngo, Jenny; de Boer, Harry; Enders, Jurgen
2014-01-01
Using the theory of reasoned action in combination with the Competing Values Framework of organizational leadership, our study examines how deans at Indonesian universities lead and manage their faculties. Based on a large-scale survey with responses from more than 200 Indonesian deans, the study empirically identifies a number of deanship styles:…
A large scale test of the gaming-enhancement hypothesis
Wang, John C.
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035
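The Bayesian model comparison described above weighs evidence for the null against the gaming-enhancement hypothesis via a Bayes factor. A minimal sketch using the common BIC approximation to the Bayes factor; the likelihood values and parameter counts below are hypothetical, and the study's actual analysis may have used a different prior specification.

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion (lower is better):
    BIC = k*ln(n) - 2*ln(L), for k free parameters and n observations."""
    return k * math.log(n) - 2.0 * log_likelihood

def approx_bayes_factor_01(loglik_null, k_null, loglik_alt, k_alt, n):
    """BF_01 (evidence for the null over the alternative) via the
    BIC approximation: BF_01 ~= exp((BIC_alt - BIC_null) / 2)."""
    delta = bic(loglik_alt, k_alt, n) - bic(loglik_null, k_null, n)
    return math.exp(delta / 2.0)

# Hypothetical fits: the alternative model adds one gaming predictor
# but barely improves the likelihood, so the null is favored (BF_01 > 1),
# mirroring the qualitative pattern the abstract reports.
bf01 = approx_bayes_factor_01(loglik_null=-2500.0, k_null=2,
                              loglik_alt=-2499.5, k_alt=3, n=1847)
```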
NASA Astrophysics Data System (ADS)
Wallace, Colin; Prather, Edward; Duncan, Douglas
2011-10-01
We recently completed a large-scale, systematic study of general education introductory astronomy students' conceptual and reasoning difficulties related to cosmology. As part of this study, we analyzed a total of 4359 surveys (pre- and post-instruction) containing students' responses to questions about the Big Bang, the evolution and expansion of the universe, using Hubble plots to reason about the age and expansion rate of the universe, and using galaxy rotation curves to infer the presence of dark matter. We also designed, piloted, and validated a new suite of five cosmology Lecture-Tutorials. We found that students who use the new Lecture-Tutorials can achieve larger learning gains than their peers who did not. This material is based in part upon work supported by the National Science Foundation under Grant Nos. 0833364 and 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
NASA Astrophysics Data System (ADS)
Wallace, Colin Scott; Prather, E. E.; Duncan, D. K.; Collaboration of Astronomy Teaching Scholars CATS
2012-01-01
We recently completed a large-scale, systematic study of general education introductory astronomy students’ conceptual and reasoning difficulties related to cosmology. As part of this study, we analyzed a total of 4359 surveys (pre- and post-instruction) containing students’ responses to questions about the Big Bang, the evolution and expansion of the universe, using Hubble plots to reason about the age and expansion rate of the universe, and using galaxy rotation curves to infer the presence of dark matter. We also designed, piloted, and validated a new suite of five cosmology Lecture-Tutorials. We found that students who use the new Lecture-Tutorials can achieve larger learning gains than their peers who did not. This material is based in part upon work supported by the National Science Foundation under Grant Nos. 0833364 and 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
NASA Technical Reports Server (NTRS)
Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.
1990-01-01
An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.
Impact of entrainment on cloud droplet spectra: theory, observations, and modeling
NASA Astrophysics Data System (ADS)
Grabowski, W.
2016-12-01
Understanding the impact of entrainment and mixing on the microphysical properties of warm boundary layer clouds is an important aspect of the representation of such clouds in large-scale models of weather and climate. Entrainment leads to a reduction of the liquid water content in agreement with fundamental thermodynamics, but its impact on the droplet spectrum is difficult to quantify in observations and modeling. For in-situ (e.g., aircraft) observations, it is impossible to follow air parcels and observe the processes that lead to changes of the droplet spectrum in different regions of a cloud. For similar reasons, traditional modeling methodologies (e.g., the Eulerian large eddy simulation) are not useful either. Moreover, both observations and modeling can resolve only a relatively narrow range of spatial scales. Theory, typically focusing on differences between the idealized concepts of homogeneous and inhomogeneous mixing, is also of limited use for the multiscale turbulent mixing between a cloud and its environment. This presentation will illustrate the above points and argue that Lagrangian large-eddy simulation with an appropriate subgrid-scale scheme may provide key insights and eventually lead to novel parameterizations for large-scale models.
A study on large-scale nudging effects in regional climate model simulation
NASA Astrophysics Data System (ADS)
Yhang, Yoo-Bin; Hong, Song-You
2011-05-01
The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data are used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, the scale selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated to demonstrate the impact of SSBC on precipitation in detail. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral, without a discernible advantage. Although errors in the large-scale circulation for both 2000 and 2004 are reduced by the SSBC method, the impact on simulated precipitation is found to be negative in the summer of 2000 and positive in the summer of 2004. One possible reason for this difference is that precipitation in the summer of 2004 is characterized by strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in the modeled atmosphere.
Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR
NASA Astrophysics Data System (ADS)
Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.
2017-12-01
Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional- to local-scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local-scales where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local-scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after real time and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: the TMPA provides reasonable performance at monthly scales, although it is shown to have a precipitation-rate-dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Zanni, Markella V; Fitch, Kathleen; Rivard, Corinne; Sanchez, Laura; Douglas, Pamela S; Grinspoon, Steven; Smeaton, Laura; Currier, Judith S; Looby, Sara E
2017-03-01
Women's under-representation in HIV and cardiovascular disease (CVD) research suggests a need for novel strategies to ensure robust representation of women in HIV-associated CVD research. To elicit perspectives on CVD research participation among a community sample of women with or at risk for HIV, and to apply acquired insights toward the development of an evidence-based campaign empowering older women with HIV to participate in a large-scale CVD prevention trial. In a community-based setting, we surveyed 40 women with or at risk for HIV about factors which might facilitate or impede engagement in CVD research. We applied insights derived from these surveys into the development of the Follow YOUR Heart campaign, educating women about HIV-associated CVD and empowering them to learn more about a multi-site HIV-associated CVD prevention trial: REPRIEVE. Endorsed best methods for learning about a CVD research study included peer-to-peer communication (54%), provider communication (46%) and video-based communication (39%). Top endorsed non-monetary reasons for participating in research related to gaining information (63%) and helping others (47%). Top endorsed reasons for not participating related to lack of knowledge about studies (29%) and lack of request to participate (29%). Based on survey results, the REPRIEVE Follow YOUR Heart campaign was developed. Interwoven campaign components (print materials, video, web presence) offer provider-based information/knowledge, peer-to-peer communication, and empowerment to learn more. Campaign components reflect women's self-identified motivations for research participation - education and altruism. Investigation of factors influencing women's participation in HIV-associated CVD research may be usefully applied to develop evidence-based strategies for enhancing women's enrollment in disease-specific large-scale trials. If proven efficacious, such strategies may enhance conduct of large-scale research studies across disciplines.
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1980-01-01
An evolutionary Satellite Power Systems development plan was prepared. Planning analysis was directed toward the evolution of a scenario that met the stated objectives, was technically possible and economically attractive, and took into account constraining considerations, such as requirements for very large scale end-to-end demonstration in a compressed time frame, the relative cost/technical merits of ground testing versus space testing, and the need for large mass flow capability to low Earth orbit and geosynchronous orbit at reasonable cost per pound.
Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7
NASA Astrophysics Data System (ADS)
Walker, R.
1984-12-01
The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model, using mechanical and structural considerations rather than purely acoustic absorption criteria, are described, and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. They also show that, at least for the relatively well understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large-scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details, because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.
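The similarity rules behind such acoustic scale models are simple to state: in a 1:n model, wavelengths shrink by the factor n, so test frequencies must rise by n, and reverberation times ideally fall by n (excess air absorption at the scaled-up frequencies is what spoils this in practice). A minimal sketch of those two relations, with illustrative numbers not taken from the record above:

```python
def model_frequency(full_scale_hz, scale_factor):
    """In a 1:n acoustic scale model, wavelengths shrink by n, so the
    corresponding test frequency is f_model = n * f_full."""
    return scale_factor * full_scale_hz

def model_rt60(full_scale_rt60_s, scale_factor):
    """Ideal scaling of reverberation time: RT60_model = RT60_full / n.
    (Real models deviate because air absorption does not scale, which
    is why dried air or nitrogen is often used in model measurements.)"""
    return full_scale_rt60_s / scale_factor

# Example: a 1:8 model of a studio. A 1 kHz full-scale band is tested
# at 8 kHz, and a 2 s full-scale RT60 corresponds to 0.25 s in the model.
f_test = model_frequency(1000.0, 8)
rt_model = model_rt60(2.0, 8)
```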
ERIC Educational Resources Information Center
Xianzuo, Fan
2013-01-01
Beginning in the late 1990s and especially since 2000, a new round of large-scale school consolidation has been introduced in rural communities in China. What is the background of this policy initiative? How has it been introduced and implemented? This article examines these issues.
Dealing with Big Numbers: Representation and Understanding of Magnitudes outside of Human Experience
ERIC Educational Resources Information Center
Resnick, Ilyse; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Being able to estimate quantity is important in everyday life and for success in the STEM disciplines. However, people have difficulty reasoning about magnitudes outside of human perception (e.g., nanoseconds, geologic time). This study examines patterns of estimation errors across temporal and spatial magnitudes at large scales. We evaluated the…
Symposium on Documentation Planning in Developing Countries at Bad Godesberg, 28-30 November 1967.
ERIC Educational Resources Information Center
German Foundation for International Development, Bonn (West Germany).
One reason given for the failure of the large-scale efforts in the decade 1955-1965 to increase significantly the rate of economic and technological growth in the "developing" countries of the world has been insufficient utilization of existing information essential to this development. Motivated by this belief and the opinion that this…
Code of Federal Regulations, 2011 CFR
2011-07-01
..., for a line right-of-way in excess of 100 feet in width or for a structure or facility right-of-way of over 10,000 square feet must state the reasons why the larger right-of-way is required. Rights-of-way... drawing on a scale sufficiently large to show clearly their dimensions and relative positions. When two or...
Design Tools for Evaluating Multiprocessor Programs
1976-07-01
than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the...speed, redundancy, (in)efficiency, resource utilization, and economies of the components. [Browne 73, Lehman 66] 6. How can the system be scheduled...measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components
Some Positive Aspects of a Three-Part Lesson
ERIC Educational Resources Information Center
Pepper, Mark
2011-01-01
This author agrees with the sentiments of John Hibbs' article "Was there ever any point to the three-part lesson?" (MT219). In particular the author fully supports a flexible approach to the structure of lessons. There are two interesting questions that arise from Hibbs' article: (1) Are there reasons for the large-scale adherence of teachers to…
More than Motherhood: Reasons for Becoming a Family Day Care Provider
ERIC Educational Resources Information Center
Armenia, Amy B.
2009-01-01
This article examines motivations for entering family day care work as they relate to responsibilities of motherhood and the prominence of these motivations for the women providing day care within and across groups of workers. Using data from a large-scale representative survey of family day care workers in Illinois, the author examines the range…
An Analysis of Secondary Teachers' Reasoning with Participatory Sensing Data
ERIC Educational Resources Information Center
Gould, Robert; Bargagliotti, Anna; Johnson, Terri
2017-01-01
Participatory sensing is a data collection method in which communities of people collect and share data to investigate large-scale processes. These data have many features often associated with the big data paradigm: they are rich and multivariate, include non-numeric data, and are collected as determined by an algorithm rather than by traditional…
Deeper Look at Student Learning of Quantum Mechanics: The Case of Tunneling
ERIC Educational Resources Information Center
McKagan, S. B.; Perkins, K. K.; Wieman, C. E.
2008-01-01
We report on a large-scale study of student learning of quantum tunneling in four traditional and four transformed modern physics courses. In the transformed courses, which were designed to address student difficulties found in previous research, students still struggle with many of the same issues found in other courses. However, the reasons for…
Psychosocial Profiles of Delinquent and Nondelinquent Participants in a Sports Program.
ERIC Educational Resources Information Center
Yiannakis, Andrew
This study attempted to find reasons for the large proportion of dropouts in the federal government's National Summer Youth Sports Program. Selected scales of the Jesness Inventory (value orientation, alienation, denial, and occupational aspiration) were administered at the beginning of the program to 66 11-year-old boys enrolled in a 1971 program…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-23
... Integrated Circuit Semiconductor Chips and Products Containing the Same; Notice of Commission Decision Not To... semiconductor chips and products containing same by reason of infringement of certain claims of U.S. Patent Nos. 5,933,364 and 6,834,336. The complaint further alleges the existence of a domestic industry. The...
NASA Astrophysics Data System (ADS)
Firat, Mehmet
2017-04-01
In the past, distance education was used as a method to meet the educational needs of citizens with limited options to attend an institution of higher education. Nowadays, it has become irreplaceable in higher education thanks to developments in instructional technology. But the question of why students choose distance education is still important. The purpose of this study was to determine Turkish students' reasons for choosing distance education and to investigate how these reasons differ depending on their financial circumstances. The author used a Chi-squared Automatic Interaction Detector (CHAID) analysis to determine 18,856 Turkish students' reasons for choosing distance education. Results of the research revealed that Turkish students chose distance education not because of geographical limitations, family-related problems or economic difficulties, but for such reasons as already being engaged in their profession, increasing their knowledge, and seeking promotion to a better position.
Engineering management of large scale systems
NASA Technical Reports Server (NTRS)
Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.
1989-01-01
The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.
How can we study reasoning in the brain?
Papo, David
2015-01-01
The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon, and experimental designs are discussed. PMID:25964755
Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V
2011-01-01
Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data- and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.
Liou, Shwu-Ru; Liu, Hsiu-Chen; Tsai, Hsiu-Min; Tsai, Ying-Huang; Lin, Yu-Ching; Chang, Chia-Hao; Cheng, Ching-Yu
2016-03-01
The purpose of the study was to develop and psychometrically test the Nurses Clinical Reasoning Scale. Clinical reasoning is an essential skill for providing safe and quality patient care. Identifying pre-graduates' and nurses' needs and designing training courses to improve their clinical reasoning competence is a critical task. However, there is no instrument focusing on clinical reasoning in the nursing profession. A cross-sectional design was used. This study included the development of the scale; a pilot study that preliminarily tested the readability and reliability of the developed scale; and a main study that implemented and tested the psychometric properties of the developed scale. The Nurses Clinical Reasoning Scale was developed based on the Clinical Reasoning Model. The scale includes 15 items using a five-point Likert scale. Data were collected from 2013 to 2014. Two hundred and fifty-one participants, comprising clinical nurses and nursing pre-graduates, completed and returned the questionnaires in the main study. The instrument was tested for internal consistency and test-retest reliability. Its validity was tested with content, construct and known-groups validity. One factor emerged from the factor analysis. The known-groups validity was confirmed. The Cronbach's alpha for the entire instrument was 0.9. The reliability and validity of the Nurses Clinical Reasoning Scale were supported. The scale is a useful tool and can be easily administered for the self-assessment of clinical reasoning competence of clinical nurses and future baccalaureate nursing graduates. Study limitations and further recommendations are discussed. © 2015 John Wiley & Sons Ltd.
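The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from a matrix of item scores. A minimal sketch with made-up Likert data (the 15-item scale itself is not reproduced here):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for items[i][j] = respondent j's score on item i.
    alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total scores)),
    using population variances throughout."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(row) for row in items) / var(totals))

# Three hypothetical five-point Likert items answered by five respondents;
# the items agree closely, so alpha comes out near 1.
alpha = cronbach_alpha([[1, 2, 3, 4, 5],
                        [1, 2, 3, 4, 5],
                        [2, 2, 3, 4, 5]])
```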
Size dependent fragmentation of argon clusters in the soft x-ray ionization regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gisselbrecht, Mathieu; Lindgren, Andreas; Burmeister, Florian
Photofragmentation of argon clusters of average size ranging from 10 up to 1000 atoms is studied using soft x-ray radiation below the 2p threshold and multicoincidence mass spectroscopy technique. For small clusters (
NASA Astrophysics Data System (ADS)
Franci, Luca; Landi, Simone; Verdini, Andrea; Matteini, Lorenzo; Hellinger, Petr
2018-01-01
Properties of the turbulent cascade from fluid to kinetic scales in collisionless plasmas are investigated by means of large-size 3D hybrid (fluid electrons, kinetic protons) particle-in-cell simulations. Initially isotropic Alfvénic fluctuations rapidly develop a strongly anisotropic turbulent cascade, mainly in the direction perpendicular to the ambient magnetic field. The omnidirectional magnetic field spectrum shows a double power-law behavior over almost two decades in wavenumber, with a Kolmogorov-like index at large scales, a spectral break around ion scales, and a steepening at sub-ion scales. Power laws are also observed in the spectra of the ion bulk velocity, density, and electric field, at both magnetohydrodynamic (MHD) and kinetic scales. Despite the complex structure, the omnidirectional spectra of all fields at ion and sub-ion scales are in remarkable quantitative agreement with those of a 2D simulation with similar physical parameters. This provides a partial, a posteriori validation of the 2D approximation at kinetic scales. Conversely, at MHD scales, the spectra of the density and of the velocity (and, consequently, of the electric field) exhibit differences between the 2D and 3D cases. Although they can be partly ascribed to the lower spatial resolution, the main reason is likely the larger importance of compressible effects in the full 3D geometry. Our findings are also in remarkable quantitative agreement with solar wind observations.
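The power-law spectral indices reported above (Kolmogorov-like at large scales, steeper at sub-ion scales) are typically estimated as slopes in log-log space. A minimal sketch under the assumption of a clean single-power-law range; the synthetic spectrum below is illustrative, not simulation data:

```python
import math

def spectral_index(k, E):
    """Least-squares slope of log E versus log k: for E(k) ~ k**p this
    recovers the power-law index p (e.g. -5/3 for a Kolmogorov range)."""
    x = [math.log(ki) for ki in k]
    y = [math.log(Ei) for Ei in E]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Synthetic inertial-range spectrum with an exact -5/3 slope.
k = [2.0 ** i for i in range(1, 11)]
E = [ki ** (-5.0 / 3.0) for ki in k]
p = spectral_index(k, E)
```

In practice one fits separately on either side of the spectral break near ion scales to obtain the two indices.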
Improving the distinguishable cluster results: spin-component scaling
NASA Astrophysics Data System (ADS)
Kats, Daniel
2018-06-01
The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.
Chapter 1: Biomedical knowledge integration.
Payne, Philip R O
2012-01-01
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. 
In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems.
A new method of presentation the large-scale magnetic field structure on the Sun and solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field that occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve time variations of spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be particularly useful when direct magnetograph observations are unavailable, for example because of bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with spacecraft in situ observations and geomagnetic activity. We attempt to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
Exploring Entrainment Patterns of Human Emotion in Social Media
He, Saike; Zheng, Xiaolong; Zeng, Daniel; Luo, Chuan; Zhang, Zhu
2016-01-01
Emotion entrainment, which is generally defined as the synchronous convergence of human emotions, performs many important social functions. However, what the specific mechanisms of emotion entrainment are beyond in-person interactions, and how human emotions evolve under different entrainment patterns in large-scale social communities, are still unknown. In this paper, we aim to examine the massive emotion entrainment patterns and understand the underlying mechanisms in the context of social media. As modeling emotion dynamics on a large scale is often challenging, we elaborate a pragmatic framework to characterize and quantify the entrainment phenomenon. By applying this framework on the datasets from two large-scale social media platforms, we find that the emotions of online users entrain through social networks. We further uncover that online users often form their relations via dual entrainment, while maintaining them through single entrainment. Remarkably, the emotions of online users are more convergent in nonreciprocal entrainment. Building on these findings, we develop an entrainment augmented model for emotion prediction. Experimental results suggest that entrainment patterns inform emotion proximity in dyads, and encoding their associations promotes emotion prediction. This work can further help us to understand the underlying dynamic process of large-scale online interactions and make more reasonable decisions regarding emergency situations, epidemic diseases, and political campaigns in cyberspace. PMID:26953692
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
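The per-dimension bit generation and bit ranking described above can be sketched roughly as follows. This is an illustrative assumption of the general idea, not the authors' MCR algorithm: thresholding each dimension at its median yields one bit per dimension, and per-dimension variance stands in for the paper's cost function when ranking and selecting bits.

```python
import numpy as np

# Toy sketch of short binary codes via per-dimension bits plus bit
# selection. The ranking score below (variance) is a stand-in for the
# paper's min-cost ranking; only the overall structure is illustrated.
def short_binary_codes(X, n_bits):
    bits = (X > np.median(X, axis=0)).astype(np.uint8)  # one bit per dimension
    score = X.var(axis=0)                               # stand-in ranking score
    top = np.argsort(score)[::-1][:n_bits]              # keep top-ranked bits
    return bits[:, top], top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64)) * np.arange(1, 65)  # dimensions of varying spread
codes, selected = short_binary_codes(X, 16)        # 16-bit code per sample
```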
NASA Astrophysics Data System (ADS)
Li, Lee; Liu, Lun; Liu, Yun-Long; Bin, Yu; Ge, Ya-Feng; Lin, Fo-Chang
2014-01-01
Atmospheric air diffuse plasmas have enormous application potential in various fields of science and technology. Without a dielectric barrier, generating large-scale diffuse air plasmas remains a challenging issue. This paper discusses and analyses the formation mechanism of cold homogeneous plasma. It is proposed that generating stable diffuse atmospheric plasmas in open air must meet three conditions: high transient power with low average power, excitation in a low average E-field with a locally high E-field region, and multiple overlapping electron avalanches. Accordingly, an experimental configuration for generating large-scale barrier-free diffuse air plasmas is designed. Based on runaway electron theory, a low duty-ratio, high-voltage repetitive nanosecond pulse generator is chosen as the discharge excitation source. Using wire electrodes with a small radius of curvature, gaps with a highly non-uniform E-field are structured. Experimental results show that volume-scalable, barrier-free, homogeneous non-thermal air plasmas have been obtained in the gaps between the copper-wire electrodes. The area of the cold air plasmas reaches hundreds of square centimeters. The proposed formation conditions for large-scale barrier-free diffuse air plasmas are shown to be reasonable and feasible.
A study on the required performance of a 2G HTS wire for HTS wind power generators
NASA Astrophysics Data System (ADS)
Sung, Hae-Jin; Park, Minwon; Go, Byeong-Soo; Yu, In-Keun
2016-05-01
YBCO and REBCO coated conductor (2G) materials are being developed for their superior performance at high magnetic field and temperature. Power system applications based on high temperature superconducting (HTS) 2G wire technology are attracting attention, including large-scale wind power generators. In particular, an HTS generator is suggested as a key technology for solving problems with the foundations and mechanical structure of offshore wind turbines that arise from the large diameter and heavy weight of conventional generators. Many researchers have tried to develop feasible large-scale HTS wind power generator technologies. In this paper, a study on the required performance of a 2G HTS wire for large-scale wind power generators is discussed. A 12 MW class large-scale wind turbine and an HTS generator are designed using 2G HTS wire. The total length of 2G HTS wire for the 12 MW HTS generator is estimated, and the essential prerequisites of a 2G HTS wire based generator are described. The magnetic field distributions of a pole module are illustrated, and the mechanical stress and strain of the pole module are analysed. Finally, a reasonable price for 2G HTS wire for commercialization of HTS generators is suggested, reflecting the results of the electromagnetic and mechanical analyses of the generator.
A flow resistance model for assessing the impact of vegetation on flood routing mechanics
NASA Astrophysics Data System (ADS)
Katul, Gabriel G.; Poggi, Davide; Ridolfi, Luca
2011-08-01
The specification of a flow resistance factor to account for vegetative effects in the Saint-Venant equation (SVE) remains uncertain and is a subject of active research in flood routing mechanics. Here, an analytical model for the flow resistance factor is proposed for submerged vegetation, where the water depth is commensurate with the canopy height and the roughness Reynolds number is sufficiently large so as to ignore viscous effects. The analytical model predicts that the resistance factor varies with three canonical length scales: the adjustment length scale that depends on the foliage drag and leaf area density, the canopy height, and the water level. These length scales can reasonably be inferred from a range of remote sensing products making the proposed flow resistance model eminently suitable for operational flood routing. Despite the numerous simplifications, agreement between measured and modeled resistance factors and bulk velocities is reasonable across a range of experimental and field studies. The proposed model asymptotically recovers the flow resistance formulation when the water depth greatly exceeds the canopy height. This analytical treatment provides a unifying framework that links the resistance factor to a number of concepts and length scales already in use to describe canopy turbulence. The implications of the coupling between the resistance factor and the water depth on solutions to the SVE are explored via a case study, which shows a reasonable match between empirical design standard and theoretical predictions.
ERIC Educational Resources Information Center
Ward-King, Jessica; Cohen, Ira L.; Penning, Henderika; Holden, Jeanette J. A.
2010-01-01
The Autism Diagnostic Interview-Revised is one of the "gold standard" diagnostic tools for autism spectrum disorders. It is traditionally administered face-to-face. Cost and geographical concerns constrain the employment of the ADI-R for large-scale research projects. The telephone interview is a reasonable alternative, but has not yet been…
Architecting the Safety Assessment of Large-scale Systems Integration
2009-12-01
Electromagnetic Radiation to Ordnance (HERO), Hazards of Electromagnetic Radiation to Fuel (HERF). The main reason that this particular safety study... radiation, high voltage electric shocks and explosives safety. 1. Radiation Hazards (RADHAZ) RADHAZ describes the hazards of electromagnetic radiation...OP3565/NAVAIR 16-1-529 [19 and 20], these hazards are segregated as follows: Hazards of Electromagnetic
Implementation of Large-Scale Science Curricula: A Study in Seven European Countries
ERIC Educational Resources Information Center
Pilling, G. M.; Waddington, D. J.
2005-01-01
The Salters Chemistry courses, context-led curricula for 13-16 and 17-18 year old students, first developed by the Science Education Group at the University of York in the UK, have now been translated and/or adapted in seven other European countries. This paper describes and discusses the different reasons for taking up the courses, the ways in…
A novel heuristic algorithm for capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre
2017-09-01
The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity of large-scale problems. Consequently, new heuristic and metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS), with several specifically designed operators and features, to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm performs better on large-scale instances and has an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and obtained encouraging results in comparison with the company's current practice.
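A minimal baseline for the CVRP setting described above is a greedy nearest-neighbor construction that respects vehicle capacity. This is only an illustrative sketch of the problem; the paper's tabu search plus ALNS method with its specially designed operators is far more elaborate, and the coordinates, demands, and capacity here are made up.

```python
# Greedy capacity-respecting route construction (baseline sketch only).
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def greedy_cvrp(depot, coords, demands, capacity):
    unserved = set(demands)
    routes = []
    while unserved:
        load, pos, route = 0, depot, []
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break  # vehicle full (or nothing fits): start a new route
            nxt = min(feasible, key=lambda c: dist(pos, coords[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = coords[nxt]
            unserved.discard(nxt)
        routes.append(route)
    return routes

coords = {1: (0, 2), 2: (3, 1), 3: (-1, -2), 4: (2, 2)}
demands = {1: 4, 2: 3, 3: 5, 4: 2}
routes = greedy_cvrp((0, 0), coords, demands, capacity=7)
```

A metaheuristic such as ALNS would then repeatedly destroy and repair such a solution to escape local optima.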
NASA Technical Reports Server (NTRS)
Akle, W.
1983-01-01
This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna, and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems applicable to LSS were selected from existing or on-going technology developments. Laser ranging and angulation systems, presently at breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground-to-orbit, and size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.
Bearman, Chris; Grunwald, Jared A; Brooks, Benjamin P; Owen, Christine
2015-03-01
Emergency situations are by their nature difficult to manage, and success in such situations is often highly dependent on effective team coordination. Breakdowns in team coordination can lead to significant disruption to an operational response. Breakdowns in coordination were explored in three large-scale bushfires in Australia: the Kilmore East fire, the Wangary fire, and the Canberra Firestorm. Data from these fires were analysed using a top-down and bottom-up qualitative analysis technique. Forty-four breakdowns in coordinated decision making were identified, which yielded 83 disconnects grouped into three main categories: operational, informational and evaluative. Disconnects were specific instances where differences in understanding existed between team members. The reasons why disconnects occurred were largely consistent across the three sets of data. In some cases multiple disconnects occurred in a temporal sequence, suggesting that disconnects can create states that are conducive to the occurrence of further disconnects. In terms of resolution, evaluative disconnects were nearly always resolved; however, operational and informational disconnects were rarely resolved effectively. The exploratory data analysis and discussion presented here represent the first systematic research to provide information about the reasons why breakdowns occur in emergency management, and present an account of how team processes can act to disrupt coordination and the operational response. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Cultural Differences in Justificatory Reasoning
ERIC Educational Resources Information Center
Soong, Hannah; Lee, Richard; John, George
2012-01-01
Justificatory reasoning, the ability to justify one's beliefs and actions, is an important goal of education. We develop a scale to measure the three forms of justificatory reasoning--absolutism, relativism, and evaluativism--before validating the scale across two cultures and domains. The results show that the scale possessed validity and…
More Reasons to be Straightforward: Findings and Norms for Two Scales Relevant to Social Anxiety
Rodebaugh, Thomas L.; Heimberg, Richard G.; Brown, Patrick J.; Fernandez, Katya C.; Blanco, Carlos; Schneier, Franklin R.; Liebowitz, Michael R.
2011-01-01
The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. PMID:21388781
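The reverse-scoring operation at issue above can be sketched as follows: a reverse-keyed Likert item is flipped before the scale total is computed. The 5-point range and item keys are illustrative only, not the actual SIAS or BFNE scoring keys.

```python
# Reverse-score a Likert item on a [low, high] response scale, then sum a
# scale total with reverse-keyed items flipped (illustrative keys only).
def reverse_score(item, low=1, high=5):
    return low + high - item

def scale_total(responses, reversed_items, low=1, high=5):
    return sum(
        reverse_score(v, low, high) if k in reversed_items else v
        for k, v in responses.items()
    )

responses = {"q1": 4, "q2": 2, "q3": 5}        # q2 is reverse-keyed
total = scale_total(responses, reversed_items={"q2"})
```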
Fabbrini, G; Abbruzzese, G; Barone, P; Antonini, A; Tinazzi, M; Castegnaro, G; Rizzoli, S; Morisky, D E; Lessi, P; Ceravolo, R
2013-11-01
Information about patients' adherence to therapy represents a primary issue in Parkinson's disease (PD) management. The aims of this study were to perform the linguistic validation of the Italian version of the self-rated 8-Item Morisky Medical Adherence Scale (MMAS-8) and to describe, in a sample of Italian patients affected by PD, the adherence to anti-Parkinson drug therapy and the association between adherence and some socio-demographic and clinical features. The MMAS-8 was translated into Italian by two independent Italian mother-tongue translators. The consensus version was then back-translated by an English mother-tongue translator. This translation process was followed by a consensus meeting between the authors of the translation and the investigators, and then by two comprehension tests. The translated version of the MMAS-8 scale was then administered at the baseline visit of the "REASON" study (Italian Study on the Therapy Management in Parkinson's disease: Motor, Non-Motor, Adherence and Quality Of Life Factors) in a large sample of PD patients. The final version of the MMAS-8 was easily understood. The mean ± SD MMAS-8 score was 6.1 ± 1.2. There were no differences in adherence to therapy in relation to disease severity, gender, educational level, or decision to change therapy. The Italian version of the MMAS-8, the key tool of the REASON study for assessing adherence to therapy, has been shown to be understandable to patients with PD. Patients enrolled in the REASON study showed medium therapy adherence.
Hussein, Shereen
2017-11-01
Demographic trends escalate the demands for formal long-term care (LTC) in the majority of the developed world. The LTC workforce is characterised by its very low wages, the actual scale of which is less well known. This article investigates the scale of poverty-pay in the feminised LTC sector and attempts to understand the perceived reasons behind persisting low wages in the sector. The analysis makes use of large national workforce pay data and a longitudinal survey of care workers, as well as interviews with key stakeholders in the sector. The analysis suggests that there are at least between 10 and 13% of care workers who are effectively being paid under the National Minimum Wage in England. Thematic qualitative analysis of 300 interviews with employers, care workers and service users highlight three key explanatory factors of low pay: the intrinsic nature of LTC work, the value of caring for older people, and marketisation and outsourcing of services. © 2017 John Wiley & Sons Ltd.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
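The first step described above, converting spatiotemporal traffic into a time-space matrix, can be sketched as follows (the link names and speed values are made up; in the paper such matrices are then fed to a CNN for feature extraction and prediction):

```python
import numpy as np

# Arrange per-link speed series into a 2D time-space "image":
# rows = time steps, columns = road links.
def to_time_space_image(speed_series):
    links = sorted(speed_series)
    image = np.array([speed_series[l] for l in links], dtype=float).T
    return image, links  # image shape: (n_timesteps, n_links)

series = {
    "link_a": [55.0, 50.0, 42.0],
    "link_b": [60.0, 58.0, 57.0],
    "link_c": [30.0, 28.0, 33.0],
}
image, links = to_time_space_image(series)
```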
Swan, G E; Carmelli, D; Dame, A; Rosenman, R H; Spielberger, C D
1992-05-01
The psychological correlates of the Rationality/Emotional Defensiveness Scale and its two subscales were examined in 1236 males and 863 females from the Western Collaborative Group Study. An additional 157 males and 164 females with some form of cancer other than of the skin were also included in this analysis. Characteristics measured included self-reported emotional control, anger expression, trait personality, depressive and neurotic symptomatology, Type A behavior, hostility, and social desirability. Results indicate that the Rationality/Emotional Defensiveness Scale is most strongly related to the suppression and control of emotions, especially anger. Scores on this scale also tend to be associated with less Type A behavior and hostility and with more social conformity. Analysis of the component subscale suggests that Antiemotionality, i.e. the extent to which an individual uses reason and logic to avoid interpersonally related emotions, is most strongly marked by the control of anger, while Rationality, i.e. the extent to which an individual uses reason and logic as a general approach to coping with the environment, is related to the control of anxiety and a higher level of trait curiosity. The psychological interpretation of the scale appears to be largely invariant across gender, unaffected by residualization of the total scale score for its association with Social Desirability, and, except for a few minor instances, unrelated to the diagnosis of cancer.
Design rules for quasi-linear nonlinear optical structures
NASA Astrophysics Data System (ADS)
Lytel, Richard; Mossman, Sean M.; Kuzyk, Mark G.
2015-09-01
The maximization of the intrinsic optical nonlinearities of quantum structures for ultrafast applications requires a spectrum scaling as the square of the energy eigenstate number or faster. This is a necessary condition for an intrinsic response approaching the fundamental limits. A second condition is a design generating eigenstates whose ground and lowest excited state probability densities are spatially separated to produce large differences in dipole moments while maintaining a reasonable spatial overlap to produce large off-diagonal transition moments. A structure whose design meets both conditions will necessarily have large first or second hyperpolarizabilities. These two conditions are fundamental heuristics for the design of any nonlinear optical structure.
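The spectral condition above, eigenenergies growing as the square of the state number or faster, is met exactly by the textbook infinite square well, for which E_n = n^2 E_1. This is only an illustration of the required scaling, not one of the structures the authors design:

```python
# Infinite square well eigenenergies scale exactly as n^2 times the
# ground-state energy E1 (illustrative check of the n^2 scaling condition).
def square_well_energy(n, E1=1.0):
    return E1 * n ** 2

energies = [square_well_energy(n) for n in range(1, 6)]
scaled = [E / n ** 2 for n, E in enumerate(energies, start=1)]  # constant E1
```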
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller watersheds and applying the calibrated parameters of the multi-site calibration to the entire watershed. It is anticipated that this case study can provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
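Multi-site evaluation of the kind described above typically compares observed and simulated flows at each gauge using a goodness-of-fit metric such as the Nash-Sutcliffe efficiency (NSE), widely used in SWAT studies; multi-site calibration seeks an acceptable score at every site rather than only at the outlet. A minimal sketch with made-up flow values and site names (the study's actual objective function and gauges differ):

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit; values near or below 0
# indicate the model is no better than the mean of the observations.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

sites = {
    "site_1": ([10.0, 12.0, 9.0, 15.0], [11.0, 11.5, 9.5, 14.0]),
    "site_2": ([3.0, 4.0, 3.5, 5.0], [3.2, 3.9, 3.6, 4.8]),
}
scores = {name: nse(obs, sim) for name, (obs, sim) in sites.items()}
```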
NASA Astrophysics Data System (ADS)
Orr, Matthew; Hopkins, Philip F.
2018-06-01
I will present a simple model of non-equilibrium star formation and its relation to the scatter in the Kennicutt-Schmidt relation and large-scale star formation efficiencies in galaxies. I will highlight the importance of a hierarchy of timescales, between the galaxy dynamical time, local free-fall time, the delay time of stellar feedback, and temporal overlap in observables, in setting the scatter of the observed star formation rates for a given gas mass. Further, I will talk about how these timescales (and their associated duty-cycles of star formation) influence interpretations of the large-scale star formation efficiency in reasonably star-forming galaxies. Lastly, the connection with galactic centers and out-of-equilibrium feedback conditions will be mentioned.
Synthetic carbohydrate: An aid to nutrition in the future
NASA Technical Reports Server (NTRS)
Berman, G. A. (Editor); Murashige, K. H. (Editor)
1973-01-01
The synthetic production of carbohydrate on a large scale is discussed. Three possible nonagricultural methods of making starch are presented in detail and discussed. The simplest of these, the hydrolysis of cellulose wastes to glucose followed by polymerization to starch, appears a reasonable and economic supplement to agriculture at the present time. The conversion of fossil fuels to starch was found to be not competitive with agriculture at the present time, but tractable enough to allow a reasonable plant design to be made. A reconstruction of the photosynthetic process using isolated enzyme systems proved technically much more difficult than either of the other two processes. Particular difficulties relate to the replacement of expensive energy carrying compounds, separation of similar materials, and processing of large reactant volumes. Problem areas were pinpointed, and technological progress necessary to permit such a system to become practical is described.
Friction in debris flows: inferences from large-scale flume experiments
Iverson, Richard M.; LaHusen, Richard G.; ,
1993-01-01
A recently constructed flume, 95 m long and 2 m wide, permits systematic experimentation with unsteady, nonuniform flows of poorly sorted geological debris. Preliminary experiments with water-saturated mixtures of sand and gravel show that they flow in a manner consistent with Coulomb frictional behavior. The Coulomb flow model of Savage and Hutter (1989, 1991), modified to include quasi-static pore-pressure effects, predicts flow-front velocities and flow depths reasonably well. Moreover, simple scaling analyses show that grain friction, rather than liquid viscosity or grain collisions, probably dominates shear resistance and momentum transport in the experimental flows. The same scaling indicates that grain friction is also important in many natural debris flows.
Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman
2016-01-01
The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...
Regional Mass Atrocity Prevention and Response Operations in a World of Overlapping Boundaries
2012-05-17
large scale, to include mass sexual violence, murder, genocide, displacement, or starvation. One might reasonably ask what the criteria are for...until well after the genocide was over. If America had intervened in Rwanda, it would likely have required a regional approach as militant groups used...civilians; these support her case. Likewise, Marchak has pointed out that the Genocide
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
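As a rough illustration of the kind of simulation described (not the authors' exact procedure), the sketch below recovers Thurstone Case V scale values from binomially sampled paired-comparison choices and reports the root-mean-square error of the recovered scale. Shrinking the sampling size inflates the error, as the abstract reports.

```python
import random
from statistics import NormalDist

nd = NormalDist()

def simulate_scale_error(true_values, sample_size, reps=200, seed=1):
    """RMS error of Thurstone Case V scale values recovered from
    binomially sampled paired-comparison choices."""
    random.seed(seed)
    n = len(true_values)
    sq_err, count = 0.0, 0
    for _ in range(reps):
        z = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # True choice probability under Case V (unit variances).
                p_true = nd.cdf((true_values[i] - true_values[j]) / 2 ** 0.5)
                wins = sum(random.random() < p_true for _ in range(sample_size))
                # Clip observed proportions to avoid infinite z-scores.
                p_hat = min(max(wins / sample_size, 0.5 / sample_size),
                            1.0 - 0.5 / sample_size)
                z[i][j] = nd.inv_cdf(p_hat) * 2 ** 0.5
        est = [sum(row) / n for row in z]      # Case V scale estimates
        shift = sum(est) / n - sum(true_values) / n
        for i in range(n):
            sq_err += (est[i] - shift - true_values[i]) ** 2
            count += 1
    return (sq_err / count) ** 0.5

# Fewer judgments per pair -> larger errors on the derived scaled values.
small_sample = simulate_scale_error([0.0, 0.5, 1.0, 1.5], sample_size=10)
large_sample = simulate_scale_error([0.0, 0.5, 1.0, 1.5], sample_size=200)
```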
TRACING THE MAGNETIC FIELD MORPHOLOGY OF THE LUPUS I MOLECULAR CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, G. A. P.; Alves, F. O., E-mail: franco@fisica.ufmg.br, E-mail: falves@mpe.mpg.de
2015-07-01
Deep R-band CCD linear polarimetry collected for fields with lines of sight toward the Lupus I molecular cloud is used to investigate the properties of the magnetic field within this molecular cloud. The observed sample contains about 7000 stars, almost 2000 of them with a polarization signal-to-noise ratio larger than 5. These data cover almost the entire main molecular cloud and also sample two diffuse infrared patches in the neighborhood of Lupus I. The large-scale pattern of the plane-of-sky projection of the magnetic field is perpendicular to the main axis of Lupus I, but parallel to the two diffuse infrared patches. A detailed analysis of our polarization data combined with the Herschel/SPIRE 350 μm dust emission map shows that the principal filament of Lupus I is constituted by three main clumps that are acted on by magnetic fields that have different large-scale structural properties. These differences may be the reason for the observed distribution of pre- and protostellar objects along the molecular cloud and the cloud’s apparent evolutionary stage. On the other hand, assuming that the magnetic field is composed of large-scale and turbulent components, we find that the latter is rather similar in all three clumps. The estimated plane-of-sky component of the large-scale magnetic field ranges from about 70 to 200 μG in these clumps. The intensity increases toward the Galactic plane. The mass-to-magnetic flux ratio is much smaller than unity, implying that Lupus I is magnetically supported on large scales.
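For context on the closing claim: the mass-to-flux ratio is conventionally quoted in units of the critical value 1/(2π√G). A widely used observational shortcut is Crutcher's relation, λ ≈ 7.6×10⁻²¹ N(H₂)[cm⁻²]/B[μG]; this is an assumption here, since the abstract does not give the paper's actual estimator, and the column density below is illustrative, not a measurement from the paper.

```python
# Mass-to-flux ratio in units of the critical value 1/(2*pi*sqrt(G)),
# via Crutcher's shortcut. lambda < 1 means the cloud is subcritical,
# i.e. magnetically supported, as the abstract concludes for Lupus I.

def mass_to_flux_ratio(n_h2_cm2, b_uG):
    """Crutcher's relation: lambda = 7.6e-21 * N(H2)[cm^-2] / B[uG]."""
    return 7.6e-21 * n_h2_cm2 / b_uG

# Illustrative numbers: N(H2) = 1e21 cm^-2 and B = 100 uG, the latter
# within the abstract's quoted 70-200 uG plane-of-sky range.
lam = mass_to_flux_ratio(1.0e21, 100.0)
```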
Lebedev, Alexander V; Nilsson, Jonna; Lövdén, Martin
2018-07-01
Researchers have proposed that solving complex reasoning problems, a key indicator of fluid intelligence, involves the same cognitive processes as solving working memory tasks. This proposal is supported by an overlap of the functional brain activations associated with the two types of tasks and by high correlations between interindividual differences in performance. We replicated these findings in 53 older participants but also showed that solving reasoning and working memory problems benefits from different configurations of the functional connectome and that this dissimilarity increases with a higher difficulty load. Specifically, superior performance in a typical working memory paradigm (n-back) was associated with upregulation of modularity (increased between-network segregation), whereas performance in the reasoning task was associated with effective downregulation of modularity. We also showed that working memory training promotes task-invariant increases in modularity. Because superior reasoning performance is associated with downregulation of modular dynamics, training may thus have fostered an inefficient way of solving the reasoning tasks. This could help explain why working memory training does little to promote complex reasoning performance. The study concludes that complex reasoning abilities cannot be reduced to working memory and suggests the need to reconsider the feasibility of using working memory training interventions to attempt to achieve effects that transfer to broader cognition.
Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco
2016-04-01
Since the first jet X-ray detections sixteen years ago, the adopted paradigm for the X-ray emission has been the IC/CMB model, which requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc-scale jets. I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the IC/CMB model. Our work constrains the jet Lorentz factors to less than a few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, will constitute a paradigm shift in our understanding of powerful large-scale jets and their role in the universe. Two important findings emerging from our work will also be discussed: (i) the solid-angle-integrated luminosity of the large-scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet, and (ii) the large-scale jets are the main source of TeV photons in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.
Sánchez, R; Carreras, B A; van Milligen, B Ph
2005-01-01
The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
Bottiglione, F; Carbone, G
2015-01-14
The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, thus making it possible to treat the problem through a mean-field-like approach. The apparent contact angle at equilibrium is calculated in all wetting regimes, from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (a sinusoidal surface), and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed which relies only upon the micro-scale properties of the rough surface.
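The single-scale limits that the paper's mean-field results can be contrasted with follow from the classical textbook relations, Wenzel (cos θ* = r_W cos θ_Y, full wetting) and Cassie-Baxter (cos θ* = f(cos θ_Y + 1) − 1, partial wetting with wetted area fraction f). These formulas are standard, not the paper's numerical model, and the numbers below are illustrative.

```python
import math

def wenzel_angle(theta_y_deg, r_w):
    """Apparent angle in the full-wetting (Wenzel) state."""
    c = r_w * math.cos(math.radians(theta_y_deg))
    c = max(-1.0, min(1.0, c))   # clip to the physical range of cosine
    return math.degrees(math.acos(c))

def cassie_angle(theta_y_deg, f):
    """Apparent angle in the partial-wetting (Cassie-Baxter) state,
    f = wetted area fraction (0 < f <= 1)."""
    c = f * (math.cos(math.radians(theta_y_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# A hydrophobic surface (theta_Y = 110 deg): roughness amplifies the
# apparent angle in the Wenzel state, and a small wetted fraction in
# the Cassie state pushes it toward the 180 deg non-wetting limit.
w = wenzel_angle(110.0, r_w=1.8)
c = cassie_angle(110.0, f=0.1)
```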
Demonstration-Scale High-Cell-Density Fermentation of Pichia pastoris.
Liu, Wan-Cang; Zhu, Ping
2018-01-01
Pichia pastoris has been one of the most successful heterologous overexpression systems for generating proteins at large scale through high-cell-density fermentation. However, optimizing the conditions of large-scale high-cell-density fermentation for biochemistry and industrialization is usually a laborious and time-consuming process. Furthermore, it is often difficult to produce authentic proteins in large quantities, which is a major obstacle for analysis of functional and structural features and for industrial application. For these reasons, we have developed a protocol for efficient demonstration-scale high-cell-density fermentation of P. pastoris, which employs a new methanol-feeding strategy (the biomass-stat strategy) and a strategy of increased air pressure instead of pure oxygen supplementation. The protocol includes the three typical stages of glycerol batch fermentation (initial culture phase), glycerol fed-batch fermentation (biomass accumulation phase), and methanol fed-batch fermentation (induction phase), and allows direct online monitoring of fermentation conditions, including broth pH, temperature, DO, anti-foam generation, and feeding of glycerol and methanol. Using this protocol, production of the recombinant β-xylosidase of Lentinula edodes origin in 1000-L scale fermentation can reach ~900 mg/L or 9.4 mg/g cells (dry cell weight, intracellular expression), with a specific production rate of 0.1 mg/g/h and an average specific production rate of 0.081 mg/g/h. The methodology described in this protocol can be easily transferred to other systems and scaled up for a large number of proteins used for either scientific or commercial purposes.
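The reported figures can be cross-checked with simple arithmetic (an illustration, not part of the protocol): the titer divided by the per-cell yield gives the dry-cell density reached, and the yield divided by the average specific production rate gives the approximate length of the induction phase.

```python
# Consistency check on the abstract's numbers.
titer_mg_per_L = 900.0        # product titer
yield_mg_per_g = 9.4          # product per g dry cell weight (DCW)

# Implied cell density at harvest: ~96 g DCW/L, i.e. genuinely
# high-cell-density fermentation.
cell_density_g_per_L = titer_mg_per_L / yield_mg_per_g

# Implied induction length from the average specific production rate:
# ~116 h of methanol fed-batch.
avg_rate_mg_per_g_h = 0.081
induction_h = yield_mg_per_g / avg_rate_mg_per_g_h
```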
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, and explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, among which a new eddy-viscosity model, based on the vortex-stretching magnitude, that is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
Convective organization in the Pacific ITCZ: Merging OLR, TOVS, and SSM/I information
NASA Technical Reports Server (NTRS)
Hayes, Patrick M.; Mcguirk, James P.
1993-01-01
One of the most striking features of the planet's long-term average cloudiness is the zonal band of concentrated convection lying near the equator. Large-scale variability of the Intertropical Convergence Zone (ITCZ) has been well documented in studies of the planetary spatial scales and seasonal/annual/interannual temporal cycles of convection. Smaller-scale variability is difficult to study over the tropical oceans for several reasons: conventional surface and upper-air data are virtually non-existent in some regions; diurnal and annual signals overwhelm fluctuations on other time scales; and analyses of variables such as geopotential and moisture are generally less reliable in the tropics. These problems make the use of satellite data an attractive alternative and the preferred means to study variability of tropical weather systems.
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos-connection-based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem, while the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix, and with matrices generated with the finite element method.
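A minimal sketch of Hager's 1-norm estimator, on which the new method is based, is given below. It touches the operator only through products with A and its transpose, which is why it suits preconditioned matrices that are never formed explicitly. This is a generic textbook version of the algorithm, not the paper's parallel implementation.

```python
import random

def matvec(A, x):
    """Product A @ x for A stored as a list of rows."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def hager_norm1(A, maxiter=10):
    """Hager's (1984) estimate of the 1-norm of a square matrix A.
    Uses only products with A and A^T; the returned value is always
    a lower bound on the true 1-norm."""
    n = len(A)
    At = transpose(A)
    x = [1.0 / n] * n                # start from the uniform vector
    est = 0.0
    for _ in range(maxiter):
        y = matvec(A, x)
        est = sum(abs(v) for v in y)
        xi = [1.0 if v >= 0 else -1.0 for v in y]
        z = matvec(At, xi)           # gradient of ||A x||_1 at x
        j = max(range(n), key=lambda k: abs(z[k]))
        if abs(z[j]) <= sum(zv * xv for zv, xv in zip(z, x)):
            return est               # local maximum reached
        x = [0.0] * n
        x[j] = 1.0                   # move to the most promising column
    return est

random.seed(0)
A = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
est = hager_norm1(A)
exact = max(sum(abs(row[j]) for row in A) for j in range(8))
```

In the preconditioned setting, `matvec` would apply the preconditioner after the operator (and the transposed pair in reverse), so the dense preconditioned matrix never needs to be assembled.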
NASA Astrophysics Data System (ADS)
Vanclooster, Marnik
2010-05-01
The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.
How much does a tokamak reactor cost?
NASA Astrophysics Data System (ADS)
Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.
2017-10-01
The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both the overnight cost and the levelized cost of electricity, the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady-state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high-temperature rare-earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, and modeling is further hampered by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses that are appropriate for climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as the initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
Alignments of Dark Matter Halos with Large-scale Tidal Fields: Mass and Redshift Dependence
NASA Astrophysics Data System (ADS)
Chen, Sijie; Wang, Huiyuan; Mo, H. J.; Shi, Jingjing
2016-07-01
Large-scale tidal fields estimated directly from the distribution of dark matter halos are used to investigate how halo shapes and spin vectors are aligned with the cosmic web. The major, intermediate, and minor axes of halos are aligned with the corresponding tidal axes, and halo spin axes tend to be parallel with the intermediate axes and perpendicular to the major axes of the tidal field. The strengths of these alignments generally increase with halo mass and redshift, but the dependence is only on the peak height, ν ≡ δ_c/σ(M_h, z). The scaling relations of the alignment strengths with the value of ν indicate that the alignment strengths remain roughly constant when the structures within which the halos reside are still in a quasi-linear regime, but decrease as nonlinear evolution becomes more important. We also calculate the alignments in projection so that our results can be compared directly with observations. Finally, we investigate the alignments of tidal tensors on large scales, and use the results to understand alignments of halo pairs separated at various distances. Our results suggest that the coherent structure of the tidal field is the underlying reason for the alignments of halos and galaxies seen in numerical simulations and in observations.
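The role of the peak height ν ≡ δ_c/σ(M_h, z) can be illustrated with a toy calculation: with the usual δ_c ≈ 1.686 and σ(M, z) = σ(M, 0)·D(z), a more massive halo (smaller σ) or an earlier epoch (smaller D) both raise ν. The growth factor below is a crude Einstein-de Sitter stand-in, D(z) = 1/(1+z), used only to show the trend, not the paper's cosmology.

```python
DELTA_C = 1.686   # linear collapse threshold (spherical collapse)

def peak_height(sigma_m0, z):
    """nu = delta_c / sigma(M, z), with sigma(M, z) = sigma(M, 0) * D(z).
    D(z) = 1/(1+z) is an Einstein-de Sitter approximation."""
    growth = 1.0 / (1.0 + z)
    return DELTA_C / (sigma_m0 * growth)

# A small halo today (large sigma) versus a massive halo at z = 1
# (small sigma, earlier epoch): the latter has a much larger nu, and
# hence, per the abstract, a stronger alignment with the tidal field.
nu_low  = peak_height(sigma_m0=2.0, z=0.0)
nu_high = peak_height(sigma_m0=0.8, z=1.0)
```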
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g
Advanced Fabrication Processes for Superconducting Very Large Scale Integrated Circuits
2015-10-13
transistors. There are several reasons for this gigantic disparity: insufficient funding and lack of profit-driven investments in superconductor...Inductance of circuit structures for MIT LL superconductor electronics fabrication process with 8 niobium layers,” IEEE Trans. Appl. Supercond., vol...vol. 25, No. 3, 1301704, June 2015. [7] V. Ambegaokar and A. Baratoff, “Tunneling between superconductors,” Phys. Rev. Lett., vol. 10, no. 11, pp
Uncorrelated Encounter Model of the National Airspace System, Version 2.0
2013-08-19
can exist to certify avoidance systems for operational use. Evaluations typically include flight tests, operational impact studies, and simulation of...appropriate for large-scale air traffic impact studies—for example, examination of sector loading or conflict rates. The focus here includes two types of...between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters of sufficient fidelity in the available data
LAVA: Large scale Automated Vulnerability Addition
2016-05-23
memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be...ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online...memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out
Bekiari, Alexandra; Kokaridas, Dimitrios; Sakellariou, Kimon
2006-04-01
This study examined associations among physical education teachers' verbal aggressiveness as perceived by students and students' intrinsic motivation and reasons for discipline. The sample consisted of 265 Greek adolescent students who completed four questionnaires during physical education classes: the Verbal Aggressiveness Scale, the Lesson Satisfaction Scale, the Reasons for Discipline Scale, and the Intrinsic Motivation Inventory. Analysis indicated significant positive correlations of students' perceptions of teachers' verbal aggressiveness with pressure/tension, external reasons, introjected reasons, no reasons, and self-responsibility. Significant negative correlations were noted for students' perceptions of teachers' verbal aggressiveness with lesson satisfaction, enjoyment/interest, competence, effort/importance, intrinsic reasons, and caring. Differences between the two sexes were observed in their perceptions of teachers' verbal aggressiveness, intrinsic motivation, and reasons for discipline. Findings and implications for teachers' style of communication are discussed, and suggestions for further research are made.
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. 
Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
Kong, Xin; Liu, Jianguo; Ren, Lianhai; Song, Minying; Wang, Xiaowei; Ni, Zhe; Nie, Xiaoqin
2015-10-01
Odorous gas emission characteristics along the successive processes of a typical full-scale food waste (FW) anaerobic digestion plant in China were investigated in September and January. Seasonal variations in pollutant concentration and principal component analysis (PCA) showed markedly different characteristics between the two months. However, the main reason for the seasonal difference at the sorting process differed from the reason for the seasonal difference at the other treatment units. Most odorous volatile organic compound (VOC) concentrations tested near the anaerobic digestion tank were similar and low in both months. Odor indices, including the odor contribution (OC) and odor activity value (OAV) of various odorants, were further calculated to evaluate the malodor degree and the contribution of each odorant to the nuisance smell. Owing to people's different dietary habits, H2S and sulfur compounds were found to be the dominant contributors to the large total OAV in the January test. By contrast, oxygenated organic compounds played an important role in the total OAV in September.
NASA Astrophysics Data System (ADS)
Donahue, Megan; Kaplan, J.; Ebert-May, D.; Ording, G.; Melfi, V.; Gilliland, D.; Sikorski, A.; Johnson, N.
2009-01-01
The typical large liberal-arts, tier-one research university requires all of its graduates to achieve some minimal standard of quantitative literacy and scientific reasoning skills. But how do we know that what we are doing, as instructors and as a university, is working the way we think it should? At Michigan State University, a cross-disciplinary team of scientists, statisticians, and teacher-education experts has begun a large-scale investigation of student mastery of quantitative and scientific skills, beginning with an assessment of 3,000 freshmen before they start their university careers. We will describe the process we used for developing and testing an instrument and for expanding faculty involvement and input on high-level goals. In this presentation, we will limit the discussion mainly to the scientific reasoning perspective, but we will briefly mention some intriguing observations regarding quantitative literacy as well. This project represents the beginning of long-term, longitudinal tracking of the progress of students at our institution. We will discuss preliminary results of our 2008 assessment of incoming freshmen at Michigan State, and where we plan to go from here. We acknowledge local support from the Quality Fund from the Office of the Provost at MSU. We also acknowledge the Center for Assessment at James Madison University and the NSF for their support at the very beginning of our work.
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Martinez-Consentino, V. L.; Amaro, J. E.; Ruiz Arriola, E.
2018-06-01
We use a recent scaling analysis of the quasielastic electron scattering data from
Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases
Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert
2010-01-01
Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. It still takes highly trained scientists browsing a prohibitively large RNAi-HCS image database to produce even a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of the HCS technology in biomedical research. Our new interfaces empower biologists with the computational power not only to explore large-scale RNAi-HCS image databases effectively and efficiently, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
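Relevance feedback in CBIR systems is commonly implemented with a Rocchio-style query update; the abstract does not specify this paper's exact formulation, so the following is a generic sketch in which the function name and the alpha/beta/gamma weights are illustrative assumptions:

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward images the biologist marked
    relevant and away from those marked non-relevant (classic Rocchio update)."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(np.asarray(nonrelevant, dtype=float), axis=0)
    return q
```

Each feedback round re-ranks the database by similarity to the updated query vector, so the interactive loop converges toward the phenotype the user has in mind.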
Neural encoding of large-scale three-dimensional space-properties and constraints.
Jeffery, Kate J; Wilson, Jonathan J; Casali, Giulio; Hayman, Robin M
2015-01-01
How the brain represents large-scale, navigable space has been the topic of intensive investigation for several decades, resulting in the discovery that neurons in a complex network of cortical and subcortical brain regions co-operatively encode distance, direction, place, movement, and so on, using a variety of different sensory inputs. However, such studies have mainly been conducted in simple laboratory settings in which animals explore small, two-dimensional (i.e., flat) arenas. The real world, by contrast, is complex and three-dimensional, with hills, valleys, tunnels, branches, and, for species that can swim or fly, large volumetric spaces. Adding a dimension to space adds coding challenges, a primary reason being that several basic geometric properties are different in three dimensions. This article explores the consequences of these challenges for the establishment of a functional three-dimensional metric map of space, one of which is that the brains of some species might have evolved to reduce the dimensionality of the representational space and thus sidestep some of these problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale-dependent bias on large scales. We derive analytic expressions for the large-scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale-dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one-point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large-scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher-order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large-scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of −0.1, compared to the observed value of −0.13 ± 0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations.
Matching to the observed values, we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and a lower magnitude than the corresponding value for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.
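As a sketch of the simplest instance of the claim that these biases depend only on one-point expectations, consider a Gaussian density field δ and a nonlinear transformation F; Stein's lemma then gives the large-scale bias of the normalized transformed field in one line (this is a textbook-level illustration under a Gaussian assumption, not the paper's full derivation):

```latex
% Cross-correlation of f = F(\delta) with a long-wavelength mode \delta_L,
% for Gaussian \delta, via Stein's lemma:
\langle F(\delta)\,\delta_L \rangle \simeq \langle F'(\delta)\rangle \,\langle \delta\,\delta_L \rangle
\quad\Longrightarrow\quad
b = \frac{\langle F'(\delta)\rangle}{\langle F(\delta)\rangle},
```

where b is the bias of the normalized field δ_f = F(δ)/⟨F⟩ − 1, and both expectations are integrals over the one-point PDF.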
Wendelken, Carter; Ferrer, Emilio; Ghetti, Simona; Bailey, Stephen K; Cutting, Laurie; Bunge, Silvia A
2017-08-30
Prior research points to a positive concurrent relationship between reasoning ability and both frontoparietal structural connectivity (SC) as measured by diffusion tensor imaging (Tamnes et al., 2010) and frontoparietal functional connectivity (FC) as measured by fMRI (Cocchi et al., 2014). Further, recent research demonstrates a link between reasoning ability and FC of two brain regions in particular: rostrolateral prefrontal cortex (RLPFC) and the inferior parietal lobe (IPL) (Wendelken et al., 2016). Here, we sought to investigate the concurrent and dynamic, lead-lag relationships among frontoparietal SC, FC, and reasoning ability in humans. To this end, we combined three longitudinal developmental datasets with behavioral and neuroimaging data from 523 male and female participants between 6 and 22 years of age. Cross-sectionally, reasoning ability was most strongly related to FC between RLPFC and IPL in adolescents and adults, but to frontoparietal SC in children. Longitudinal analysis revealed that RLPFC-IPL SC, but not FC, was a positive predictor of future changes in reasoning ability. Moreover, we found that RLPFC-IPL SC at one time point positively predicted future changes in RLPFC-IPL FC, whereas, in contrast, FC did not predict future changes in SC. Our results demonstrate the importance of strong white matter connectivity between RLPFC and IPL during middle childhood for the subsequent development of both robust FC and good reasoning ability. SIGNIFICANCE STATEMENT The human capacity for reasoning develops substantially during childhood and has a profound impact on achievement in school and in cognitively challenging careers. Reasoning ability depends on communication between lateral prefrontal and parietal cortices. 
Therefore, to understand how this capacity develops, we examined the dynamic relationships over time among white matter tracts connecting frontoparietal cortices (i.e., structural connectivity, SC), coordinated frontoparietal activation (functional connectivity, FC), and reasoning ability in a large longitudinal sample of subjects 6-22 years of age. We found that greater frontoparietal SC in childhood predicts future increases in both FC and reasoning ability, demonstrating the importance of white matter development during childhood for subsequent brain and cognitive functioning.
Imaging mouse cerebellum with serial optical coherence scanner (Conference Presentation)
NASA Astrophysics Data System (ADS)
Liu, Chao J.; Williams, Kristen; Orr, Harry; Taner, Akkin
2017-02-01
We present the serial optical coherence scanner (SOCS), which consists of a polarization-sensitive optical coherence tomography system and a vibratome with associated controls for serial imaging, to visualize the cerebellum and adjacent brainstem of the mouse. The cerebellar cortical layers and white matter are distinguished by using intrinsic optical contrasts. Images from serial scans reveal the large-scale anatomy in detail and map the nerve fiber pathways in the cerebellum and adjacent brainstem. The optical system, which has 5.5 μm axial resolution, utilizes a scan lens or a water-immersion microscope objective, resulting in 10 μm or 4 μm lateral resolution, respectively. Large-scale brain imaging at high resolution requires an efficient way to collect large datasets, so it is important to improve the SOCS system to handle large-scale samples, and large numbers of them, in a reasonable time. The imaging and slicing procedure for a section took about 4 minutes because the vibratome blade moves slowly to maintain slicing quality. SOCS has potential to investigate pathological changes and monitor the effects of therapeutic drugs in cerebellar diseases such as spinocerebellar ataxia 1 (SCA1). SCA1 is a neurodegenerative disease characterized by atrophy and eventual loss of Purkinje cells from the cerebellar cortex, and the optical contrasts provided by SOCS are being evaluated as biomarkers of the disease.
Validity and Reliability Studies on the Scale of the Reasons for Academic Procrastination
ERIC Educational Resources Information Center
Yesil, Rustu
2012-01-01
The objective of this study is to develop a scale to determine the reasons why students delay academic tasks and the extent to which students are affected by these reasons. The study group was composed of a total of 447 students from the faculty of education. The KMO value of this scale, composed of 43 items collected under six factors, was…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Lin, Paul Tinphone
2009-01-01
This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture with 16 cores per node. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad-socket/quad-core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability-type simulations, scales better on the Red Storm machine than on the TLCC machine.
More reasons to be straightforward: findings and norms for two scales relevant to social anxiety.
Rodebaugh, Thomas L; Heimberg, Richard G; Brown, Patrick J; Fernandez, Katya C; Blanco, Carlos; Schneier, Franklin R; Liebowitz, Michael R
2011-06-01
The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. Copyright © 2011 Elsevier Ltd. All rights reserved.
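For readers unfamiliar with the mechanics, reverse-scored items are recoded before summing, which is why a few problematic items can contaminate a total score. A minimal sketch of the recoding (the 1-5 response range and function names are illustrative assumptions, not taken from these scales):

```python
def reverse_score(response, low=1, high=5):
    """Recode a reverse-worded item so that higher always means more anxiety."""
    return high + low - response

def total_score(responses, reversed_items, low=1, high=5):
    """Sum a scale, recoding the (0-based) item indices in reversed_items."""
    return sum(
        reverse_score(r, low, high) if i in reversed_items else r
        for i, r in enumerate(responses)
    )
```

Dropping the reverse-scored indices from `reversed_items` and from `responses` gives the straightforward-items-only total that the authors recommend.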
Gravity waves and the LHC: towards high-scale inflation with low-energy SUSY
NASA Astrophysics Data System (ADS)
He, Temple; Kachru, Shamit; Westphal, Alexander
2010-06-01
It has been argued that rather generic features of string-inspired inflationary theories with low-energy supersymmetry (SUSY) make it difficult to achieve inflation with a Hubble scale H > m_3/2, where m_3/2 is the gravitino mass in the SUSY-breaking vacuum state. We present a class of string-inspired supergravity realizations of chaotic inflation where a simple, dynamical mechanism yields hierarchically small scales of post-inflationary supersymmetry breaking. Within these toy models we can easily achieve small ratios between m_3/2 and the Hubble scale of inflation. This is possible because the expectation value of the superpotential ⟨W⟩ relaxes from large to small values during the course of inflation. However, our toy models do not provide a reasonable fit to cosmological data if one sets the SUSY-breaking scale to m_3/2 ≤ TeV. Our work is a small step towards relieving the apparent tension between high-scale inflation and low-scale supersymmetry breaking in string compactifications.
Alternative Splicing of CHEK2 and Codeletion with NF2 Promote Chromosomal Instability in Meningioma
Yang, Hong Wei; Kim, Tae-Min; Song, Sydney S; Shrinath, Nihal; Park, Richard; Kalamarides, Michel; Park, Peter J; Black, Peter M; Carroll, Rona S; Johnson, Mark D
2012-01-01
Mutations of the NF2 gene on chromosome 22q are thought to initiate tumorigenesis in nearly 50% of meningiomas, and 22q deletion is the earliest and most frequent large-scale chromosomal abnormality observed in these tumors. In aggressive meningiomas, 22q deletions are generally accompanied by the presence of large-scale segmental abnormalities involving other chromosomes, but the reasons for this association are unknown. We find that large-scale chromosomal alterations accumulate during meningioma progression primarily in tumors harboring 22q deletions, suggesting 22q-associated chromosomal instability. Here we show frequent codeletion of the DNA repair and tumor suppressor gene, CHEK2, in combination with NF2 on chromosome 22q in a majority of aggressive meningiomas. In addition, tumor-specific splicing of CHEK2 in meningioma leads to decreased functional Chk2 protein expression. We show that enforced Chk2 knockdown in meningioma cells decreases DNA repair. Furthermore, Chk2 depletion increases centrosome amplification, thereby promoting chromosomal instability. Taken together, these data indicate that alternative splicing and frequent codeletion of CHEK2 and NF2 contribute to the genomic instability and associated development of aggressive biologic behavior in meningiomas. PMID:22355270
Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Abdol-Hamid, Khaled S.
2005-01-01
Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulation). The objective of PANS, like hybrid models, is to resolve large-scale structures at reasonable computational expense. The modeled-to-resolved scale ratio, or the level of physical resolution in PANS, is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f_k) and dissipation (f_ε). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy, followed by a description of the implementation procedure, and finally perform preliminary evaluation in benchmark problems.
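The two resolution-control parameters named above are ratios of unresolved to total turbulence quantities; in Girimaji's PANS formulation they are commonly written as:

```latex
f_k = \frac{k_u}{k}, \qquad
f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \qquad
\nu_u = C_\mu \, \frac{k_u^2}{\varepsilon_u},
```

where k_u and ε_u are the unresolved kinetic energy and dissipation, and ν_u is the resulting eddy viscosity of the unresolved scales. Setting f_k = f_ε = 1 recovers RANS, while f_k → 0 approaches direct numerical simulation.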
A high-resolution European dataset for hydrologic modeling
NASA Astrophysics Data System (ADS)
Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta
2013-04-01
There is an increasing demand for large-scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large-scale models need to be calibrated and verified against large amounts of observations in order to judge their capability to predict the future. However, the creation of large-scale datasets is challenging, for it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large-scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated with a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years.
The dataset variables are used as inputs to the hydrological calibration and validation of EFAS as well as for establishing long-term discharge "proxy" climatologies which can in turn be used for statistical analysis to derive return periods or other time-series derivatives. In addition, this dataset will be used to assess climatological trends in Europe. Unfortunately, to date no baseline dataset at the European scale exists against which to test the quality of the data presented here, so a comparison against other existing datasets can only be an indication of data quality. Owing to availability, a comparison was made for precipitation and temperature only, arguably the most important meteorological drivers for hydrologic models. A variety of analyses was undertaken at country scale against data reported to EUROSTAT and against the E-OBS datasets. The comparison revealed that while the datasets showed overall similar temporal and spatial patterns, there were some differences in magnitude, especially for precipitation. It is not straightforward to define the specific cause for these differences; however, in most cases the comparatively low observation-station density appears to be the principal reason for the differences in magnitude.
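The Penman-Monteith calculation mentioned above is not spelled out in the abstract; for reference, the widely used FAO-56 form for daily reference evapotranspiration is (whether EFAS-Meteo uses exactly this variant is an assumption):

```latex
ET_0 = \frac{0.408\,\Delta\,(R_n - G)
       + \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}
      {\Delta + \gamma\,(1 + 0.34\,u_2)},
```

with Δ the slope of the saturation vapour pressure curve, R_n net radiation, G soil heat flux, γ the psychrometric constant, T mean air temperature, u_2 wind speed at 2 m height, and e_s − e_a the vapour pressure deficit.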
Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models
NASA Astrophysics Data System (ADS)
Jošt, D.; Škerlavaj, A.; Lipej, A.
2012-11-01
Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the predicted efficiency was significantly improved. The improvement was seen at all operating points, but was largest at maximal discharge; the reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained with SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the differences in flow energy losses obtained with the different turbulence models.
Condition Number Estimation of Preconditioned Matrices
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos-connection-based method provides the condition numbers of the coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies were carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tri-diagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated with the finite element method. PMID:25816331
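Hager's method, on which the new estimator builds, needs only matrix-vector products with A and A^T, which is what makes it usable when the preconditioned matrix is never formed explicitly. A minimal single-vector sketch of the 1-norm estimator (function names are illustrative; the paper's robust, parallel variant adds safeguards this sketch omits, and the single-vector form can underestimate on unlucky starts):

```python
import numpy as np

def hager_one_norm(apply_A, apply_At, n, max_iter=10):
    """Estimate ||A||_1 of an n-by-n operator using only the products
    x -> A x and x -> A^T x (Hager's method)."""
    x = np.full(n, 1.0 / n)          # uniform start, ||x||_1 = 1
    est = 0.0
    for _ in range(max_iter):
        y = apply_A(x)
        est = np.linalg.norm(y, 1)   # current lower bound on ||A||_1
        xi = np.sign(y)
        xi[xi == 0] = 1.0            # pick a subgradient of |.| at 0
        z = apply_At(xi)             # gradient of x -> ||A x||_1
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:    # no ascent direction left: stop
            break
        x = np.zeros(n)
        x[j] = 1.0                   # jump to the maximizing unit vector e_j
    return est

# kappa_1(A) is then estimated as the product of the estimates for ||A||_1
# and ||A^{-1}||_1, where products with A^{-1} are realized by solves.
```

Because only `apply_A` and `apply_At` are required, the same routine applies to an implicitly defined preconditioned operator M^{-1}A without ever densifying it.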
Towards retrieving critical relative humidity from ground-based remote sensing observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Weverberg, Kwinten; Boutle, Ian; Morcrette, Cyril J.
2016-08-22
Nearly all parameterisations of large-scale cloud require the specification of the critical relative humidity (RHcrit). This is the gridbox-mean relative humidity at which the subgrid fluctuations in temperature and water vapour become so large that part of a subsaturated gridbox becomes saturated and cloud starts to form. Until recently, the lack of high-resolution observations of temperature and moisture variability has hindered a reasonable estimate of RHcrit from observations. However, with the advent of ground-based measurements from Raman lidar, it becomes possible to obtain long records of temperature and moisture (co-)variances with sub-minute sample rates. Lidar observations are inherently noisy, and any analysis of higher-order moments will be very dependent on the ability to quantify and remove this noise. We present an exploratory study aimed at understanding whether current noise levels of lidar-retrieved temperature and water vapour are sufficient to obtain a reasonable estimate of RHcrit. We show that vertical profiles of RHcrit can be derived for a gridbox length of up to about 30 km (120 km) with an uncertainty of about 4 % (2 %). RHcrit tends to be smallest near the scale height and seems to be fairly insensitive to the horizontal grid spacing at the scales investigated here (30 - 120 km). However, larger sensitivity was found to the vertical grid spacing: as the grid spacing decreases from 400 to 100 m, RHcrit is observed to increase by about 6 %, which is more than the uncertainty in the RHcrit retrievals.
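The definition of RHcrit above can be turned into a toy calculation: given samples of subgrid relative humidity across a gridbox, the gridbox-mean RH at which the moistest subgrid point just saturates is one minus the largest positive deviation from the mean. A sketch under that simplified reading (real retrievals work from temperature and water-vapour (co-)variances, which this ignores):

```python
def critical_rh(subgrid_rh):
    """Gridbox-mean RH (0..1) at which the moistest subgrid sample reaches
    saturation (RH = 1), i.e. RHcrit = 1 - max positive deviation."""
    mean_rh = sum(subgrid_rh) / len(subgrid_rh)
    max_dev = max(rh - mean_rh for rh in subgrid_rh)
    return 1.0 - max_dev
```

A perfectly homogeneous gridbox gives RHcrit = 1 (no cloud until the whole box saturates), and larger subgrid variability lowers RHcrit, which is the qualitative behaviour the parameterisations encode.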
Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries
NASA Astrophysics Data System (ADS)
Deiterding, Ralf; Wood, Stephen L.
2016-09-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
Promoting R & D in photobiological hydrogen production utilizing mariculture-raised cyanobacteria.
Sakurai, Hidehiro; Masukawa, Hajime
2007-01-01
This review article explores the potential of using mariculture-raised cyanobacteria as solar energy converters for hydrogen (H2) production. The exploitation of the sea surface for large-scale renewable energy production, and the reasons for selecting the economical, nitrogenase-based systems of cyanobacteria for H2 production, are described in terms of societal benefits. Reports of cyanobacterial photobiological H2 production are summarized with respect to specific activity, efficiency of solar energy conversion, and maximum attainable H2 concentration. The need for further improvements in biological parameters such as low-light saturation properties, sustainability of H2 production, and so forth, and the means to overcome these difficulties through the identification of promising wild-type strains followed by optimization of the selected strains using genetic engineering, are also discussed. Finally, a possible mechanism for the development of economical large-scale mariculture operations in conjunction with international cooperation and social acceptance is outlined.
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
Source localization in electromyography using the inverse potential problem
NASA Astrophysics Data System (ADS)
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
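The regularized inversion described above can be illustrated with a minimal Tikhonov-style sketch. Everything below is an illustrative stand-in under our own assumptions, not the paper's finite element model or its regularization functionals: a toy forward matrix maps two source amplitudes to three surface sensors, and a quadratic penalty selects a physically reasonable solution.

```python
# Minimal Tikhonov-regularised least squares for a toy two-source problem.
# A (forward model) and b (surface potentials) are illustrative numbers only.

def tikhonov_2x2(A, b, lam):
    """Solve min ||A u - b||^2 + lam ||u||^2 for a 2-unknown source vector u."""
    # Normal equations: (A^T A + lam I) u = A^T b
    m = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    rhs = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    u0 = (rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det
    u1 = (rhs[1] * m[0][0] - rhs[0] * m[1][0]) / det
    return u0, u1

A = [[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]]   # hypothetical volume-conduction model
b = [1.2, 1.3, 1.1]                         # measured surface potentials (= A @ [1, 1])
print(tikhonov_2x2(A, b, lam=0.1))          # both amplitudes near 1, shrunk by the penalty
```

Larger values of lam bias the solution toward zero but suppress the noise amplification typical of inverse potential problems; the regularization functionals compared in the paper add richer a priori information than this plain quadratic penalty.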
Ariew, André
2007-03-01
Charles Darwin, James Clerk Maxwell, and Francis Galton were all aware, by various means, of Adolphe Quetelet's pioneering work in statistics. Darwin, Maxwell, and Galton all had reason to be interested in Quetelet's work: each was working on some instance of how large-scale regularities emerge from individual events that vary from one another; all were rejecting the divine interventionist theories of their contemporaries; and Quetelet's techniques provided them with a way forward. Maxwell and Galton both explicitly endorsed Quetelet's techniques in their work; Darwin did not incorporate any of Quetelet's statistical ideas, although natural selection after the twentieth-century synthesis has. Why not Darwin? My answer is that by the time Darwin encountered Malthus's law of excess reproduction he had all he needed to explain large-scale regularities in extinction, speciation, and adaptation. He didn't need Quetelet.
77 FR 9700 - Utility Scale Wind Towers From China and Vietnam
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
...)] Utility Scale Wind Towers From China and Vietnam Determinations On the basis of the record \\1\\ developed... threatened with material injury by reason of imports from China of utility scale wind towers, provided for in... with material injury by reason of imports from Vietnam of utility scale wind towers, provided for in...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leonard, Philip; Francois, Elizabeth Green
During this project we investigated a number of energetic materials, both old and new, and determined that most of them were unsuitable for safety or sensitivity reasons. Unsuccessful coformulants include TNAZ and BNFF, for volatility reasons, and DAAF, due to thermal compatibility issues. The powerful explosive HMX became a focus of the work in later stages, as it conferred excellent power while being commonly available in well-regulated particle-size lots and chemically compatible in the melt with many coformulants. Ultimately, three preferred formulations emerged from this work: a formulation tested on large scale by ARDEC involving PrNQ and HMX; a formulation tested at ARDEC and LANL using a nitrate salt eutectic and HMX; and a formulation tested at LANL using LLM-201 and HMX.
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, the middle and high latitudes of Mars' atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that use Mars' full topography, compared to simulations that use synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars' transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas).
The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Daws, Richard E.; Hampshire, Adam
2017-01-01
It is well established that religiosity correlates inversely with intelligence. A prominent hypothesis states that this correlation reflects behavioral biases toward intuitive problem solving, which causes errors when intuition conflicts with reasoning. We tested predictions of this hypothesis by analyzing data from two large-scale Internet-cohort studies (combined N = 63,235). We report that atheists surpass religious individuals in terms of reasoning but not working-memory performance. The religiosity effect is robust across sociodemographic factors including age, education and country of origin. It varies significantly across religions and this co-occurs with substantial cross-group differences in religious dogmatism. Critically, the religiosity effect is strongest for tasks that explicitly manipulate conflict; more specifically, atheists outperform the most dogmatic religious group by a substantial margin (0.6 standard deviations) during a color-word conflict task but not during a challenging matrix-reasoning task. These results support the hypothesis that behavioral biases rather than impaired general intelligence underlie the religiosity effect. PMID:29312057
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical methods used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution, some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe, and climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target for assessing the skill of global climate models and of proxy-based reconstructions, even at small spatial scales.
However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Combined climate and carbon-cycle effects of large-scale deforestation
Bala, G.; Caldeira, K.; Wickett, M.; Phillips, T. J.; Lobell, D. B.; Delire, C.; Mirin, A.
2007-01-01
The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate. PMID:17420463
Combined climate and carbon-cycle effects of large-scale deforestation.
Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A
2007-04-17
The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.
Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bala, G; Caldeira, K; Wickett, M
2006-10-17
The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.
Software environment for implementing engineering applications on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K. A.; Schiff, S.
1990-01-01
In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.
Humidity Distributions in Multilayered Walls of High-rise Buildings
NASA Astrophysics Data System (ADS)
Gamayunova, Olga; Musorina, Tatiana; Ishkov, Alexander
2018-03-01
The limited availability of free land in large cities is the main reason for the active development of high-rise construction. Given the large-scale high-rise building projects of recent years in Russia and abroad, and their huge energy consumption, one of the fundamental principles in design and reconstruction is the use of energy-efficient technologies. The main heat loss in buildings occurs through enclosing structures. However, a heat-resistant wall will not always be energy-efficient and dry at the same time; waterlogging may occur. Temperature and humidity distributions in multilayer walls were studied in this paper, and their interrelation with other thermophysical characteristics was analyzed.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
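The scatter search metaheuristic at the core of saCeSS can be sketched in a few lines. This is a drastically simplified, serial toy version written under our own assumptions; it omits the paper's asynchronous cooperation, parallelism and self-tuning, and the objective here is a synthetic stand-in for a real model's fitting cost.

```python
import random

# Toy scatter search: keep a small reference set of candidate parameter
# vectors, combine pairs along the line joining them, keep the best.
def scatter_search(f, bounds, ref_size=5, iters=50, seed=1):
    rng = random.Random(seed)
    # Diverse initial reference set, sorted best-first
    ref = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(ref_size)]
    ref.sort(key=f)
    for _ in range(iters):
        children = []
        for i in range(ref_size):
            for j in range(i + 1, ref_size):
                # Interpolate/extrapolate between two reference solutions
                w = rng.uniform(-0.5, 1.5)
                children.append([a + w * (b - a) for a, b in zip(ref[i], ref[j])])
        # Clip to bounds, then elitist selection over parents + children
        children = [[min(max(x, lo), hi) for x, (lo, hi) in zip(c, bounds)]
                    for c in children]
        ref = sorted(ref + children, key=f)[:ref_size]
    return ref[0]

# Recover two parameters of a toy least-squares objective
best = scatter_search(lambda p: (p[0] - 1.3) ** 2 + (p[1] + 0.7) ** 2,
                      bounds=[(-5, 5), (-5, 5)])
print(best)  # converges near [1.3, -0.7]
```

Real kinetic-model estimation replaces the toy objective with the discrepancy between simulated and measured trajectories, which is why the paper's parallel, cooperative machinery matters: each objective evaluation requires integrating a large ODE system.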
Fan-out Estimation in Spin-based Quantum Computer Scale-up.
Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R
2017-10-17
Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb-confined (e.g. donor-based spin qubits) or electrostatically confined (e.g. quantum-dot-based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher-dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.
Large eddy simulation of turbine wakes using higher-order methods
NASA Astrophysics Data System (ADS)
Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer
2017-11-01
Large eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh, combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation over a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy-viscosity sub-filter-scale models such as the classic or dynamic Smagorinsky models. The results from the simulations are compared to experimental data for a diameter-based Reynolds number of Re_D = 1,000,000, and both the time-averaged streamwise velocity and turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions are made for the amount of numerical dissipation required by our approach in the particular case of horizontal-axis turbine wakes.
Cognitive Abilities Explain Wording Effects in the Rosenberg Self-Esteem Scale.
Gnambs, Timo; Schroeders, Ulrich
2017-12-01
There is consensus that the 10 items of the Rosenberg Self-Esteem Scale (RSES) reflect wording effects resulting from positively and negatively keyed items. The present study examined the effects of cognitive abilities on the factor structure of the RSES with a novel, nonparametric latent variable technique called local structural equation models. In a nationally representative German large-scale assessment including 12,437 students, competing measurement models for the RSES were compared: a bifactor model with a common factor and a specific factor for all negatively worded items had the optimal fit. Local structural equation models showed that the unidimensionality of the scale increased with higher levels of reading competence and reasoning, while the proportion of variance attributed to the negatively keyed items declined. Wording effects on the factor structure of the RSES seem to represent a response-style artifact associated with cognitive abilities.
Elosua, Paula; Mujika, Josu
2015-10-13
The Reasoning Test Battery (BPR) is an instrument built on theories of the hierarchical organization of cognitive abilities, and it therefore consists of different tasks involving abstract, numerical, verbal, practical, spatial and mechanical reasoning. It was originally created in Belgium and later adapted to Portuguese. There are three forms of the battery, consisting of different items and scales, which cover an age range from 9 to 22. This paper focuses on the adaptation of the BPR to Spanish and analyzes different aspects of its internal structure: (a) exploratory item factor analysis was applied to assess the presence of a dominant factor for each partial scale; (b) the general underlying model was evaluated through confirmatory factor analysis; and (c) factorial invariance across gender was studied. The sample consisted of 2624 Spanish students. The results confirmed the presence of a general factor beyond the scales, with equivalent values for men and women, and gender differences in the factorial structure affecting the numerical, abstract and mechanical reasoning scales.
Newland, Jamee; Newman, Christy; Treloar, Carla
2016-08-01
In Australia, sterile needles and syringes are distributed to people who inject drugs (PWID) through formal services for the purposes of preventing blood borne viruses (BBV). Peer distribution involves people acquiring needles from formal services and redistributing them to others. This paper investigates the dynamics of the distribution of sterile injecting equipment among networks of people who inject drugs in four sites in New South Wales (NSW), Australia. Qualitative data exploring the practice of peer distribution were collected through in-depth, semi-structured interviews and participatory social network mapping. These interviews explored injecting equipment demand, access to services, relationship pathways through which peer distribution occurred, an estimate of the size of the different peer distribution roles and participants' understanding of the illegality of peer distribution in NSW. Data were collected from 32 participants, and 31 (98%) reported participating in peer distribution in the months prior to interview. Of those 31 participants, five reported large-scale formal distribution, with an estimated volume of 34,970 needles and syringes annually. Twenty-two participated in reciprocal exchange, where equipment was distributed and received on an informal basis that appeared dependent on context and circumstance and four participants reported recipient peer distribution as their only access to sterile injecting equipment. Most (n=27) were unaware that it was illegal to distribute injecting equipment to their peers. Peer distribution was almost ubiquitous amongst the PWID participating in the study, and although five participants reported taking part in the highly organised, large-scale distribution of injecting equipment for altruistic reasons, peer distribution was more commonly reported to take place in small networks of friends and/or partners for reasons of convenience. 
The law regarding the illegality of peer distribution needs to change so that NSPs can capitalise on peer distribution to increase the options available to PWID and to acknowledge PWID as essential harm reduction agents in the prevention of BBVs. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.
2002-11-01
The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min^-1 (120 gal min^-1) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
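The exponential removal model underlying those dose constants, C(D) = C0 * exp(-k * D), makes the dose constant k the slope of -ln(C/C0) against dose D. A minimal fit might look like the sketch below; the doses and concentrations are made-up illustrative numbers, not data from the paper.

```python
import math

# Fit the dose constant k of C(D) = C0 * exp(-k * D) from (dose, concentration)
# pairs, using a log-linear least-squares slope constrained through the origin.
def dose_constant(doses, concentrations):
    c0 = concentrations[0]          # concentration at zero dose
    xs = doses[1:]
    ys = [-math.log(c / c0) for c in concentrations[1:]]
    # Least-squares slope through the origin: k = sum(x*y) / sum(x^2)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

doses = [0.0, 1.0, 2.0, 4.0, 8.0]        # kGy
conc  = [100.0, 61.0, 37.0, 13.5, 1.8]   # illustrative contaminant levels
print(dose_constant(doses, conc))        # ~0.5 kGy^-1 for this synthetic data
```

The paper's observation that k depends on the radiation source and on the initial concentration means a single fitted k from bench-scale gamma data cannot be transferred directly to the electron beam, hence the correction models it presents.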
Soil organic carbon - a large scale paired catchment assessment
NASA Astrophysics Data System (ADS)
Kunkel, V.; Hancock, G. R.; Wells, T.
2016-12-01
Soil organic carbon (SOC) concentration can vary both spatially and temporally, driven by differences in soil properties, topography and climate. However, most studies have focused on point-scale data sets, with a paucity of studies examining larger catchments. Here we examine the spatial and temporal distribution of SOC for two large catchments: the Krui (575 km2) and Merriwa River (675 km2) catchments (New South Wales, Australia). Both have similar shape, soils, topography and orientation. We show that the SOC distribution is very similar for both catchments and that elevation (and the associated increase in soil moisture) is a major influence on SOC. We also show that there is little change in SOC from the initial assessment in 2006 to 2015, despite a major drought from 2003 to 2010 and extreme rainfall events in 2007 and 2010; SOC concentration therefore appears robust. However, we found significant relationships between erosion and deposition patterns (as quantified using 137Cs) and SOC for both catchments, again demonstrating a strong geomorphic relationship. Vegetation across the catchments was assessed using remote sensing (Landsat and MODIS). Vegetation patterns were temporally consistent, with above-ground biomass increasing with elevation. SOC could be predicted using both the low- and high-resolution remote sensing platforms. Results indicate that, although moderate resolution (250 m) allows reasonable prediction of the spatial distribution of SOC, the higher resolution (30 m) improved the strength of the SOC-NDVI relationship. The relationship between SOC and 137Cs, as a surrogate for the erosion and deposition of SOC, suggested that sediment transport and deposition influence the distribution of SOC within the catchment. The findings demonstrate that, at the large catchment scale and at the decadal time scale, SOC is relatively constant and can largely be predicted from topography.
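An SOC-NDVI prediction of the kind reported above amounts to a simple regression. The sketch below fits and applies such a relationship on synthetic stand-in numbers (hypothetical NDVI and SOC values, not the catchment data):

```python
# Ordinary least-squares line fit, then prediction of SOC from an NDVI value.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

ndvi = [0.30, 0.42, 0.55, 0.61, 0.70]   # hypothetical 30 m NDVI samples
soc  = [1.1, 1.6, 2.2, 2.4, 2.9]        # hypothetical SOC (% by mass)
slope, intercept = linear_fit(ndvi, soc)
print(slope * 0.50 + intercept)         # predicted SOC at NDVI = 0.50
```

The abstract's point about resolution maps onto this directly: coarser 250 m NDVI smooths out local variation, weakening the fitted relationship relative to the 30 m data.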
Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2017-10-01
Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone emerges in late winter and early spring in the western hemisphere, via orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Large-Scale Traveling Weather Systems in Mars Southern Extratropics
NASA Technical Reports Server (NTRS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2017-01-01
Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone emerges in late winter and early spring in the western hemisphere, via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Supernova explosions in magnetized, primordial dark matter haloes
NASA Astrophysics Data System (ADS)
Seifried, D.; Banerjee, R.; Schleicher, D.
2014-05-01
The first supernova explosions are potentially relevant sources for the production of the first large-scale magnetic fields. For this reason, we present a set of high-resolution simulations studying the effect of supernova explosions on magnetized, primordial haloes. We focus on the evolution of an initially small-scale magnetic field formed during the collapse of the halo. We vary the degree of magnetization, the halo mass, and the amount of explosion energy in order to account for expected variations as well as to infer systematic dependences of the results on initial conditions. Our simulations suggest that core collapse supernovae with an explosion energy of 10^51 erg and more violent pair instability supernovae with 10^53 erg are able to disrupt haloes with masses up to about 10^6 and 10^7 M⊙, respectively. The peak of the magnetic field spectra shows a continuous shift towards smaller k-values, i.e. larger length scales, over time, reaching values as low as k = 4. On small scales, the magnetic energy decreases at the cost of the energy on large scales, resulting in a well-ordered magnetic field with a strength up to ~10^-8 G depending on the initial conditions. The coherence length of the magnetic field inferred from the spectra reaches values up to 250 pc, in agreement with those obtained from autocorrelation functions. We find the coherence length to be as large as 50 per cent of the radius of the supernova bubble. Extrapolating this relation to later stages, we suggest that significantly strong magnetic fields with coherence lengths as large as 1.5 kpc could be created. We discuss possible implications of our results on processes like recollapse of the halo, first galaxy formation, and the magnetization of the intergalactic medium.
Wakefield, Claire E; Ratnayake, Paboda; Meiser, Bettina; Suthers, Graeme; Price, Melanie A; Duffy, Jessica; Tucker, Kathy
2011-06-01
Despite proven benefits, the uptake of genetic counseling and testing by at-risk family members of BRCA1 and BRCA2 mutation carriers remains low. This study aimed to examine at-risk individuals' reported reasons for and against familial cancer clinic (FCC) attendance and genetic testing. Thirty-nine telephone interviews were conducted with relatives of high-risk mutation carriers, 23% (n = 9) of whom had not previously attended an FCC. Interview responses were analyzed using the frameworks of Miles and Huberman. The reasons most commonly reported for FCC attendance were clarification of risk status and access to testing. While disinterest in testing was one reason for FCC nonattendance, several individuals were unaware of their risk (n = 3) or of their eligibility to attend an FCC (n = 2), despite being notified of their risk status through their participation in a large-scale research project. Individuals' reasons for undergoing testing were in line with those reported elsewhere; however, concerns about discrimination and insurance were not reported by nontestees. Current guidelines regarding notifying individuals discovered to be at increased risk in a research, rather than a clinical, setting take a largely nondirective approach. However, this study demonstrates that individuals who receive a single letter notifying them of their risk may not understand or value the information they receive.
Meta-analysis on Macropore Flow Velocity in Soils
NASA Astrophysics Data System (ADS)
Liu, D.; Gao, M.; Li, H. Y.; Chen, X.; Leung, L. R.
2017-12-01
Macropore flow is ubiquitous in soils and an important hydrologic process that is not well explained by traditional hydrologic theories. Macropore Flow Velocity (MFV) is an important parameter used to describe macropore flow and quantify its effects on runoff generation and solute transport. However, the dominant factors controlling MFV are still poorly understood, and the typical ranges of MFV measured in the field are not clearly defined. To address these issues, we conducted a meta-analysis based on a database created from 246 experiments on MFV collected from 76 journal articles. For a fair comparison, a conceptually unified definition of MFV is introduced to convert MFV measured with different approaches and at various scales, including soil core, field, trench, and hillslope scales. The potential controlling factors of MFV considered include scale, travel distance, hydrologic conditions, site factors, macropore morphologies, soil texture, and land use. The results show that MFV is about 2-3 orders of magnitude larger than the corresponding values of saturated hydraulic conductivity. MFV is much larger at the trench and hillslope scales than at the field profile and soil core scales, and shows a significant positive correlation with travel distance. Generally, higher irrigation intensity tends to trigger faster MFV, especially at the field profile scale, where MFV and irrigation intensity have a significant positive correlation. At the trench and hillslope scales, the presence of large macropores (diameter > 10 mm) is a key factor determining MFV. The geometric mean of MFV for sites with large macropores was found to be about 8 times larger than for those without large macropores. For sites with large macropores, MFV increases with the macropore diameter. However, no noticeable difference in MFV has been observed among different soil textures and land uses.
Among the existing equations used to describe MFV, the Poiseuille equation significantly overestimates the observed values, while Manning-type equations generate reasonable values. The insights from this study will shed light on future field campaigns and modeling of macropore flow.
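The abstract's finding that the Poiseuille equation overestimates MFV while Manning-type equations give reasonable values can be illustrated numerically. A minimal sketch follows; all parameter values (pore diameter, roughness, unit hydraulic gradient) are illustrative assumptions, not values taken from the study.

```python
import math

# Illustrative comparison of Poiseuille vs. Manning-type estimates of
# macropore flow velocity (MFV); parameter values are assumptions.
rho = 1000.0      # water density, kg/m^3
g = 9.81          # gravity, m/s^2
mu = 1.0e-3       # dynamic viscosity of water, Pa*s
n = 0.01          # Manning roughness coefficient (smooth-walled pore, assumed)
i = 1.0           # hydraulic gradient (dimensionless, gravity-driven)
d = 0.010         # macropore diameter, m (the "large macropore" class, >10 mm)
r = d / 2.0

# Poiseuille mean velocity for laminar tube flow: v = rho*g*i*r^2 / (8*mu)
v_poiseuille = rho * g * i * r**2 / (8.0 * mu)

# Manning equation: v = (1/n) * R^(2/3) * S^(1/2), hydraulic radius R = r/2
R = r / 2.0
v_manning = (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(i)

print(f"Poiseuille: {v_poiseuille:.2f} m/s, Manning: {v_manning:.2f} m/s")
```

Because Poiseuille assumes ideal laminar flow with no wall losses, it overshoots the Manning estimate by more than an order of magnitude here, consistent with the abstract's conclusion.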
Segal, Daniel L; Gottschling, Juliana; Marty, Meghan; Meyer, William J; Coolidge, Frederick L
2015-01-01
Suicide among older adults is a major public health problem in the USA. In our recent study, we examined relationships between the 10 standard DSM-5 personality disorders (PDs) and suicidal ideation, and found that the PD dimensions explained a majority (55%) of the variance in suicidal ideation. To extend this line of research, the purpose of the present follow-up study was to explore relationships between the four PDs that were included in prior versions of the DSM (depressive, passive-aggressive, sadistic, and self-defeating) with suicidal ideation and reasons for living. Community-dwelling older adults (N = 109; age range = 60-95 years; 61% women; 88% European-American) anonymously completed the Coolidge Axis II Inventory, the Reasons for Living Inventory (RFL), and the Geriatric Suicide Ideation Scale (GSIS). Correlational analyses revealed that simple relationships between PD scales and GSIS subscales were generally stronger than those with RFL subscales. Regarding GSIS subscales, all four PD scales had medium-to-large positive relationships, with the exception of sadistic PD traits, which were unrelated to the death ideation subscale. Multiple regression analyses showed that the amount of explained variance for the GSIS (48%) was higher than for the RFL (11%), a finding attributable to the high predictive power of depressive PD. These findings suggest that depressive PD features are strongly related to increased suicidal thinking and lowered resilience to suicide among older adults. Assessment of depressive PD features should be routinely included in the assessment of later-life suicide risk.
Kang, Bongmun; Yoon, Ho-Sung
2015-02-01
Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algae productivity and compositional analysis, especially of total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without pretreatment. However, these methods have difficulty measuring total lipid content in wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C by Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of wet Nostoc cells cultivated in a raceway. A linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the linear correlation between the two was 98%. Based on this relationship, the total lipid content converted from the heat evolved by wet Nostoc sp. KNUA003 could be used for monitoring lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
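The calibration described above, a linear relationship between heat evolved (HE) and total lipid content, amounts to an ordinary least-squares fit used in reverse for prediction. A minimal sketch follows; the (HE, lipid %) pairs are synthetic placeholders, not data from the study.

```python
# Least-squares calibration of total lipid content against heat evolved (HE).
# The (HE, lipid %) pairs below are synthetic illustrations, not study data.
pairs = [(12.0, 8.1), (15.5, 10.2), (18.0, 11.9), (21.2, 14.0), (24.8, 16.3)]
n = len(pairs)
mx = sum(h for h, _ in pairs) / n
my = sum(l for _, l in pairs) / n
sxy = sum((h - mx) * (l - my) for h, l in pairs)
sxx = sum((h - mx) ** 2 for h, _ in pairs)
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination (the study reports a ~98% linear correlation)
ss_res = sum((l - (slope * h + intercept)) ** 2 for h, l in pairs)
ss_tot = sum((l - my) ** 2 for _, l in pairs)
r2 = 1.0 - ss_res / ss_tot

def lipid_from_he(he):
    """Predict total lipid content (%) from heat evolved in 310-351 °C."""
    return slope * he + intercept
```

Once the line is fitted on reference samples, `lipid_from_he` gives the monitoring conversion the abstract describes for large-scale cultivation.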
The Seasonal Predictability of Extreme Wind Events in the Southwest United States
NASA Astrophysics Data System (ADS)
Seastrand, Simona Renee
Extreme wind events are a common phenomenon in the Southwest United States. Entities such as the United States Air Force (USAF) find the Southwest appealing for many reasons, primarily for its expansive, unpopulated, and electronically unpolluted space for large-scale training and testing. However, wind events can cause hazards for the USAF: surface wind gusts can impact the take-off and landing of all aircraft, can tip the airframes of large wing-surface aircraft during maneuvers close to the ground, and can even impact weapons systems. This dissertation is comprised of three sections intended to further our knowledge and understanding of wind events in the Southwest. The first section builds a climatology of wind events for seven locations in the Southwest during the twelve 3-month seasons of the year and further examines the wind events in relation to terrain and the large-scale flow of the atmosphere. The second section builds upon the first by taking the wind events and generating mid-level composites for each of the twelve 3-month seasons. In the third section, teleconnections identified as consistent with the large-scale circulation in the second section were used as predictor variables to build a Poisson regression model for each of the twelve 3-month seasons. The purpose of this research is to increase our understanding of the climatology of extreme wind events, increase our understanding of how the large-scale circulation influences them, and create a model to enhance their predictability in the Southwest. Knowledge from this work will help protect personnel and property associated with not only the USAF, but all those in the Southwest.
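The third section's approach, a Poisson regression of seasonal event counts on teleconnection indices, can be sketched with an iteratively reweighted least squares (IRLS) fit. The predictor and coefficients below are hypothetical stand-ins, not the dissertation's actual teleconnection indices or model.

```python
import numpy as np

def fit_poisson(X, y, iters=30):
    """Poisson regression (log link) fitted via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)               # expected event count
        z = X @ beta + (y - mu) / mu        # working response
        W = mu                              # Poisson weights (variance = mean)
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 400
tele = rng.normal(size=n)                   # hypothetical teleconnection index
X = np.column_stack([np.ones(n), tele])
true_beta = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ true_beta))      # simulated seasonal event counts

beta_hat = fit_poisson(X, y)
```

A positive fitted slope would mean seasons with a high index value are expected to produce more extreme wind events, which is how such a model enhances seasonal predictability.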
Ethical Failure and Its Operational Cost
2011-12-01
Appendix 1: The Kohlberg Scale of Moral Development...has a significant impact on the choices individuals make. Psychologist Lawrence Kohlberg looked extensively into moral reasoning and developed a scale to discern between the different levels of moral reasoning found in individuals. The Kohlberg Scale of Moral Development identifies three
Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation
NASA Technical Reports Server (NTRS)
He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading-order approximation.
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2005-04-01
The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or are due to dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries, where the model was forced by the reanalysis.
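The spectral nudging that fixed the precipitation-location bias can be illustrated in one dimension: only Fourier components with wavelengths at or above the cutoff (2500 km in the paper) are relaxed toward the driving reanalysis, leaving the model's small-scale detail untouched. Everything below (domain size, fields, relaxation rate) is an illustrative assumption, not the RAMS configuration.

```python
import numpy as np

L = 10000.0e3                # domain length, m (assumed)
nx = 256
x = np.linspace(0.0, L, nx, endpoint=False)
cutoff = 2500.0e3            # nudge only wavelengths >= 2500 km

# Hypothetical fields: a "reanalysis" large-scale state and a model state whose
# large scales have drifted but which carries its own small-scale detail.
reanalysis = np.sin(2 * np.pi * x / L)
model = 0.6 * np.sin(2 * np.pi * x / L + 0.8) + 0.2 * np.sin(2 * np.pi * 20 * x / L)

k = np.fft.rfftfreq(nx, d=L / nx)            # spatial frequency, cycles/m
large_scale = k <= 1.0 / cutoff              # mask of nudged wavenumbers

def nudge(field, target, alpha=0.1):
    """Relax only the large-scale Fourier components of field toward target."""
    err = np.fft.rfft(target - field)
    err[~large_scale] = 0.0                  # leave small scales free
    return field + alpha * np.fft.irfft(err, n=nx)

for _ in range(50):
    model = nudge(model, reanalysis)
```

After the loop the model's large-scale wave nearly matches the reanalysis, while the 500 km wave (20 cycles per domain) is preserved exactly: the interior dynamics keep their fine structure while the circulation errors are removed.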
ERIC Educational Resources Information Center
Kikas, Eve; Peets, Katlin; Tropp, Kristiina; Hinn, Maris
2009-01-01
The purpose of the present study was to examine the impact of sex, verbal reasoning, and normative beliefs on direct and indirect forms of aggression. Three scales from the Peer Estimated Conflict Behavior Questionnaire, Verbal Reasoning tests, and an extended version of Normative Beliefs About Aggression Scale were administered to 663 Estonian…
Measuring coral reef decline through meta-analyses
Côté, I.M; Gill, J.A; Gardner, T.A; Watkinson, A.R
2005-01-01
Coral reef ecosystems are in decline worldwide, owing to a variety of anthropogenic and natural causes. One of the most obvious signals of reef degradation is a reduction in live coral cover. Past and current rates of loss of coral are known for many individual reefs; however, until recently, no large-scale estimate was available. In this paper, we show how meta-analysis can be used to integrate existing small-scale estimates of change in coral and macroalgal cover, derived from in situ surveys of reefs, to generate a robust assessment of long-term patterns of large-scale ecological change. Using a large dataset from Caribbean reefs, we examine the possible biases inherent in meta-analytical studies and the sensitivity of the method to patchiness in data availability. Despite the fact that our meta-analysis included studies that used a variety of sampling methods, the regional estimate of change in coral cover we obtained is similar to that generated by a standardized survey programme that was implemented in 1991 in the Caribbean. We argue that for habitat types that are regularly and reasonably well surveyed in the course of ecological or conservation research, meta-analysis offers a cost-effective and rapid method for generating robust estimates of past and current states. PMID:15814352
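The core of such a meta-analysis is an inverse-variance weighted pooling of per-study estimates of change in coral cover. A minimal fixed-effect sketch follows; the study values are invented for illustration, not the Caribbean dataset from the paper.

```python
import math

# Each tuple: (estimated annual change in coral cover, %; variance of that estimate).
# These numbers are illustrative, not the Caribbean data analysed in the paper.
studies = [(-1.8, 0.40), (-0.9, 0.25), (-2.5, 0.90), (-1.2, 0.16), (-0.4, 0.64)]

weights = [1.0 / v for _, v in studies]        # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))      # standard error of pooled effect
ci95 = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(f"pooled change: {pooled:.2f}% per year, 95% CI {ci95[0]:.2f} to {ci95[1]:.2f}")
```

Weighting by inverse variance means the pooled regional estimate is always more precise than any single small-scale survey, which is why the abstract can compare it against a standardized monitoring programme.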
The combustion behavior of large scale lithium titanate battery
Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua
2015-01-01
Safety problems remain a major obstacle to large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated to fire. The flame size variation is depicted to analyze the combustion behavior directly, and the mass loss rate, temperature, and heat release rate are used to analyze the underlying reactions in depth. Based on these observations, the combustion process is divided into three basic stages, with more complex behavior at higher SOC, including the sudden ejection of smoke. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layer structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter with ascending SOC. The results indicate that the battery fire hazard increases with SOC. Analysis suggests that internal short circuits and the Li+ distribution are the main causes of these differences. PMID:25586064
Lebon, G S Bruno; Tzanakis, I; Djambazov, G; Pericleous, K; Eskin, D G
2017-07-01
To address difficulties in treating large volumes of liquid metal with ultrasound, a fundamental study of acoustic cavitation in liquid aluminium, expressed in an experimentally validated numerical model, is presented in this paper. To improve the understanding of the cavitation process, a non-linear acoustic model is validated against reference water pressure measurements from acoustic waves produced by an immersed horn. A high-order method is used to discretize the wave equation in both space and time. These discretized equations are coupled to the Rayleigh-Plesset equation using two different time scales to couple the bubble and flow scales, resulting in a stable, fast, and reasonably accurate method for the prediction of acoustic pressures in cavitating liquids. This method is then applied to the context of treatment of liquid aluminium, where it predicts that the most intense cavitation activity is localised below the vibrating horn and estimates the acoustic decay below the sonotrode with reasonable qualitative agreement with experimental data. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
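The bubble-scale half of the two-time-scale coupling described above is the Rayleigh-Plesset equation. A minimal RK4 integration of a single bubble under mild acoustic forcing is sketched below; all parameters are illustrative water-like values, not the liquid-aluminium settings of the paper.

```python
import math

# Rayleigh-Plesset: R*R'' + 1.5*R'^2 =
#   (p_gas*(R0/R)^(3*kappa) - 2*sigma/R - 4*mu*R'/R - p0 - pa*sin(w*t)) / rho
rho, mu, sigma = 1000.0, 1.0e-3, 0.072     # water-like liquid (assumed)
p0, R0, kappa = 101325.0, 100e-6, 1.4      # ambient pressure, rest radius, polytropic exponent
pa, f = 5.0e3, 20.0e3                      # mild 5 kPa drive at 20 kHz (assumed)
w = 2.0 * math.pi * f
p_gas = p0 + 2.0 * sigma / R0              # gas pressure at equilibrium radius

def accel(t, R, Rdot):
    p = p_gas * (R0 / R) ** (3.0 * kappa) - 2.0 * sigma / R \
        - 4.0 * mu * Rdot / R - p0 - pa * math.sin(w * t)
    return (p / rho - 1.5 * Rdot**2) / R

def rk4_step(t, R, Rdot, dt):
    k1r, k1v = Rdot, accel(t, R, Rdot)
    k2r, k2v = Rdot + 0.5*dt*k1v, accel(t + 0.5*dt, R + 0.5*dt*k1r, Rdot + 0.5*dt*k1v)
    k3r, k3v = Rdot + 0.5*dt*k2v, accel(t + 0.5*dt, R + 0.5*dt*k2r, Rdot + 0.5*dt*k2v)
    k4r, k4v = Rdot + dt*k3v, accel(t + dt, R + dt*k3r, Rdot + dt*k3v)
    return (R + dt*(k1r + 2*k2r + 2*k3r + k4r)/6.0,
            Rdot + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

t, R, Rdot, dt = 0.0, R0, 0.0, 1.0e-8
radii = []
for _ in range(20000):                     # ~4 acoustic cycles
    R, Rdot = rk4_step(t, R, Rdot, dt)
    t += dt
    radii.append(R)
```

The nanosecond time step needed here, versus the much coarser step of the acoustic wave solver, is exactly why the paper couples the bubble and flow scales on two different time scales rather than resolving both everywhere.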
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2016-08-01
The factor structure of the 16 Primary and Secondary subtests of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample was examined with exploratory factor analytic methods (EFA) not included in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b). Factor extraction criteria suggested 1 to 4 factors and results favored 4 first-order factors. When this structure was transformed with the Schmid and Leiman (1957) orthogonalization procedure, the hierarchical g-factor accounted for large portions of total and common variance while the 4 first-order factors accounted for small portions of total and common variance; rendering interpretation at the factor index level less appropriate. Although the publisher favored a 5-factor model where the Perceptual Reasoning factor was split into separate Visual Spatial and Fluid Reasoning dimensions, no evidence for 5 factors was found. It was concluded that the WISC-V provides strong measurement of general intelligence and clinical interpretation should be primarily, if not exclusively, at that level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
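The Schmid-Leiman orthogonalization used here can be sketched directly: given first-order factor loadings and the loadings of those factors on a higher-order g, it apportions each subtest's common variance into a general part and residualized group parts. The loading matrix below is a made-up, deliberately g-saturated illustration, not WISC-V data.

```python
import numpy as np

# Hypothetical first-order pattern: 8 subtests on 4 oblique factors,
# plus the loadings of those 4 factors on a second-order g. Not WISC-V values.
L1 = np.array([
    [0.80, 0.0, 0.0, 0.0], [0.75, 0.0, 0.0, 0.0],
    [0.0, 0.78, 0.0, 0.0], [0.0, 0.70, 0.0, 0.0],
    [0.0, 0.0, 0.72, 0.0], [0.0, 0.0, 0.68, 0.0],
    [0.0, 0.0, 0.0, 0.66], [0.0, 0.0, 0.0, 0.60],
])
g2 = np.array([0.85, 0.80, 0.75, 0.70])     # first-order factor loadings on g

# Schmid-Leiman: general loadings and residualized group-factor loadings.
g_loadings = L1 @ g2                        # subtest loadings on g
group_loadings = L1 * np.sqrt(1.0 - g2**2)  # residualized group loadings

var_g = np.sum(g_loadings**2)               # common variance due to g
var_groups = np.sum(group_loadings**2)      # common variance due to group factors
```

Under these loadings the general factor accounts for more common variance than all four group factors combined, mirroring the abstract's conclusion that interpretation is best anchored at the g level; the transformation also conserves each subtest's total common variance.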
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
Digital geomorphological landslide hazard mapping of the Alpago area, Italy
NASA Astrophysics Data System (ADS)
van Westen, Cees J.; Soeters, Rob; Sijmons, Koert
Large-scale geomorphological maps of mountainous areas are traditionally made using complex symbol-based legends. They can serve as excellent "geomorphological databases", from which an experienced geomorphologist can extract a large amount of information for hazard mapping. However, these maps are not designed to be used in combination with a GIS, due to their complex cartographic structure. In this paper, two methods are presented for digital geomorphological mapping at large scales using GIS and digital cartographic software. The methods are applied to an area with a complex geomorphological setting in the Borsoia catchment, located in the Alpago region, near Belluno in the Italian Alps. The GIS database set-up is presented with an overview of the data layers that have been generated and how they are interrelated. The GIS database was also converted into a paper map, using a digital cartographic package. The resulting large-scale geomorphological hazard map is attached. The resulting GIS database and cartographic product can be used to analyse the hazard type and hazard degree for each polygon, and to find the reasons for the hazard classification.
A prototype automatic phase compensation module
NASA Technical Reports Server (NTRS)
Terry, John D.
1992-01-01
The growing demand for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern for reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector; these materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data; sampling perhaps becomes the only feasible solution to these problems. How to draw samples that can represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
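The SARW algorithm itself is not detailed in this abstract, but the baseline methods it is compared against can be sketched: a plain random walk (RW) oversamples high-degree nodes (its stationary probability is proportional to degree), while Metropolis-Hastings RW (MHRW) corrects this to a uniform node sample. The toy hub-plus-ring graph below is an assumption for illustration.

```python
import random

# Toy graph: node 0 is a hub linked to everyone; the rest form a ring.
n = 100
adj = {i: set() for i in range(n)}
for i in range(1, n):
    adj[0].add(i); adj[i].add(0)
for i in range(1, n):
    j = i % (n - 1) + 1                     # ring over nodes 1..99
    adj[i].add(j); adj[j].add(i)

true_mean_deg = sum(len(adj[v]) for v in adj) / n

def walk(steps, metropolis, seed=42):
    """Estimate mean degree from a (Metropolis-Hastings) random walk."""
    rng = random.Random(seed)
    v, degs = 1, []
    for _ in range(steps):
        u = rng.choice(sorted(adj[v]))      # propose a uniform neighbor
        if metropolis:
            # Accept with prob min(1, deg(v)/deg(u)) -> uniform stationary dist.
            if rng.random() < min(1.0, len(adj[v]) / len(adj[u])):
                v = u
        else:
            v = u                           # plain RW: stationary prob ∝ degree
        degs.append(len(adj[v]))
    return sum(degs) / len(degs)

rw_mean = walk(30000, metropolis=False)
mhrw_mean = walk(30000, metropolis=True)
```

On this graph the plain RW badly overestimates the mean degree because it dwells on the hub, while MHRW stays close to the true value; this degree bias is exactly the kind of distortion that motivates self-adjusting walk designs.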
Summer circulation in the Mexican tropical Pacific
NASA Astrophysics Data System (ADS)
Trasviña, A.; Barton, E. D.
2008-05-01
The main components of large-scale circulation of the eastern tropical Pacific were identified in the mid 20th century, but the details of the circulation at length scales of 10^2 km or less, the mesoscale field, are less well known, particularly during summer. The winter circulation is characterized by large mesoscale eddies generated by intense cross-shore wind pulses. These eddies propagate offshore to provide an important source of mesoscale variability for the eastern tropical Pacific. The summer circulation has not commanded similar attention, the main reason being that the frequent generation of hurricanes in the area renders in situ observations difficult. Before the experiment presented here, the large-scale summer circulation of the Gulf of Tehuantepec was thought to be dominated by a poleward flow along the coast. A drifter-deployment experiment carried out in June 2000, supported by satellite altimetry and wind data, was designed to characterize this hypothesized Costa Rica Coastal Current. We present a detailed comparison between altimetry-estimated geostrophic currents and in situ currents estimated from drifters. Contrary to expectation, no evidence of a coherent poleward coastal flow across the gulf was found. During the 10-week period of observations, we documented a recurrent pattern of circulation within 500 km of shore, forced by a combination of local winds and the regional-scale flow. Instead of the Costa Rica Coastal Current, we found a summer eddy field capable of influencing large areas of the eastern tropical Pacific. Even in summer, the cross-isthmus wind jet is capable of inducing eddy formation.
Functional reasoning in diagnostic problem solving
NASA Technical Reports Server (NTRS)
Sticklen, Jon; Bond, W. E.; Stclair, D. C.
1988-01-01
This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical expert systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.
Imai, Takeshi; Hayakawa, Masayo; Ohe, Kazuhiko
2013-01-01
Prediction of synergistic or antagonistic effects of drug-drug interaction (DDI) in vivo has been of considerable interest over the years. Formal representation of pharmacological knowledge, such as an ontology, is indispensable for machine reasoning about possible DDIs. However, current pharmacology knowledge bases are not sufficient to provide formal representation of DDI information. With this background, this paper presents: (1) a description framework for a pharmacodynamics ontology; and (2) a methodology to utilize the pharmacodynamics ontology to detect different types of possible DDI pairs with supporting information such as underlying pharmacodynamic mechanisms. We also evaluated our methodology in the field of drugs related to the noradrenaline signal transduction process, and 11 different types of possible DDI pairs were detected. The main features of our methodology are its capability to explain the reason for a possible DDI and its ability to distinguish different types of DDIs. These features will be useful not only for providing supporting information to prescribers, but also for large-scale monitoring of drug safety.
Dispersion and Cluster Scales in the Ocean
NASA Astrophysics Data System (ADS)
Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.
2017-12-01
Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds number these flows are considered turbulent. However, because of rotation and stratification constraints they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty of obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed, with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair-separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER), conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes from 1 to 10^6 m scales. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, at scales above 10 km, and a forward energy cascade at scales below 10 km, with possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
[Colombian migration to the Venezuelan agrarian sector: a binational context].
Mora, J; Gomez, A
1980-01-01
The authors attempt to determine the reasons for the chronic national labor shortage in the Venezuelan agrarian sector and for the large-scale emigration of Colombians to work in Venezuelan agriculture. The income of agricultural wage earners and the conditions of labor force reproduction in Venezuela are discussed as factors contributing to the labor shortage. With reference to Colombia, the rapid growth of international commerce and the policy of limiting wages are suggested as factors which contribute to emigration.
Numerical simulation of cloud and precipitation structure during GALE IOP-2
NASA Technical Reports Server (NTRS)
Robertson, F. R.; Perkey, D. J.; Seablom, M. S.
1988-01-01
A regional scale model, LAMPS (Limited Area Mesoscale Prediction System), is used to investigate cloud and precipitation structure that accompanied a short wave system during a portion of GALE IOP-2. A comparison of satellite imagery and model fields indicates that much of the large mesoscale organization of condensation has been captured by the simulation. In addition to reproducing a realistic phasing of two baroclinic zones associated with a split cold front, a reasonable simulation of the gross mesoscale cloud distribution has been achieved.
Ballerina - pirouettes in search of gamma bursts
NASA Astrophysics Data System (ADS)
Brandt, S.; Lund, N.; Pedersen, H.; Hjorth, J.; BALLERINA Collaboration
1999-09-01
The cosmological origin of gamma-ray bursts has now been established with reasonable certainty. Many more bursts will need to be studied to establish the typical distance scale and to map out the large diversity in properties indicated by the first handful of events. We are proposing Ballerina, a small satellite to provide accurate positions and new data on gamma-ray bursts. We anticipate a detection rate an order of magnitude larger than that obtained from Beppo-SAX.
The collaborative historical African rainfall model: description and evaluation
Funk, Christopher C.; Michaelsen, Joel C.; Verdin, James P.; Artan, Guleid A.; Husak, Gregory; Senay, Gabriel B.; Gadain, Hussein; Magadazire, Tamuka
2003-01-01
In Africa the variability of rainfall in space and time is high, and the general availability of historical gauge data is low. This makes many food security and hydrologic preparedness activities difficult. In order to help overcome this limitation, we have created the Collaborative Historical African Rainfall Model (CHARM). CHARM combines three sources of information: climatologically aided interpolated (CAI) rainfall grids (monthly/0.5°), National Centers for Environmental Prediction reanalysis precipitation fields (daily/1.875°) and orographic enhancement estimates (daily/0.1°). The first set of weights scales the daily reanalysis precipitation fields to match the gridded CAI monthly rainfall time series. This produces data with a daily/0.5° resolution. A diagnostic model of orographic precipitation, VDELB—based on the dot product of the surface wind V and terrain gradient (DEL) and atmospheric buoyancy B—is then used to estimate the precipitation enhancement produced by complex terrain. Although the data are produced on 0.1° grids to facilitate integration with satellite-based rainfall estimates, the 'true' resolution of the data will be less than this value, and varies with station density, topography, and precipitation dynamics. The CHARM is best suited, therefore, to applications that integrate rainfall or rainfall-driven model results over large regions. The CHARM time series is compared with three independent datasets: dekadal satellite-based rainfall estimates across the continent, dekadal interpolated gauge data in Mali, and daily interpolated gauge data in western Kenya. These comparisons suggest reasonable accuracies (standard errors of about half a standard deviation) when data are aggregated to regional scales, even at daily time steps. Thus constrained, numerical weather prediction precipitation fields do a reasonable job of representing large-scale diurnal variations.
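The first weighting step described above — rescaling daily reanalysis precipitation so each grid cell's monthly total matches the CAI monthly analysis — can be sketched in a few lines. All array shapes and values here are invented for illustration; the actual CHARM grids and handling of zero-rain months may differ.

```python
import numpy as np

# Synthetic stand-ins: 31 daily reanalysis fields and a CAI monthly target
# on a small 4x4 grid (units: mm/day and mm/month).
rng = np.random.default_rng(1)
daily_reanalysis = rng.gamma(2.0, 1.5, size=(31, 4, 4))
cai_monthly = np.full((4, 4), 120.0)

# Per-cell weight = target monthly total / reanalysis monthly total,
# guarding against division by zero in dry cells.
daily_total = daily_reanalysis.sum(axis=0)
weights = np.where(daily_total > 0, cai_monthly / daily_total, 0.0)
daily_scaled = daily_reanalysis * weights      # broadcasts over the day axis

# The scaled daily fields now sum to the CAI monthly analysis in every cell.
assert np.allclose(daily_scaled.sum(axis=0), cai_monthly)
```

The design point is that the reanalysis supplies the daily temporal pattern while the gauge-based CAI analysis constrains the monthly magnitude.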
Drovetski, Sergei V.; Raković, Marko; Semenov, Georgy; Fadeev, Igor V.; Red’kin, Yaroslav A.
2014-01-01
Phylogeographic studies of Holarctic birds are challenging because they involve vast geographic scale, complex glacial history, extensive phenotypic variation, and heterogeneous taxonomic treatment across countries, all of which require large sample sizes. Knowledge about the quality of phylogeographic information provided by different loci is crucial for study design. We use sequences of one mtDNA gene, one sex-linked intron, and one autosomal intron to elucidate large-scale phylogeographic patterns in the Holarctic lark genus Eremophila. The mtDNA ND2 gene identified six geographically, ecologically, and phenotypically concordant clades in the Palearctic that diverged in the Early–Middle Pleistocene and suggested paraphyly of the horned lark (E. alpestris) with respect to the Temminck's lark (E. bilopha). In the Nearctic, ND2 identified five subclades which diverged in the Late Pleistocene. They overlapped geographically and were not concordant phenotypically or ecologically. Nuclear alleles provided little information on geographic structuring of genetic variation in horned larks beyond supporting the monophyly of Eremophila and paraphyly of the horned lark. Multilocus species trees based on two nuclear or all three loci provided poor support for haplogroups identified by mtDNA. The node ages calculated using mtDNA were consistent with the available paleontological data, whereas individual nuclear loci and multilocus species trees appeared to underestimate node ages. We argue that mtDNA is capable of discovering independent evolutionary units within avian taxa and can provide a reasonable phylogeographic hypothesis when geographic scale, geologic history, and phenotypic variation in the study system are too complex for proposing reasonable a priori hypotheses required for multilocus methods. Finally, we suggest splitting the currently recognized horned lark into five Palearctic and one Nearctic species. PMID:24498139
Techniques that Link Extreme Events to the Large Scale, Applied to California Heat Waves
NASA Astrophysics Data System (ADS)
Grotjahn, R.
2015-12-01
Understanding the mechanisms by which California Central Valley (CCV) summer extreme hot spells develop is very important, since these events have major impacts on the economy and human safety. Results from a series of CCV heat wave studies will be presented, emphasizing the techniques used. Key larger-scale elements are identified statistically that are also consistent with synoptic and dynamic understanding of what must be present during extreme heat. Beyond providing a clear synoptic explanation, these key elements have high predictability, in part because soil moisture has little annual variation in the heavily irrigated CCV. In turn, the predictability naturally leads to an effective tool to assess climate model simulation of these heat waves in historical and future climate scenarios. (Does the model develop extreme heat for the correct reasons?) Further work identified that these large-scale elements arise in two quite different ways: one from southwestward expansion of a pre-existing heat wave in southwest Canada, the other formed in place from parcels traversing the North Pacific. The pre-existing heat wave explains an early result showing correlation between heat waves in Sacramento, California, and other locations along the US west coast, including distant Seattle, Washington. CCV heat waves can be preceded by unusually strong tropical Indian Ocean and Indonesian convection; this partial link may occur through an Asian subtropical jet waveguide. Another link revealed by diagnostics is a middle- and higher-latitude source of wave activity in Siberia and East Asia that also leads to the development of CCV heat waves. This talk will address as many of these results, and the tools used to obtain them, as is reasonable within the available time.
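The abstract does not give its event definition, but heat-wave diagnostics of this kind typically start by flagging hot spells as runs of consecutive days above a high percentile threshold. The sketch below uses that common convention on synthetic data; the 97.5th-percentile threshold and 3-day minimum duration are our assumptions, not values from the study.

```python
import numpy as np

# Synthetic summer (92-day) series of daily maximum temperature (deg C).
rng = np.random.default_rng(2)
tmax = 30 + 5 * rng.standard_normal(92)

# Hot day = exceedance of the 97.5th percentile of the series itself.
thresh = np.percentile(tmax, 97.5)
hot = tmax > thresh

# Extract hot spells: runs of >= 3 consecutive hot days, as (start, end) indices.
events = []
start = None
for d, h in enumerate(hot):
    if h and start is None:
        start = d                      # run begins
    elif not h and start is not None:
        if d - start >= 3:
            events.append((start, d - 1))
        start = None
if start is not None and len(hot) - start >= 3:
    events.append((start, len(hot) - 1))   # run reaches end of record
```

A model-assessment tool like the one described would then composite the large-scale fields on the detected event dates and compare observed and simulated composites.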
Properties of galaxies reproduced by a hydrodynamic simulation
NASA Astrophysics Data System (ADS)
Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Sijacki, D.; Xu, D.; Snyder, G.; Bird, S.; Nelson, D.; Hernquist, L.
2014-05-01
Previous simulations of the growth of cosmic structures have broadly reproduced the `cosmic web' of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube 106.5 megaparsecs on a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and characteristics of hydrogen on large scales, and at the same time matches the `metal' and hydrogen content of galaxies on small scales.
Johannessen, Liv Karen; Obstfelder, Aud; Lotherington, Ann Therese
2013-05-01
The purpose of this paper is to explore the making and scaling of information infrastructures, as well as how the conditions for scaling a component may change for the vendor. The first research question is how the making and scaling of a healthcare information infrastructure can be done and by whom. The second question is what scope for manoeuvre there might be for vendors aiming to expand their market. This case study is based on an interpretive approach, whereby data is gathered through participant observation and semi-structured interviews. A case study of the making and scaling of an electronic system for general practitioners ordering laboratory services from hospitals is described as comprising two distinct phases. The first may be characterized as an evolving phase, when development, integration and implementation were achieved in small steps, and the vendor, together with end users, had considerable freedom to create the solution according to the users' needs. The second phase was characterized by a large-scale procurement process over which regional healthcare authorities exercised much more control and the needs of groups other than the end users influenced the design. The making and scaling of healthcare information infrastructures is not simply a process of evolution, in which the end users use and change the technology. It also consists of large steps, during which different actors, including vendors and healthcare authorities, may make substantial contributions. This process requires work, negotiation and strategies. The conditions for the vendor may change dramatically, from considerable freedom and close relationships with users and customers in the small-scale development, to losing control of the product and being required to engage in more formal relations with customers in the wider public healthcare market. 
Onerous procurement processes may be one of the reasons why large-scale implementation of information projects in healthcare is difficult and slow. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. 
Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large-scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
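Empirical rules (ii) and (iii) above — porosity fluctuations correlated with the logarithm of permeability, and lognormal (right-skewed) permeability frequency distributions — can be demonstrated with a toy calculation. The coefficients and sample sizes below are invented for illustration and are not taken from the presentation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Rule (ii): log-permeability varies linearly with porosity fluctuations,
# plus uncorrelated noise (all coefficients hypothetical).
phi = 0.15 + 0.03 * rng.standard_normal(n)            # porosity
log_k = -14.0 + 40.0 * (phi - 0.15) + 0.3 * rng.standard_normal(n)
k = np.exp(log_k)                                     # permeability (arb. units)

# phi and log k are strongly correlated by construction...
corr = np.corrcoef(phi, log_k)[0, 1]

# ...and exponentiating a normal log k yields a right-skewed (lognormal)
# permeability distribution, rule (iii).
skew = ((k - k.mean()) ** 3).mean() / k.std() ** 3
```

The point of the toy model is that modest, normally distributed porosity fluctuations translate into permeability contrasts spanning orders of magnitude, which is why sparse well sampling predicts flow so poorly.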
2013-01-01
The 2002, 2007, and 2012 complementary medicine questionnaires fielded on the National Health Interview Survey provide the most comprehensive data on complementary medicine available for the United States. They filled the void for large-scale, nationally representative, publicly available datasets on the out-of-pocket costs, prevalence, and reasons for use of complementary medicine in the U.S. Despite their wide use, this is the first article describing the multi-faceted and largely qualitative processes undertaken to develop the surveys. We hope this in-depth description enables policy makers and researchers to better judge the content validity and utility of the questionnaires and their resultant publications. PMID:24267412
Stussman, Barbara J; Bethell, Christina D; Gray, Caroline; Nahin, Richard L
2013-11-23
The 2002, 2007, and 2012 complementary medicine questionnaires fielded on the National Health Interview Survey provide the most comprehensive data on complementary medicine available for the United States. They filled the void for large-scale, nationally representative, publicly available datasets on the out-of-pocket costs, prevalence, and reasons for use of complementary medicine in the U.S. Despite their wide use, this is the first article describing the multi-faceted and largely qualitative processes undertaken to develop the surveys. We hope this in-depth description enables policy makers and researchers to better judge the content validity and utility of the questionnaires and their resultant publications.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gender differences in reasons to quit smoking among adolescents.
Struik, Laura L; O'Loughlin, Erin K; Dugas, Erika N; Bottorff, Joan L; O'Loughlin, Jennifer L
2014-08-01
It is well established that many adolescents who smoke want to quit, but little is known about why adolescents want to quit and whether reasons to quit differ across gender. The objective of this study was to determine if reasons to quit smoking differ between boys and girls. Data on the Adolescent Reasons for Quitting (ARFQ) scale were collected in mailed self-report questionnaires in 2010-2011 from 113 female and 83 male smokers aged 14-19 years participating in AdoQuest, a longitudinal cohort study of the natural course of the co-occurrence of health-compromising behaviors in children. Overall, the findings indicate that reasons to quit in boys and girls appear to be generally similar, although this finding may relate to a lack of gender-oriented items in the ARFQ scale. There is a need for continued research to develop and test reasons-to-quit scales for adolescents that include gender-oriented items. © The Author(s) 2013.
DOT National Transportation Integrated Search
2017-01-01
The 2005 Large Truck Crash Causation Study (LTCCS)i was the first-ever national study to attempt to determine the critical reasons and associated factors that contribute to serious large truck crashes. The LTCCS defines critical reason as the r...
Chaufan, Claudia
2007-10-01
I offer a critical perspective on a large-scale population study on gene-environment interactions and common diseases proposed by the US Secretary of Health and Human Services' Advisory Committee on Genetics, Health, and Society (SACGHS). I argue that for scientific and policy reasons this and similar studies have little to add to current knowledge about how to prevent, treat, or decrease inequalities in common diseases, all of which are major claims of the proposal. I use diabetes as an exemplar of the diseases that the study purports to illuminate. I conclude that the question is not whether the study will meet expectations or whether the current emphasis on a genetic paradigm is real or imagined, desirable or not. Rather, the question is why, given the flaws of the science underwriting the study, its assumptions remain unchallenged. Future research should investigate the reasons for this immunity from criticism and for the popularity of this and similar projects among laypersons as well as among intellectuals.
ERIC Educational Resources Information Center
Edelstein, Barry A.; Heisel, Marnin J.; McKee, Deborah R.; Martin, Ronald R.; Koven, Lesley P.; Duberstein, Paul R.; Britton, Peter C.
2009-01-01
Purpose: The purposes of these studies were to develop and initially evaluate the psychometric properties of the Reasons for Living Scale-Older Adult version (RFL-OA), an older adults version of a measure designed to assess reasons for living among individuals at risk for suicide. Design and Methods: Two studies are reported. Study 1 involved…
Measurement of kT splitting scales in W→ℓν events at [Formula: see text] with the ATLAS detector.
Aad, G; Abajyan, T; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdelalim, A A; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Åkesson, T P A; Akimoto, G; Akimov, A V; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, F; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Argyropoulos, S; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Astbury, A; Atkinson, M; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Axen, D; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, S; Balek, P; Balli, F; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; 
Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartsch, V; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behar Harpaz, S; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Bernlochner, F U; Berry, T; Bertella, C; Bertin, A; Bertolucci, F; Besana, M I; Besjes, G J; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bittner, B; Black, C W; Black, J E; Black, K M; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blocki, J; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boek, T T; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; Bomben, M; Bona, M; Boonekamp, M; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; 
Braun, H M; Brazzale, S F; Brelier, B; Bremer, J; Brendlinger, K; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brown, G; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Bugge, L; Bulekov, O; Bundock, A C; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Chapman, J W; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, X; Chen, Y; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; 
Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choudalakis, G; Chouridou, S; Chow, B K B; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirilli, M; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, B; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Colas, J; Cole, S; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Courneyea, L; Cowan, G; Cox, B E; Cranmer, K; Crépé-Renaudin, S; Crescioli, F; Cristinziani, M; Crosetti, G; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dallaire, F; Dallapiccola, C; Dam, M; Damiani, D S; Danielsson, H O; Dao, V; Darbo, G; Darlea, G L; Darmora, S; Dassoulas, J A; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davignon, O; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Del Peso, J; Del Prete, T; Delemontex, T; Deliyergiyev, M; Dell'Acqua, A; 
Dell'Asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dinut, F; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Dohmae, T; Doi, Y; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dotti, A; Dova, M T; Doyle, A T; Dressnandt, N; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Duda, D; Dudarev, A; Dudziak, F; Duerdoth, I P; Duflot, L; Dufour, M-A; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Duxfield, R; Dwuznik, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edson, W; Edwards, C A; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Engelmann, R; Engl, A; Epp, B; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Facini, G; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Feng, C; Feng, E J; Fenyuk, A B; 
Ferencei, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, J; Fisher, M J; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Fonseca Martin, T; Formica, A; Forti, A; Fortin, D; Fournier, D; Fowler, A J; Fox, H; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; Fraternali, M; Fratina, S; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gadatsch, S; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gandrajula, R P; Gao, Y S; Gaponenko, A; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glonti, G L; Goddard, J R; Godfrey, J; Godlewski, J; Goebel, M; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto 
Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Göpfert, T; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramstad, E; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gunther, J; Guo, B; Guo, J; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hajduk, Z; Hakobyan, H; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayakawa, T; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Herten, G; Hertenberger, R; Hervas, L; 
Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmgren, S O; Holy, T; Holzbauer, J L; Hong, T M; Hooft van Huysduynen, L; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikematsu, K; Ikeno, M; Iliadis, D; Ilic, N; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Jantsch, A; Janus, M; Jared, R C; Jarlskog, G; Jeanty, L; Jeng, G-Y; Jen-La Plante, I; Jennens, D; Jenni, P; Jeske, C; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Joram, C; Jorge, P M; Joshi, K D; Jovicevic, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalinin, S; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karagounis, M; Karakostas, K; Karnevskiy, M; Kartvelishvili, V; Karyukhin, A N; 
Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, Y; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Keller, J S; Kenyon, M; Keoshkerian, H; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koenig, S; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Köneke, K; König, A C; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Krejci, F; Kretzschmar, J; Kreutzfeldt, K; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, M K; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; 
Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Lablak, S; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lambourne, L; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larner, A; Lassnig, M; Laurelli, P; Lavorini, V; Lavrijsen, W; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Lepold, F; Leroy, C; Lessard, J-R; Lester, C G; Lester, C M; Levêque, J; Levin, D; Levinson, L J; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, D; Liu, J B; Liu, L; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Lukas, W; Luminari, L; Lund, E; Lundberg, B; Lundberg, J; Lundberg, O; Lund-Jensen, B; Lundquist, J; Lungwitz, M; Lynn, D; Lysak, 
R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Macina, D; Mackeprang, R; Madar, R; Madaras, R J; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Magnoni, L; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marino, C P; Marroquim, F; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martinez Outschoorn, V; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matsunaga, H; Matsushita, T; Mättig, P; Mättig, S; Mattravers, C; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazur, M; Mazzaferro, L; Mazzanti, M; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meehan, S; Meera-Lebbai, R; Meguro, T; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Michal, S; Micu, L; Middleton, R P; Migas, S; 
Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitrevski, J; Mitsou, V A; Mitsui, S; Miyagawa, P S; Mjörnmark, J U; Moa, T; Moeller, V; Mohapatra, S; Mohr, W; Moles-Valls, R; Molfetas, A; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Möser, N; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Muenstermann, D; Müller, T A; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Napier, A; Narayan, R; Nash, M; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Niedercorn, F; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novakova, J; Nozaki, M; Nozka, L; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Odier, J; Ogren, H; 
Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Paredes Hernandez, D; Park, W; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pashapour, S; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedraza Morales, M I; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penson, A; Penwell, J; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poblaguev, A; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Poll, J; Polychronakos, V; 
Pomeroy, D; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Raas, M; Radeka, V; Radescu, V; Radloff, P; Ragusa, F; Rahal, G; Rahimi, A M; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reinsch, A; Reisinger, I; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richter, R; Richter-Was, E; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Ritsch, E; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Roe, A; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rumyantsev, L; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ruzicka, P; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; 
Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarrazin, B; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, Y; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaelicke, A; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, 
D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, S W; Snow, J; Snyder, S; Sobie, R; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Soni, N; Sood, A; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A; South, D; Spagnolo, S; Spanò, F; Spighi, R; Spigo, G; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Staude, A; Stavina, P; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Su, D; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; 
Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tonoyan, A; Topfel, C; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; 
Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; Vanadia, M; Vandelli, W; Vaniachine, A; Vankov, P; Vannucci, F; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vazquez Schroeder, T; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; Volpini, G; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorwerk, V; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahlen, H; Wahrmund, S; Wakabayashi, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M S; Webster, J S; Weidberg, A R; Weigell, P; Weingarten, J; Weiser, C; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Wetter, J; Weydert, C; Whalen, K; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; 
Williams, H H; Williams, S; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xu, C; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yang, Z; Yanush, S; Yao, L; Yasu, Y; Yatsenko, E; Ye, J; Ye, S; Yen, A L; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D; Yu, D R; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, L; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zibell, A; Zieminska, D; Zimin, N I; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zutshi, V; Zwalinski, L
A measurement of splitting scales, as defined by the k_T clustering algorithm, is presented for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb⁻¹, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a k_T cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
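The splitting scales measured here are the √d values at which the k_T algorithm merges clusters or removes them against the beam. The sketch below is illustrative only: it uses a toy pT-weighted recombination in (y, φ) rather than the full four-momentum scheme, and production analyses use dedicated libraries such as FastJet.

```python
import math

R = 0.6  # radius parameter; the ATLAS value is an assumption here

def kt_distances(particles):
    """particles: list of (pt, y, phi). Returns beam and pairwise kT distances."""
    d_beam = [(pt * pt, i) for i, (pt, y, phi) in enumerate(particles)]
    d_pair = []
    for i in range(len(particles)):
        for j in range(i + 1, len(particles)):
            pti, yi, phii = particles[i]
            ptj, yj, phij = particles[j]
            dphi = abs(phii - phij)
            if dphi > math.pi:
                dphi = 2.0 * math.pi - dphi
            dr2 = (yi - yj) ** 2 + dphi ** 2
            d_pair.append((min(pti * pti, ptj * ptj) * dr2 / R ** 2, i, j))
    return d_beam, d_pair

def splitting_scales(particles):
    """Run the kT cluster sequence; return sqrt(d) of each step, hardest first."""
    parts = list(particles)
    scales = []
    while parts:
        d_beam, d_pair = kt_distances(parts)
        best_beam = min(d_beam)
        best_pair = min(d_pair) if d_pair else (float("inf"), -1, -1)
        if best_beam[0] <= best_pair[0]:
            scales.append(math.sqrt(best_beam[0]))  # beam removal
            parts.pop(best_beam[1])
        else:
            d, i, j = best_pair
            scales.append(math.sqrt(d))
            pti, yi, phii = parts[i]
            ptj, yj, phij = parts[j]
            # Toy pT-weighted recombination (simplification; ignores phi wrap).
            pt = pti + ptj
            parts[j] = (pt, (pti * yi + ptj * yj) / pt,
                        (pti * phii + ptj * phij) / pt)
            parts.pop(i)
    return sorted(scales, reverse=True)
```

For a two-particle toy event the sequence contains one merge scale and one beam-removal scale; the analysis above studies the four hardest such scales per event.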
How do glacier inventory data aid global glacier assessments and projections?
NASA Astrophysics Data System (ADS)
Hock, R.
2017-12-01
Large-scale glacier modeling relies heavily on datasets that are collected by many individuals across the globe, but managed and maintained in a coordinated fashion by international data centers. The Global Terrestrial Network for Glaciers (GTN-G) provides the framework for coordinating and making available a suite of data sets such as the Randolph Glacier Inventory (RGI), the Glacier Thickness Dataset, and the World Glacier Inventory (WGI). These datasets have greatly increased our ability to assess global-scale glacier mass changes. These data have also been vital for projecting the mass changes of all mountain glaciers in the world outside the Greenland and Antarctic ice sheets: in total >200,000 glaciers covering an area of more than 700,000 km². Using forcing from 8 to 15 GCMs and 4 different emission scenarios, global-scale glacier evolution models project multi-model mean net mass losses of all glaciers between 7 cm and 24 cm sea-level equivalent by the end of the 21st century. Projected mass losses vary greatly depending on the choice of the forcing climate and emission scenario. Insufficiently constrained model parameters are likely an important reason for the large differences found among these studies even when forced by the same emission scenario, especially on regional scales.
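The quoted sea-level-equivalent (SLE) range can be related to mass via the standard conversion, stated here as an assumption rather than taken from the abstract: with an ocean area of about 3.62×10⁸ km², roughly 362 Gt of water raises global mean sea level by 1 mm.

```python
# 1 mm of global mean sea level over 3.62e8 km^2 of ocean:
# 3.62e8 km^2 * 1e6 m^2/km^2 * 1e-3 m * 1000 kg/m^3 = 3.62e14 kg = 362 Gt.
GT_PER_MM_SLE = 362.0

def mass_loss_to_sle_cm(mass_gt):
    """Gigatonnes of glacier mass loss -> centimetres of sea-level equivalent."""
    return mass_gt / GT_PER_MM_SLE / 10.0

def sle_cm_to_mass_gt(sle_cm):
    """Centimetres of sea-level equivalent -> gigatonnes of mass loss."""
    return sle_cm * 10.0 * GT_PER_MM_SLE
```

On this conversion, the projected 7-24 cm SLE range corresponds to roughly 25,000-87,000 Gt of ice loss over the century.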
Vertical Transport Processes for Inert and Scavenged Species: TRACE-A Measurements
NASA Technical Reports Server (NTRS)
Chatfield, Robert B.; Chan, K. Roland (Technical Monitor)
1997-01-01
The TRACE-A mission of the NASA DC-8 aircraft made a large-scale survey of the tropical and subtropical atmosphere in September and October of 1992. Both in-situ measurements of CO (G. Sachse, NASA Langley) and aerosol size (J. Browell group, NASA Langley) provide excellent data sets with which to constrain vertical transport by planetary boundary layer mixing and deep-cloud cumulus convection. Lidar profiles of aerosol-induced scattering and ozone (also by Bremen) require somewhat more subtle interpretation as tracers, but the vertical information on layering largely compensates for these complexities. The reason this DC-8 dataset is so useful is that very large areas of biomass burning over Africa and South America provide surface sources of appropriate sizes with which to characterize vertical and horizontal motions; the major limitation of our source description is that biomass burning patterns move considerably every few days, and daily burning inventories are a matter of concurrent, intensive research. We use the Penn State / NCAR MM5 model in an assimilation mode on the synoptic and intercontinental scale, and assess the success it shows in vertical transport descriptions. We find that the general level of emissions suggested by the climatological approach (Will. Has, U. of Montana) appears to be approximately correct, possibly a bit low, for this October, 1992, time period. Vertical transport by planetary boundary layer mixing to 5.5 km was observed and reproduced in our simulations. Furthermore, we find evidence that a Blackadar "transilient" or matrix-transport scheme is needed, but may require some adaptation in our tracer model: CO seems to exhibit very high values at the top of the planetary boundary layer, a process that stretches the eddy-diffusion parameterization.
We will report on progress in improving the deep convective transport of carbon monoxide: the Grell scheme as we used it at 100 km resolution did not transport enough material to the upper troposphere. We expect to be able to attribute this either to parameterization shortcomings (inadequacy of this parameterization at the large 100 km scale) or to other causes. Nevertheless, the qualitative nature of deep transport by clouds shows up well in the simulations. As for scavengable species, the simulations predict tens of micrograms per standard cubic meter of smoke aerosol in the boundary layer and, in a straightforward illustration of our simple bulk-mass scavenging parameterization, one to two micrograms per standard cubic meter in the free troposphere just above the source regions: very high concentrations for the free troposphere. We expect to report on comparisons of these predictions to a variety of observations.
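The transilient (matrix-transport) idea contrasted above with eddy diffusion can be sketched with a toy column: a mixing matrix lets distant layers exchange air directly, which local diffusion cannot do. The 4-layer column and matrix entries below are purely illustrative, not values from the study.

```python
def transilient_step(c, M):
    """One transilient mixing step: c_new[i] = sum_j M[i][j] * c[j].
    M[i][j] is the fraction of air arriving in layer i from layer j.
    For equal-mass layers a doubly stochastic M conserves the column total,
    while still allowing direct, non-local exchange between distant layers."""
    n = len(c)
    return [sum(M[i][j] * c[j] for j in range(n)) for i in range(n)]

# Toy 4-layer column of a CO-like tracer (ppb), surface layer first.
c0 = [200.0, 120.0, 90.0, 80.0]

# Hypothetical mixing matrix: mostly local exchange plus a small direct
# surface <-> PBL-top pathway, the kind of non-local transport that
# strains an eddy-diffusion parameterization.
M = [
    [0.80, 0.10, 0.00, 0.10],
    [0.10, 0.80, 0.10, 0.00],
    [0.00, 0.10, 0.80, 0.10],
    [0.10, 0.00, 0.10, 0.80],
]

c1 = transilient_step(c0, M)  # surface tracer reaches the top layer in one step
```

One step moves tracer from the surface layer straight to the top layer while conserving the column total, which is the qualitative behaviour eddy diffusion alone cannot produce.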
NASA Astrophysics Data System (ADS)
Schramm, J. W.; Jin, H.; Keeling, E. G.; Johnson, M.; Shin, H. J.
2017-05-01
This paper reports on our use of a fine-grained learning progression to assess secondary students' reasoning through carbon-transforming processes (photosynthesis, respiration, biosynthesis). Based on previous studies, we developed a learning progression with four progress variables: explaining mass changes, explaining energy transformations, explaining subsystems, and explaining large-scale systems. For this study, we developed a 2-week teaching module integrating these progress variables. Students were assessed before and after instruction, with the learning progression framework driving data analysis. Our work revealed significant overall learning gains for all students, with the mean post-test person proficiency estimates higher by 0.6 logits than the pre-test estimates. Further, instructional effects were statistically similar across all grades included in the study (7th-12th), with students in the lowest third of initial proficiency evidencing the largest learning gains. Students showed significant gains in explaining the processes of photosynthesis and respiration and in explaining transformations of mass and energy, areas where prior research has shown that student misconceptions are prevalent. Student gains on items about large-scale systems were higher than on the other variables (although absolute proficiency there was still lower). Gains across each of the biological processes tested were similar, despite the different levels of emphasis each had in the teaching unit. Together, these results indicate that students can benefit from instruction addressing these processes more explicitly. This requires pedagogical design quite different from that usually practiced with students at this level.
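To give the 0.6-logit gain a concrete reading, assume the proficiency estimates come from a Rasch-type model (an assumption; the abstract does not name the exact IRT model), where the probability of answering an item correctly is logistic in the person-item difference.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a person with proficiency theta
    (in logits) answers an item of difficulty b (in logits) correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item matched to the pre-test proficiency (theta == b) is a coin flip;
# a 0.6-logit mean gain raises the success probability to roughly 65%.
before = p_correct(0.0, 0.0)  # 0.5
after = p_correct(0.6, 0.0)   # ~0.646
```

So the reported mean gain corresponds to moving from about a 50% to about a 65% chance of success on an item pitched at the students' initial level.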
Rahul, P R C; Bhawar, R L; Ayantika, D C; Panicker, A S; Safai, P D; Tharaprabhakaran, V; Padmakumari, B; Raju, M P
2014-01-14
First ever 3-day aircraft observations of vertical profiles of Black Carbon (BC) were obtained during the Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) conducted on 30th August, 4th and 6th September 2009 over Guwahati (26° 11'N, 91° 44'E), the largest metropolitan city in the Brahmaputra River Valley (BRV) region. The results revealed that apart from the surface/near-surface loading of BC due to anthropogenic processes causing a heating of 2 K/day, the large-scale Walker and Hadley atmospheric circulations associated with the Indian summer monsoon help in the formation of a second layer of black carbon in the upper atmosphere, which generates an upper-atmospheric heating of ~2 K/day. Lofting of BC aerosols by these large-scale circulating atmospheric cells to the upper atmosphere (4-6 km) could also be a reason for the extreme climate change scenarios being witnessed in the BRV region.
A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows
NASA Astrophysics Data System (ADS)
Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi
2016-09-01
Dispersed multiphase turbulent flows are present in many industrial and commercial applications like internal combustion engines, turbofans, dispersion of contaminants, steam turbines, etc. There is therefore a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulation (LES) offers good accuracy and reliable results at reasonable computational cost, making it an attractive basis for numerical tools for particle-laden turbulent flows. Nonetheless, in dispersed multiphase flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost through the filtering procedure. To address this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work a new model for reconstructing subgrid-scale effects on the dispersed phase is presented and assessed. The methodology is based on the reconstruction of statistics via probability density functions (PDFs).
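As a hedged illustration of the general idea of reconstructing the subgrid velocity seen by a particle (a standard Langevin-type sketch, not the paper's specific PDF-based method), one can write an Ornstein-Uhlenbeck update whose stationary statistics match a prescribed subgrid kinetic energy; all parameter values below are hypothetical:

```python
import math
import random

def subgrid_velocity_seen(u_sgs_prev, k_sgs, tau_L, dt, rng=random):
    """One Langevin (Ornstein-Uhlenbeck) step for the unresolved velocity
    component seen by a particle.

    k_sgs : assumed subgrid kinetic energy (m^2/s^2); per-component
            variance is sigma^2 = (2/3) * k_sgs.
    tau_L : assumed subgrid Lagrangian time scale (s).
    Illustrative model only, not the statistical reconstruction of the paper.
    """
    sigma = math.sqrt(2.0 * k_sgs / 3.0)
    a = math.exp(-dt / tau_L)            # exponential memory decay
    b = sigma * math.sqrt(1.0 - a * a)   # noise amplitude keeping variance stationary
    return a * u_sgs_prev + b * rng.gauss(0.0, 1.0)

# Draw a long trajectory; its variance should settle near (2/3) * k_sgs = 0.04.
random.seed(0)
samples, u = [], 0.0
for _ in range(20000):
    u = subgrid_velocity_seen(u, k_sgs=0.06, tau_L=1e-3, dt=1e-4)
    samples.append(u)
```

The exact OU discretization (the `a`/`b` pair) is used so the sampled fluctuation keeps the prescribed variance for any time step.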
Chiang, Michael F; Starren, Justin B
2002-01-01
The successful implementation of clinical information systems is difficult. In examining the reasons and potential solutions for this problem, the medical informatics community may benefit from the lessons of a rich body of software engineering and management literature about the failure of software projects. Based on previous studies, we present a conceptual framework for understanding the risk factors associated with large-scale software projects. However, the vast majority of the existing literature is based on large, enterprise-wide systems, and it is unclear whether those results may be scaled down and applied to smaller projects such as departmental medical information systems. To examine this issue, we discuss the case study of a delayed electronic medical record implementation project in a small specialty practice at Columbia-Presbyterian Medical Center. While the factors contributing to the delay of this small project share some attributes with those found in larger organizations, there are important differences. The significance of these differences for groups implementing small medical information systems is discussed.
Mining influence on underground water resources in arid and semiarid regions
NASA Astrophysics Data System (ADS)
Luo, A. K.; Hou, Y.; Hu, X. Y.
2018-02-01
Coordinated mining of coal and water resources in arid and semiarid regions has long been a focal issue. Taking the Energy and Chemical Base in Northern Shaanxi as an example, this research statistically analyzes coal yield and drainage volume from several large-scale mines in the mining area, determines the average water volume drained per ton of coal, and calculates drainage volumes for four typical years at different mining intensities. Combining two decades of precipitation observations with water-level data from observation wells, the groundwater table, precipitation infiltration recharge, and evaporation capacity during mine drainage are calculated. The research also analyzes the transformation relationships among surface water, mine water, and groundwater. The results show that massive mine drainage, caused by large-scale coal mining in the study area, is the main reason for the reduction in water resources and for the changed transformation relationships among surface water, groundwater, and mine water.
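The drainage calculation described above (average drainage per ton of coal times annual yield) can be sketched as follows; the abstract quotes no numbers, so all figures here are hypothetical placeholders:

```python
# Hypothetical figures for illustration only; the abstract does not quote values.
water_per_ton_m3 = 1.8          # assumed average drainage per ton of coal (m^3/t)
annual_yield_tons = {            # assumed yields for four typical years (t/yr)
    "low": 20e6,
    "medium": 50e6,
    "high": 100e6,
    "very_high": 150e6,
}

# Annual mine drainage volume for each mining-intensity scenario (m^3/yr).
drainage_m3 = {year: yield_t * water_per_ton_m3
               for year, yield_t in annual_yield_tons.items()}
```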
Propagation of electromagnetic waves in a turbulent medium
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Hartke, G. J.
1986-01-01
Theoretical modeling of the wealth of experimental data on propagation of electromagnetic radiation through turbulent media has centered on the use of the Heisenberg-Kolmogorov (HK) model, which is, however, valid only for medium to small sized eddies. Ad hoc modifications of the HK model to encompass the large-scale region of the eddy spectrum have been widely used, but a sound physical basis has been lacking. A model for large-scale turbulence that was recently proposed is applied to the above problem. The spectral density of the temperature field is derived and used to calculate the structure function of the index of refraction N. The result is compared with available data, yielding a reasonably good fit. The variance of N is also in accord with the data. The model is also applied to propagation effects. The phase structure function, covariance of the log amplitude, and variance of the log intensity are calculated. The calculated phase structure function is in excellent agreement with available data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keshner, M. S.; Arya, R.
2004-10-01
Hewlett Packard has created a design for a "Solar City" factory that will process 30 million square meters of glass panels per year and produce 2.1-3.6 GW of solar panels per year, 100x the volume of a typical thin-film solar panel manufacturer in 2004. We have shown that, with a reasonable selection of materials and conservative assumptions, this "Solar City" can produce solar panels and hit the price target of $1.00 per peak watt (6.5x-8.5x lower than prices in 2004) as the total price for a complete and installed rooftop (or ground-mounted) solar energy system. This breakthrough in the price of solar energy comes without the need for any significant new invention. It comes entirely from the manufacturing scale of a large plant and the cost savings inherent in operating at such a large manufacturing scale. We expect that further optimizations of these simple designs will lead to further improvements in cost. The manufacturing process and cost depend on the choice of the active layer that converts sunlight into electricity. The efficiency by which sunlight is converted into electricity can range from 7% to 15%. This parameter has a large effect on the overall price per watt. There are other impacts as well, and we have attempted to capture them without creating undue distractions. Our primary purpose is to demonstrate the impact of large-scale manufacturing, which is largely independent of the choice of active layer. It is not our purpose to compare the pros and cons of various types of active layers. Significant improvements in cost per watt can also come from scientific advances in active layers that lead to higher efficiency. But, again, our focus is on manufacturing gains and not on potential advances in the basic technology.
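A back-of-envelope check of the quoted throughput: at the standard 1 kW/m² peak-sun irradiance (an assumption of this sketch), 30 million m² of panels per year at 7%-12% module efficiency reproduces the stated 2.1-3.6 GW/year range:

```python
# Back-of-envelope check of the "Solar City" figures quoted in the abstract.
area_m2_per_year = 30e6          # glass panel throughput (m^2/yr)
insolation_w_per_m2 = 1000.0     # assumed standard peak-sun irradiance

for eff in (0.07, 0.12):         # 7%-12% module efficiency (abstract quotes 7%-15%)
    gw_per_year = area_m2_per_year * insolation_w_per_m2 * eff / 1e9
    print(f"{eff:.0%} efficient modules -> {gw_per_year:.1f} GW/year")
```

7% efficiency gives 2.1 GW/year and 12% gives 3.6 GW/year, matching the quoted range; the upper quoted efficiency (15%) would exceed it.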
NASA Technical Reports Server (NTRS)
Gross, S. H.
1981-01-01
The ASTP Doppler data were recalibrated, analyzed, related to geophysical phenomena, and found to be consistent. Spectra were computed for data intervals covering each hemisphere. As many as 14 such intervals were analyzed. Wave structure is seen in much of the data. The spectra for all those intervals are very similar in a number of respects. They all decrease with frequency, or with decreasing wavelength. Power law fits are reasonable and spectral indices are found to range from about -2.0 to about -3.5. Both large scale (thousands of kilometers) and medium scale (hundreds of kilometers) waves are evident. These spectra are very similar to spectra of in situ measurements of neutrals and ionization measured by Atmosphere Explorer C.
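A spectral index like the -2.0 to -3.5 values quoted above is typically estimated as the least-squares slope of log power versus log frequency. A minimal sketch on synthetic data (not the ASTP measurements):

```python
import math
import random

def fit_spectral_index(freqs, power):
    """Least-squares slope of log(power) vs log(freq), i.e. the power-law index."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic spectrum with a known index of -2.5 plus mild log-normal scatter.
random.seed(1)
freqs = [10 ** (k / 10) for k in range(1, 31)]
power = [f ** -2.5 * math.exp(random.gauss(0.0, 0.1)) for f in freqs]
index = fit_spectral_index(freqs, power)   # recovers roughly -2.5
```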
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
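A minimal sketch of the layer-by-layer, fill-from-both-sides idea can be written as a greedy placer; this is a simplified illustration under assumed rules (largest-area-first, place at the less-filled end), not the published bidirectional heuristic with its iterative local search:

```python
def bidirectional_layer_pack(rects, bin_w, bin_h):
    """Greedy sketch of bidirectional layer packing.

    Each horizontal layer is filled with rectangles from its left and right
    ends until no remaining piece fits, then a new layer starts on top.
    rects: list of (w, h); returns list of (x, y, w, h) placements.
    """
    remaining = sorted(rects, key=lambda r: r[0] * r[1], reverse=True)
    placements, y = [], 0
    while remaining and y < bin_h:
        layer_h, left, right = 0, 0, bin_w
        progress = True
        while progress:
            progress = False
            for i, (w, h) in enumerate(remaining):
                if w <= right - left and y + h <= bin_h:
                    if left <= bin_w - right:          # left end is less filled
                        placements.append((left, y, w, h))
                        left += w
                    else:                               # right end is less filled
                        placements.append((right - w, y, w, h))
                        right -= w
                    layer_h = max(layer_h, h)
                    del remaining[i]
                    progress = True
                    break
        if layer_h == 0:      # nothing fits in a fresh layer: stop
            break
        y += layer_h
    return placements

# Example: pack five rectangles into a 10 x 10 bin.
p = bidirectional_layer_pack([(4, 3), (4, 3), (3, 2), (2, 2), (5, 4)], 10, 10)
```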
A Marine Origin for the Meridiani Planum Landing Site?
NASA Technical Reports Server (NTRS)
Parker, T. J.; Haldemann, A. F.
2005-01-01
The Opportunity instruments have provided compelling evidence that the sulfate-rich chemical and siliciclastic sediments at the Meridiani Planum landing site were deposited in shallow water. The local paleo-environment is most often characterized as a broad, shallow sea or large playa, with surface conditions cycling between wet and dry episodes, interbedding evaporites with eolian fine sediments [e.g., 1,2]. This particular working hypothesis is reasonable, considering the area characterized by the rover's mobility. An alternative, marine origin is considered here: a working hypothesis that we feel provides a better fit to the local-scale results identified by Opportunity and the regional-scale characteristics of Meridiani Planum provided by data from orbiting spacecraft, when considered together.
An evaluation of proposed acoustic treatments for the NASA LaRC 4 x 7 meter wind tunnel
NASA Technical Reports Server (NTRS)
Abrahamson, A. L.
1985-01-01
The NASA LaRC 4 x 7 Meter Wind Tunnel is an existing facility specially designed for powered low-speed (V/STOL) testing of large-scale fixed-wing and rotorcraft models. The enhancement of the facility for scale-model acoustic testing is examined. The results are critically reviewed and comparisons are drawn with a similar wind tunnel (the DNW facility in the Netherlands). Discrepancies observed in the comparison stimulated a theoretical investigation, using the ADAM acoustic finite element system, of the ways in which noise propagating around the tunnel circuit radiates into the open test section. The reasons for these discrepancies are clarified, which assists in the selection of acoustic treatment options for the facility.
Traveling Weather Disturbances in Mars Southern Extratropics: Sway of the Great Impact Basins
NASA Technical Reports Server (NTRS)
Hollingsworth, Jeffery L.
2016-01-01
As on Earth, between late autumn and early spring on Mars middle and high latitudes within its atmosphere support strong mean thermal contrasts between the equator and poles (i.e. "baroclinicity"). Data collected during the Viking era and observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports vigorous, large-scale eastward traveling weather systems (i.e. transient synoptic-period waves). Within a rapidly rotating, differentially heated, shallow atmosphere such as on Earth and Mars, such large-scale, extratropical weather disturbances are critical components of the global circulation. These wave-like disturbances act as agents in the transport of heat and momentum, and moreover generalized tracer quantities (e.g., atmospheric dust, water vapor and water-ice clouds) between low and high latitudes of the planet. The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a high-resolution Mars global climate model (Mars GCM). This global circulation model imposes interactively lifted (and radiatively active) dust based on a threshold value of the instantaneous surface stress. Compared to observations, the model exhibits a reasonable "dust cycle" (i.e. globally averaged, a more dusty atmosphere during southern spring and summer occurs). In contrast to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense synoptically. Influences of the zonally asymmetric (i.e. east-west varying) topography on southern large-scale weather disturbances are examined. 
Simulations that adapt Mars' full topography compared to simulations that utilize synthetic topographies emulating essential large-scale features of the southern middle latitudes indicate that Mars' transient barotropic/baroclinic eddies are significantly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). In addition, the occurrence of a southern storm zone in late winter and early spring is keyed particularly to the western hemisphere via orographic influences arising from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate fundamental differences amongst such simulations and these are described.
Traveling Weather Disturbances in Mars' Southern Extratropics: Sway of the Great Impact Basins
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.
2016-04-01
As on Earth, between late autumn and early spring on Mars middle and high latitudes within its atmosphere support strong mean thermal contrasts between the equator and poles (i.e., "baroclinicity"). Data collected during the Viking era and observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports vigorous, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Within a rapidly rotating, differentially heated, shallow atmosphere such as on Earth and Mars, such large-scale, extratropical weather disturbances are critical components of the global circulation. These wave-like disturbances act as agents in the transport of heat and momentum, and moreover generalized tracer quantities (e.g., atmospheric dust, water vapor and water-ice clouds) between low and high latitudes of the planet. The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a high-resolution Mars global climate model (Mars GCM). This global circulation model imposes interactively lifted (and radiatively active) dust based on a threshold value of the instantaneous surface stress. Compared to observations, the model exhibits a reasonable "dust cycle" (i.e., globally averaged, a more dusty atmosphere during southern spring and summer occurs). In contrast to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense synoptically. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather disturbances are examined. 
Simulations that adapt Mars' full topography compared to simulations that utilize synthetic topographies emulating essential large-scale features of the southern middle latitudes indicate that Mars' transient barotropic/baroclinic eddies are significantly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). In addition, the occurrence of a southern storm zone in late winter and early spring is keyed particularly to the western hemisphere via orographic influences arising from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate fundamental differences amongst such simulations and these are described.
NASA Astrophysics Data System (ADS)
Newman, Joan T.
Any change, particularly on a large scale like a sequence change in a district with 75,000 students, is difficult. However, with the advent of the new TAKS science test and the new requirements for high school graduation in the state of Texas, educators and students alike are engaged in innovative educational approaches to meet these requirements. This study examined a different, non-traditional science sequence to explore relationships among secondary core-science course sequencing, student science-reasoning performance, and classroom pedagogy. The methodology adopted in the study led to a deeper understanding of the successes and challenges faced by teachers in teaching conceptual physics and chemistry to 8th and 9th grade students. The qualitative analysis suggested a difference in pedagogy employed by middle and high school science teachers and a need for secondary science teachers to enhance their content knowledge and pedagogical skills, as well as change their underlying attitudes and beliefs about the abilities of students. The study examined scores of 495 randomly chosen students following three different matriculation patterns within one large independent school district. The study indicated that students who follow a sequence with 9th grade IPC generally increase their science-reasoning skills as demonstrated on the 10th grade TAKS science test when these scores are compared with those of students who do not have 9th grade IPC in the science sequence.
NASA Astrophysics Data System (ADS)
Chen, Yu; Wang, Qihua; Wang, Tingmei
2015-10-01
The core-shell structured mesoporous silica nanomaterials (MSNs) are experiencing rapid development in many applications such as heterogeneous catalysis, bio-imaging and drug delivery wherein a large pore volume is desirable. We develop a one-pot method for large-scale synthesis of brain-like mesoporous silica nanocomposites based on the reasonable change of the intrinsic nature of the -Si-O-Si- framework of silica nanoparticles together with a selective etching strategy. The as-synthesized products show good monodispersion and a large pore volume of 1.0 cm3 g-1. The novelty of this approach lies in the use of an inorganic-organic hybrid layer to assist the creation of large-pore morphology on the outermost shell thereby promoting efficient mass transfer or storage. Importantly, the method is reliable and grams of products can be easily prepared. The morphology on the outermost silica shell can be controlled by simply adjusting the VTES-to-TEOS molar ratio (VTES: triethoxyvinylsilane, TEOS: tetraethyl orthosilicate) as well as the etching time. The as-synthesized products exhibit fluorescence performance by incorporating rhodamine B isothiocyanate (RITC) covalently into the inner silica walls, which provide potential application in bioimaging. We also demonstrate the applications of as-synthesized large-pore structured nanocomposites in drug delivery systems and stimuli-responsive nanoreactors for heterogeneous catalysis. Electronic supplementary information (ESI) available: The average particle size distribution of LPASN-1, LPASN-2 and LPASN-3; the wide-angle XRD pattern of LPASN-2/LPASN-3/LPASN-4; the catalytic properties of LPASN-PNIPAM at different temperatures (15 °C and 33 °C). See DOI: 10.1039/c5nr04123f
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case Rλ = 36.6. To complete the calculation in a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space differencing is used.
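The storage-based trick can be illustrated with a standard third-order Adams-Bashforth scheme, which likewise gains accuracy by reusing stored derivative evaluations from earlier steps rather than doing extra work per step. This is an illustrative analogue, not the authors' exact velocity-and-derivative storage scheme:

```python
def ab3(f, y0, t0, dt, nsteps):
    """Third-order Adams-Bashforth integration of dy/dt = f(t, y).

    Only one new evaluation of f per step: the last three evaluations are
    stored and combined, so the cost per step is close to a first-order
    scheme while the global error is O(dt^3).
    """
    t, y = t0, y0
    fs = [f(t, y)]
    # Bootstrap the first two steps with second-order Heun's method.
    for _ in range(2):
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y = y + dt * (k1 + k2) / 2.0
        t += dt
        fs.append(f(t, y))
    # Main AB3 loop: y_{n+1} = y_n + dt/12 * (23 f_n - 16 f_{n-1} + 5 f_{n-2}).
    for _ in range(nsteps - 2):
        y = y + dt * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3]) / 12.0
        t += dt
        fs = [fs[-2], fs[-1], f(t, y)]
    return y

# Usage: integrate dy/dt = -y from y(0) = 1 over [0, 1]; result is close to exp(-1).
y1 = ab3(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
```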
Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective
NASA Astrophysics Data System (ADS)
Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik
2010-02-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program’s Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.
NASA Technical Reports Server (NTRS)
Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.
2010-01-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA) where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales despite CESM being a global general circulation model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
Forest-fire model as a supercritical dynamic model in financial systems
NASA Astrophysics Data System (ADS)
Lee, Deokjae; Kim, Jae-Young; Lee, Jeho; Kahng, B.
2015-02-01
Recently large-scale cascading failures in complex systems have garnered substantial attention. Such extreme events have been treated as an integral part of self-organized criticality (SOC). Recent empirical work has suggested that some extreme events systematically deviate from the SOC paradigm, requiring a different theoretical framework. We shed additional theoretical light on this possibility by studying financial crisis. We build our model of financial crisis on the well-known forest fire model in scale-free networks. Our analysis shows a nontrivial scaling feature indicating supercritical behavior, which is independent of system size. Extreme events in the supercritical state result from bursting of a fat bubble, seeds of which are sown by a protracted period of a benign financial environment with few shocks. Our findings suggest that policymakers can control the magnitude of financial meltdowns by keeping the economy operating within reasonable duration of a benign environment.
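A toy version of a fire cascade on a scale-free network can be put together as follows; this is a minimal sketch under assumed rules (Barabási-Albert attachment, single ignition, deterministic spread through occupied neighbors), not the paper's calibrated financial model:

```python
import random

def ba_graph(n, m, rng):
    """Barabasi-Albert preferential attachment; returns adjacency sets."""
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # each new node links to m existing nodes
    repeated = []              # node ids repeated once per unit of degree
    for v in range(m, n):
        for u in targets:
            adj[v].add(u)
            adj[u].add(v)
            repeated.extend([u, v])
        # sample m distinct targets with probability proportional to degree
        picked = set()
        while len(picked) < m:
            picked.add(rng.choice(repeated))
        targets = list(picked)
    return adj

def fire_cascade(adj, occupied_p, rng):
    """Occupy each node ('tree') with prob occupied_p, ignite one occupied
    node, and burn through occupied neighbors; returns burned-cluster size."""
    occupied = {v for v in adj if rng.random() < occupied_p}
    if not occupied:
        return 0
    start = rng.choice(sorted(occupied))
    burned, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u in occupied and u not in burned:
                burned.add(u)
                frontier.append(u)
    return len(burned)

# In a densely occupied scale-free network, most cascades engulf the giant
# component: a caricature of a system-wide meltdown after a benign period.
rng = random.Random(42)
g = ba_graph(500, 2, rng)
sizes = [fire_cascade(g, 0.9, rng) for _ in range(50)]
```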
A gyrofluid description of Alfvenic turbulence and its parallel electric field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bian, N. H.; Kontar, E. P.
2010-06-15
Anisotropic Alfvenic fluctuations with k∥/k⊥ ≪ 1 remain at frequencies much smaller than the ion cyclotron frequency in the presence of a strong background magnetic field. Based on the simplest truncation of the electromagnetic gyrofluid equations in a homogeneous plasma, a model for the energy cascade produced by Alfvenic turbulence is constructed, which smoothly connects the large magnetohydrodynamic scales and the small 'kinetic' scales. Scaling relations are obtained for the electromagnetic fluctuations, as a function of k⊥ and k∥. Moreover, particular attention is paid to the spectral structure of the parallel electric field which is produced by Alfvenic turbulence. The reason is the potential implication of this parallel electric field in turbulent acceleration and transport of particles. For electromagnetic turbulence, this issue was raised some time ago in Hasegawa and Mima [J. Geophys. Res. 83, 1117 (1978)].
Properties of galaxies reproduced by a hydrodynamic simulation.
Vogelsberger, M; Genel, S; Springel, V; Torrey, P; Sijacki, D; Xu, D; Snyder, G; Bird, S; Nelson, D; Hernquist, L
2014-05-08
Previous simulations of the growth of cosmic structures have broadly reproduced the 'cosmic web' of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube 106.5 megaparsecs on a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and characteristics of hydrogen on large scales, and at the same time matches the 'metal' and hydrogen content of galaxies on small scales.
Magnetic Reconnection and Particle Acceleration in the Solar Corona
NASA Astrophysics Data System (ADS)
Neukirch, Thomas
Reconnection plays a major role in the magnetic activity of the solar atmosphere, for example in solar flares. An interesting open problem is how magnetic reconnection acts to redistribute the stored magnetic energy released during an eruption into other energy forms, e.g. generating bulk flows, plasma heating and non-thermal energetic particles. In particular, finding a theoretical explanation for the observed acceleration of a large number of charged particles to high energies during solar flares is presently one of the most challenging problems in solar physics. One difficulty is the vast difference between the microscopic (kinetic) and the macroscopic (MHD) scales involved. Whereas the phenomena observed to occur on large scales are reasonably well explained by the so-called standard model, this does not seem to be the case for the small-scale (kinetic) aspects of flares. Over the past years, observations, in particular by RHESSI, have provided evidence that a naive interpretation of the data in terms of the standard solar flare/thick-target model is problematic. As a consequence, the role played by magnetic reconnection in the particle acceleration process during solar flares may have to be reconsidered.
Some aspects of large-scale travelling ionospheric disturbances
NASA Astrophysics Data System (ADS)
Bowman, G. G.
1992-06-01
On two occasions the speeds and directions of travel of large-scale travelling ionospheric disturbances (LS-TIDs) following geomagnetic substorm onsets have been calculated for the propagation of these disturbances in both hemispheres of the Earth. N(h) analyses have been used to produce height-change profiles at a fixed frequency, from which time shifts between stations (used for the speed and direction-of-travel values) have been calculated. Fixed-frequency phase-path measurements at Bribie Island for two events reveal wavetrains with periodicities around 17 min associated with these disturbances. Another event recorded a periodicity of 19 min. Also, for two of the events, additional periodicities around 30 min were found. These wavetrains, along with the macroscale height changes and electron density depletions associated with these LS-TIDs, are essentially the same as the ionospheric structure changes observed during the passage of night-time medium-scale travelling ionospheric disturbances (MS-TIDs). However, unlike these MS-TIDs, the LS-TIDs are generally not associated with the recording of spread-F on ionograms. Possible reasons for this difference are discussed, as well as the special conditions which probably prevail on the few occasions when spread-F is associated with LS-TIDs.
Simulating Biomass Fast Pyrolysis at the Single Particle Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciesielski, Peter; Wiggins, Gavin; Daw, C Stuart
2017-07-01
Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason, particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches, depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole biomass with increased chemical speciation, and are still tractable with present-day computational resources.
NASA Technical Reports Server (NTRS)
Lin, P.; Pratt, D. T.
1987-01-01
A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
Environmental impacts of large-scale CSP plants in northwestern China.
Wu, Zhiyong; Hou, Anping; Chang, Chun; Huang, Xiang; Shi, Duoqi; Wang, Zhifeng
2014-01-01
Several concentrated solar power demonstration plants are being constructed, and a few commercial plants have been announced in northwestern China. However, the mutual impacts between the concentrated solar power plants and their surrounding environments have not yet been addressed comprehensively in the literature by the parties involved in these projects. In China, these projects are especially important as an increasing amount of low-carbon electricity needs to be generated in order to maintain the current economic growth while simultaneously lessening pollution. In this study, the authors assess the potential environmental impacts of large-scale concentrated solar power plants. Specifically, the water use intensity, soil erosion and soil temperature are quantitatively examined. It was found that some of the impacts are favorable, some are negative in relation to traditional power generation techniques, and some need further research before they can be reasonably appraised. In quantitative terms, concentrated solar power plants consume about 4000 L MW⁻¹ h⁻¹ of water if wet cooling technology is used, and the collectors lead to soil temperature changes of between 0.5 and 4 °C; however, it was found that soil erosion is dramatically alleviated. The results of this study are helpful to decision-makers in concentrated solar power site selection and regional planning. Some conclusions of this study are also valid for large-scale photovoltaic plants.
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1993-01-01
The period from 18 UTC 26 Nov. 1991 to roughly 23 UTC 26 Nov. 1991 is one of the study periods of the FIRE (First International Satellite Cloud Climatology Regional Experiment) 2 field campaign. The middle and upper tropospheric cloud data that was collected during this time allowed FIRE scientists to learn a great deal about the detailed structure, microphysics, and radiative characteristics of the mid latitude cirrus that occurred during that time. Modeling studies that range from the microphysical to the mesoscale are now underway attempting to piece the detailed knowledge of this cloud system into a coherent picture of the atmospheric processes important to cirrus cloud development and maintenance. An important component of the modeling work, either as an input parameter in the case of cloud-scale models, or as output in the case of meso and larger scale models, is the large scale forcing of the cloud system. By forcing we mean the synoptic scale vertical motions and moisture budget that initially send air parcels ascending and supply the water vapor to allow condensation during ascent. Defining this forcing from the synoptic scale to the cloud scale is one of the stated scientific objectives of the FIRE program. From the standpoint of model validation, it is also necessary that the vertical motions and large scale moisture budget of the case studies be derived from observations. It is considered important that the models used to simulate the observed cloud fields begin with the correct dynamics and that the dynamics be in the right place for the right reasons.
A practical large scale/high speed data distribution system using 8 mm libraries
NASA Technical Reports Server (NTRS)
Howard, Kevin
1993-01-01
Eight mm tape libraries are known primarily for their small size, large storage capacity, and low cost. However, many applications require an additional attribute which, heretofore, has been lacking -- high transfer rate. Transfer rate is particularly important in a large scale data distribution environment -- an environment in which 8 mm tape should play a very important role. Data distribution is a natural application for 8 mm for several reasons: most large laboratories have access to 8 mm tape drives, 8 mm tapes are upwardly compatible, 8 mm media are very inexpensive, 8 mm media are light weight (important for shipping purposes), and 8 mm media densely pack data (5 gigabytes now and 15 gigabytes on the horizon). If the transfer rate issue were resolved, 8 mm could offer a good solution to the data distribution problem. To that end Exabyte has analyzed four ways to increase its transfer rate: native drive transfer rate increases, data compression at the drive level, tape striping, and homogeneous drive utilization. Exabyte is actively pursuing native drive transfer rate increases and drive level data compression. However, for non-transmitted bulk data applications (which include data distribution) the other two methods (tape striping and homogeneous drive utilization) hold promise.
An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.
2012-09-01
Size and scale cognition is a critical ability associated with reasoning with concepts in different disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition, called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. Findings revealed participants' difficulties in discerning the different sizes of microscale and nanoscale objects, and a low level of sophistication in identifying scale worlds. Results also showed that the larger the difference between the sizes of two objects, the more difficult it was for participants to identify how many times bigger or smaller one object was than the other. Similarly, participants had difficulty estimating the approximate sizes of sub-macroscopic objects, as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale was also challenging for participants.
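The logarithmic-scale placement task probed by these assessments can be made concrete with a small sketch; the object sizes below are rounded, illustrative values, not items from the FS2C tasks.

```python
import math

# Approximate, illustrative object sizes in meters.
sizes = {
    "hydrogen atom": 1e-10,
    "virus": 1e-7,
    "red blood cell": 8e-6,
    "ant": 3e-3,
    "human": 1.7,
    "Earth (diameter)": 1.3e7,
}

def log_position(size_m, lo=1e-12, hi=1e8):
    """Map a size onto [0, 1] along a base-10 logarithmic axis spanning lo..hi."""
    return (math.log10(size_m) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))

def orders_apart(a, b):
    """How many powers of ten separate two sizes (the 'how many times bigger' task)."""
    return abs(math.log10(a) - math.log10(b))

for name, s in sizes.items():
    print(f"{name:>18}: axis position {log_position(s):.2f}")
print(f"virus vs. human: ~{orders_apart(sizes['virus'], sizes['human']):.1f} orders of magnitude")
```

On a logarithmic axis, equal spacing corresponds to equal size *ratios*, which is exactly the relational judgment the assessment tasks found difficult.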
WAIS-IV subtest covariance structure: conceptual and statistical considerations.
Ward, L Charles; Bergman, Maria A; Hebert, Katina R
2012-06-01
D. Wechsler (2008b) reported confirmatory factor analyses (CFAs) with standardization data (ages 16-69 years) for 10 core and 5 supplemental subtests from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Analyses of the 15 subtests supported 4 hypothesized oblique factors (Verbal Comprehension, Working Memory, Perceptual Reasoning, and Processing Speed) but also revealed unexplained covariance between Block Design and Visual Puzzles (Perceptual Reasoning subtests). That covariance was not included in the final models. Instead, a path was added from Working Memory to Figure Weights (Perceptual Reasoning subtest) to improve fit and achieve a desired factor pattern. The present research with the same data (N = 1,800) showed that the path from Working Memory to Figure Weights increases the association between Working Memory and Matrix Reasoning. Specifying both paths improves model fit and largely eliminates unexplained covariance between Block Design and Visual Puzzles but with the undesirable consequence that Figure Weights and Matrix Reasoning are equally determined by Perceptual Reasoning and Working Memory. An alternative 4-factor model was proposed that explained theory-implied covariance between Block Design and Visual Puzzles and between Arithmetic and Figure Weights while maintaining compatibility with WAIS-IV Index structure. The proposed model compared favorably with a 5-factor model based on Cattell-Horn-Carroll theory. The present findings emphasize that covariance model comparisons should involve considerations of conceptual coherence and theoretical adherence in addition to statistical fit. (c) 2012 APA, all rights reserved
Return to normality after a radiological emergency.
Lochard, J; Prêtre, S
1995-01-01
Some preliminary considerations are presented from the management of post-accident situations involving large-scale and high-level land contamination. The return to normal, or at least acceptable, living conditions as soon as reasonably achievable, and the prevention of the possible emergence of a post-accident crisis, are of key importance. A scheme is proposed for understanding the dynamics of the various phases after an accident. An attempt is made to characterize some of the parameters driving the acceptability of post-accident situations. Strategies to return to normal living conditions in contaminated areas are considered.
NASA Technical Reports Server (NTRS)
1975-01-01
A separation method to provide reasonable yields of high-specificity isoenzymes for the purpose of large-scale, early clinical diagnosis of diseases and organic damage, such as myocardial infarction, hepatoma, muscular dystrophy, and infectious disorders, is presented. Preliminary development plans are summarized. An analysis of required research, development, and production resources is included. The costs of such resources and the potential profitability of a commercial space-processing opportunity for electrophoretic separation of high-specificity isoenzymes are reviewed.
Interaction function of oscillating coupled neurons
Dodla, Ramana; Wilson, Charles J.
2013-01-01
Large-scale simulations of electrically coupled neuronal oscillators often employ the phase-coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes comprising both sine and cosine terms. PMID:24229210
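The construction this abstract describes, computing an interaction function from a phase response curve and a coupling input and then truncating it to a few Fourier modes, can be sketched as follows. The piecewise-linear shapes below are illustrative placeholders, not the paper's parameterization.

```python
import numpy as np

# Sketch of the weak-coupling interaction function H(phi) = (1/T) * integral of
# Z(t) * p(t + phi) dt, where Z is the phase response curve and p the coupling
# input over one period T. Shapes are illustrative, not the paper's.
T, n = 1.0, 512
t = np.linspace(0.0, T, n, endpoint=False)
Z = np.where(t < 0.5, 2.0 * t, 2.0 * (1.0 - t))   # triangular phase response curve
p = np.where(t < 0.3, 1.0, -0.3)                  # piecewise-constant coupling input

# Circular cross-correlation via the FFT gives H at every phase lag at once.
H = np.real(np.fft.ifft(np.conj(np.fft.fft(Z)) * np.fft.fft(p))) / n

# Truncate to the first four Fourier modes (sine and cosine terms) plus the mean.
Hf = np.fft.fft(H)
keep = np.zeros(n, dtype=bool)
keep[[0, 1, 2, 3, 4, n - 1, n - 2, n - 3, n - 4]] = True
H4 = np.real(np.fft.ifft(np.where(keep, Hf, 0.0)))

err = np.linalg.norm(H - H4) / np.linalg.norm(H)
print(f"relative error of the 4-mode approximation: {err:.4f}")
```

Because the Fourier coefficients of piecewise-linear and piecewise-constant shapes decay quickly, a handful of modes already reproduces H closely, consistent with the paper's finding.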
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential. We show that this code, in its current implementation, produces results that are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.
NASA Technical Reports Server (NTRS)
Herbst, E.; Leung, C. M.
1986-01-01
In order to incorporate large ion-polar neutral rate coefficients into existing gas phase reaction networks, it is necessary to utilize simplified theoretical treatments because of the significant number of rate coefficients needed. The authors have used two simple theoretical treatments: the locked dipole approach of Moran and Hamill for linear polar neutrals and the trajectory scaling approach of Su and Chesnavich for nonlinear polar neutrals. The former approach is suitable for linear species because in the interstellar medium these are rotationally relaxed to a large extent and the incoming charged reactants can lock their dipoles into the lowest energy configuration. The latter approach is a better approximation for nonlinear neutral species, in which rotational relaxation is normally less severe and the incoming charged reactants are not as effective at locking the dipoles. The treatments are in reasonable agreement with more detailed long range theories and predict an inverse square root dependence on kinetic temperature for the rate coefficient. Compared with the locked dipole method, the trajectory scaling approach results in rate coefficients smaller by a factor of approximately 2.5.
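The inverse-square-root temperature dependence that both treatments predict can be illustrated with a minimal sketch: the rate is modeled as a temperature-independent Langevin capture term plus a dipole term scaling as T^(-1/2). The constants are placeholders, not values from the paper.

```python
import math

def rate_coefficient(T, k_langevin=1e-9, c_dipole=1e-8):
    """Illustrative ion-polar-neutral rate coefficient (cm^3 s^-1):
    a temperature-independent Langevin capture term plus a dipole
    term scaling as T^(-1/2), the dependence both treatments predict.
    The constants are placeholders, not fitted values."""
    return k_langevin + c_dipole / math.sqrt(T)

# Cooling from 40 K to 10 K doubles the dipole-driven part of the rate,
# which is why these reactions speed up at low interstellar temperatures.
for T in (10.0, 40.0, 100.0):
    print(f"T = {T:5.1f} K  ->  k = {rate_coefficient(T):.3e} cm^3 s^-1")
```

The locked-dipole and trajectory-scaling treatments share this T^(-1/2) form; per the abstract, they differ mainly by an overall factor of roughly 2.5 in the dipole term.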
A new fast scanning system for the measurement of large angle tracks in nuclear emulsions
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Di Marco, N.; Galati, G.; Lauria, A.; Montesi, M. C.; Pupilli, F.; Shchedrina, T.; Tioukov, V.; Vladymyrov, M.
2015-11-01
Nuclear emulsions have been widely used in particle physics to identify new particles through the observation of their decays, thanks to their unique spatial resolution. Nevertheless, before the advent of automatic scanning systems, emulsion analysis was very demanding in terms of well-trained manpower. For this reason, they were gradually replaced by electronic detectors until the '90s, when automatic microscopes started to be developed in Japan and in Europe. Automatic scanning was essential to conceive large-scale emulsion-based neutrino experiments like CHORUS, DONUT and OPERA. Standard scanning systems were initially designed to recognize tracks within a limited angular acceptance (θ ≲ 30°), where θ is the track angle with respect to a line perpendicular to the emulsion plane. In this paper we describe the implementation of a novel fast automatic scanning system aimed at extending track recognition to the full angular range and improving the present scanning speed. Indeed, nuclear emulsions have no intrinsic limit on detecting particle direction. Such an improvement opens new perspectives for using nuclear emulsions in several fields in addition to large-scale neutrino experiments, such as muon radiography, medical applications and directional dark matter detection.
Parallel methodology to capture cyclic variability in motored engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei
2016-07-28
Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales, so the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long-timescale problem into several shorter-timescale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters, such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. This new approach is shown to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
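The idea of trading one long serial run for many perturbed single-cycle runs can be sketched with a toy model. The "cycle" below is a placeholder stochastic process, not an engine simulation, and a production LES would distribute the cycles across MPI ranks rather than threads.

```python
import random
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_single_cycle(seed, base_velocity=10.0, perturbation=0.5):
    """Toy stand-in for one engine-cycle LES: perturb the initial velocity
    field and return a surrogate 'peak in-cylinder velocity'. The dynamics
    are illustrative only, not an engine model."""
    rng = random.Random(seed)
    v = base_velocity + rng.gauss(0.0, perturbation)   # perturbed initial condition
    for _ in range(100):                               # evolve a toy flow model
        v += rng.gauss(0.0, 0.1) - 0.01 * (v - base_velocity)
    return v

# Independent, perturbed single-cycle runs dispatched in parallel instead of
# simulating hundreds of consecutive cycles serially.
with ThreadPoolExecutor() as pool:
    peaks = list(pool.map(run_single_cycle, range(32)))

print(f"mean = {statistics.mean(peaks):.2f}, CCV stdev = {statistics.stdev(peaks):.2f}")
```

Each run is independent, so the wall-clock time is set by a single cycle rather than by the number of cycles, which is the source of the order-of-magnitude speedup the abstract reports.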
INDUSTRIAL RESEARCH AT THE EASTERN TELEGRAPH COMPANY, 1872-1929
2015-01-01
By the late nineteenth century the submarine telegraph cable industry, which had blossomed in the 1850s, had reached what historians regard as technological maturity. For a host of commercial, cultural and technical reasons, the industry seems to have become conservative in its attitude towards technological development, which is reflected in the small scale of its staff and facilities for research and development. This paper argues that the attitude of the cable industry towards research and development was less conservative and altogether more complex than historians have suggested. Focusing on the crucial case of the Eastern Telegraph Company, the largest single operator of submarine cables, it shows how the company encouraged inventive activity among outside and in-house electricians and, in 1903, established a small research laboratory where staff and outside scientific advisors pursued new methods of cable signalling and cable designs. The scale of research and development at the Eastern Telegraph Company, however, was small by comparison to that of its nearest competitor, Western Union, and dwarfed by that of large electrical manufacturers. This paper explores the reasons for this comparatively weak provision but also suggests that this was not inappropriate for a service-sector firm. PMID:25977587
Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD
NASA Technical Reports Server (NTRS)
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.
1998-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
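Pseudo-transient continuation, the globalization device at the heart of Psi-NKS, can be sketched on a toy nonlinear system. A direct linear solve stands in for the matrix-free Krylov iteration, and the system itself is a placeholder, not the Euler equations.

```python
import numpy as np

def residual(u):
    # Toy nonlinear system F(u) = 0 with a root at u = (1, 2).
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

def jacobian(u):
    return np.array([[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]])

def psi_tc_newton(u, dt=0.1, tol=1e-10, max_iter=200):
    """Pseudo-transient continuation sketch: solve (I/dt + J) du = -F,
    growing dt as the residual shrinks so the iteration transitions from
    a robust pseudo-time march to fast Newton convergence near the root."""
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        A = np.eye(len(u)) / dt + jacobian(u)
        u = u + np.linalg.solve(A, -F)
        dt = min(dt * 2.0, 1e8)   # simple ramping of the pseudo-timestep
    return u

u = psi_tc_newton(np.array([1.0, 1.0]))
print("solution:", u, "residual:", residual(u))
```

Small dt keeps early steps short and robust far from the solution; as dt grows, the update tends to a pure Newton step, recovering the asymptotically rapid convergence the abstract describes.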
Similarity Rules for Scaling Solar Sail Systems
NASA Technical Reports Server (NTRS)
Canfield, Stephen L.; Beard, James W., III; Peddieson, John; Ewing, Anthony; Garbe, Greg
2004-01-01
Future science missions will require solar sails on the order of 10,000 sq m (or larger). However, ground and flight demonstrations must be conducted at significantly smaller sizes (400 sq m for the ground demo) due to limitations of ground-based facilities and the cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This report will address issues of scaling in solar sail systems, focusing on structural characteristics, by developing a set of similarity or similitude functions that will guide the scaling process. The primary goal of these similarity functions (process invariants), which collectively form a set of scaling rules or guidelines, is to establish valid relationships between models and experiments that are performed at different orders of scale. In the near term, such an effort will help guide the size and properties of a flight validation sail that will need to be flown to accurately represent a large, mission-level sail.
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
Morphological changes in polycrystalline Fe after compression and release
NASA Astrophysics Data System (ADS)
Gunkelmann, Nina; Tramontina, Diego R.; Bringa, Eduardo M.; Urbassek, Herbert M.
2015-02-01
Despite a number of large-scale molecular dynamics simulations of shock compressed iron, the morphological properties of simulated recovered samples are still unexplored. Key questions remain open in this area, including the role of dislocation motion and deformation twinning in shear stress release. In this study, we present simulations of homogeneous uniaxial compression and recovery of large polycrystalline iron samples. Our results reveal significant recovery of the body-centered cubic grains with some deformation twinning driven by shear stress, in agreement with experimental results by Wang et al. [Sci. Rep. 3, 1086 (2013)]. The twin fraction agrees reasonably well with a semi-analytical model which assumes a critical shear stress for twinning. On reloading, twins disappear and the material reaches a very low strength value.
Extreme weather: Subtropical floods and tropical cyclones
NASA Astrophysics Data System (ADS)
Shaevitz, Daniel A.
Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan, and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan floods and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood.
Furthermore, a climate attribution analysis conducted for the Texas/Oklahoma flood found that anthropogenic climate change was responsible for only a small amount of the rainfall during the event, but that the intensity of such an event may be greatly increased if it occurs in a future climate. In the second part of this thesis, I examine the ability of high-resolution global atmospheric models to simulate TCs. Specifically, I present an intercomparison of several models' ability to simulate the global characteristics of TCs in the current climate. This is a necessary first step before using these models to project future changes in TCs. Overall, the models were able to reproduce the geographic distribution of TCs reasonably well, with some of the models performing remarkably well. The intensity of TCs varied widely between the models, with some of this difference being due to model resolution.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta = Omega^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models.
Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
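The Omega (or beta) estimate described above reduces to fitting the linear-theory proportionality between the velocity divergence and the galaxy density contrast. Below is a minimal sketch of that fit on synthetic fields; all values are illustrative, and the Hubble factor is absorbed into the units:

```python
import numpy as np

# Linear gravitational-instability relation (illustrative units):
#   div(v) = -beta * delta_g,  with  beta = Omega^0.6 / b
# Estimate beta as the least-squares slope of -div(v) against delta_g.

def estimate_beta(delta_g, div_v):
    """Least-squares beta estimate from gridded galaxy density
    contrast and velocity divergence (any array shapes)."""
    d = np.ravel(delta_g)
    dv = np.ravel(div_v)
    return -np.dot(d, dv) / np.dot(d, d)

# Synthetic check: build fields obeying the relation with beta = 0.5.
rng = np.random.default_rng(0)
delta = rng.normal(size=10_000)
true_beta = 0.5
div_v = -true_beta * delta + rng.normal(scale=0.05, size=delta.size)

print(round(estimate_beta(delta, div_v), 2))  # close to 0.5
```

In a nongravitational scenario, per the abstract, this slope need not equal the true cosmological beta even when the fit itself looks clean.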
Intensive reasoning training alters patterns of brain connectivity at rest
Mackey, Allyson P.; Miller Singley, Alison T.; Bunge, Silvia A.
2013-01-01
Patterns of correlated activity among brain regions reflect functionally relevant networks that are widely assumed to be stable over time. We hypothesized that if these correlations reflect the prior history of co-activation of brain regions, then a marked shift in cognition could alter the strength of coupling between these regions. We sought to test whether intensive reasoning training in humans would result in tighter coupling among regions in the lateral fronto-parietal network, as measured with resting-state fMRI (rs-fMRI). Rather than designing an artificial training program, we studied individuals who were preparing for a standardized test that places heavy demands on relational reasoning, the Law School Admissions Test (LSAT). LSAT questions require test-takers to group or sequence items according to a set of complex rules. We recruited young adults who were enrolled in an LSAT course that offers 70 hours of reasoning instruction (n=25), and age- and IQ-matched controls intending to take the LSAT in the future (n=24). Rs-fMRI data were collected for all subjects during two scanning sessions separated by 90 days. An analysis of pairwise correlations between brain regions implicated in reasoning showed that fronto-parietal connections were strengthened, along with parietal-striatal connections. These findings provide strong evidence for neural plasticity at the level of large-scale networks supporting high-level cognition. PMID:23486950
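The pairwise-correlation analysis described above amounts to computing Pearson correlations between ROI time series. A toy sketch with synthetic signals follows; the region names and data are purely illustrative, not the study's actual ROIs:

```python
import numpy as np

# Toy resting-state connectivity: pairwise Pearson correlations
# between region-of-interest (ROI) time series.

def connectivity_matrix(timeseries):
    """timeseries: (n_rois, n_timepoints) array ->
    (n_rois, n_rois) matrix of Pearson correlations."""
    return np.corrcoef(timeseries)

rng = np.random.default_rng(1)
shared = rng.normal(size=200)              # common fronto-parietal signal
frontal = shared + 0.5 * rng.normal(size=200)
parietal = shared + 0.5 * rng.normal(size=200)
occipital = rng.normal(size=200)           # independent control region

C = connectivity_matrix(np.vstack([frontal, parietal, occipital]))
print(C.shape)            # (3, 3)
print(C[0, 1] > C[0, 2])  # coupled regions correlate more strongly
```

A training-induced strengthening of coupling would appear as an increase in entries like C[0, 1] between scanning sessions.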
Making the Business Case for Regional and National Water Data Collection
NASA Astrophysics Data System (ADS)
Pinero, E.
2017-12-01
Water-related risks are a growing concern for organizations that either depend on water use or are responsible for water services provision. Yet this concern does not always translate into a business case to support large-scale water data collection. One reason is that water demand varies across sectors and physical settings. There is typically no single parameter or reason for which a given entity would be interested in national or even regional scale data. Even for public sector entities, water issues are local and their jurisdiction does not span regional scale coverage. Therefore, to make the case for adequate data collection, not only are technology and web platforms necessary, but one also needs a compelling business case. One way to make the case will involve raising awareness of the critical cross-cutting role of water, such that sectors see the need for water data to support the sustainability of other systems, such as energy, food, and resilience. Another factor will be understanding the full life-cycle role of water, especially in the supply chain, and that there are many variables that drive water demand. Such an understanding will make clearer the need for more regional scale understanding. This begins to address the apparent catch-22: data are needed to understand the scope of the challenge, but until the scope of the challenge is understood, there is no compelling business case to collect data. Examples, such as the Alliance for Water Stewardship standard and the CEO Water Mandate Water Action Hub, will be discussed to illustrate recent innovations in making a case for efficient collection of watershed-scale and regional data.
Coronal mass ejection and solar flare initiation processes without appreciable
NASA Astrophysics Data System (ADS)
Veselovsky, I.
TRACE and SOHO/EIT movies clearly show cases of coronal mass ejection and solar flare initiation without noticeable large-scale topology modifications in the observed features. Instead, new intermediate scales commonly appear in the erupting region structures while the overall configuration is preserved. Examples of this kind are presented and discussed in the light of existing magnetic field reconnection paradigms. It is demonstrated that spurious large-scale reconnections and detachments are often produced by projection effects in poorly resolved images of twisted loops and sheared arcades, especially when deformed parts of them are underexposed and not seen in the images for this reason alone. Other parts, which are normally exposed or overexposed, can create the illusion of "islands" or detached elements in these situations, though in reality they preserve the initial magnetic connectivity. Spurious "islands" of this kind could be wrongly interpreted as signatures of topological transitions in the large-scale magnetic fields in many instances described in the vast literature of the past, based mainly on fuzzy YOHKOH images, which resulted in the myth of universal solar flare models and the scenario of detached magnetic island formation with new null points in the large-scale magnetic field. Better visualization, with higher resolution and sensitivity limits, has allowed this confusion to be clarified and this unjustified interpretation to be avoided. It is concluded that topological changes obviously can happen in coronal magnetic fields, but these changes are not always necessary ingredients of coronal mass ejections and solar flares. The scenario of magnetic field opening is not universal for all ejections. 
Otherwise, expanding ejections with closed magnetic configurations can be produced by the fast E cross B drifts in strong inductive electric fields, which appear due to the emergence of the new magnetic flux. Corresponding theoretical models are presented and discussed.
NASA Astrophysics Data System (ADS)
Fahnestock, Eugene G.; Yu, Yang; Hamilton, Douglas P.; Schwartz, Stephen; Stickle, Angela; Miller, Paul L.; Cheng, Andy F.; Michel, Patrick; AIDA Impact Simulation Working Group
2016-10-01
The proposed Asteroid Impact Deflection and Assessment (AIDA) mission includes NASA's Double Asteroid Redirection Test (DART), whose impact with the secondary of near-Earth binary asteroid 65803 Didymos is expected to liberate large amounts of ejecta. We present efforts within the AIDA Impact Simulation Working Group to comprehensively simulate the behavior of this impact ejecta as it moves through and exits the system. Group members at JPL, OCA, and UMD have been working largely independently, developing their own strategies and methodologies. Ejecta initial conditions may be imported from output of hydrocode impact simulations or generated from crater scaling laws derived from point-source explosion models. We started with the latter approach, using reasonable assumptions for the secondary's density, porosity, surface cohesive strength, and vanishingly small net gravitational/rotational surface acceleration. We adopted DART's planned size, mass, closing velocity, and impact geometry for the cratering event. Using independent N-Body codes, we performed Monte Carlo integration of ejecta particles sampled over reasonable particle size ranges, and over launch locations within the crater footprint. In some cases we scaled the number of integrated particles in various size bins to the estimated number of particles consistent with a realistic size-frequency distribution. Dynamical models used for the particle integration varied, but all included full gravity potential of both primary and secondary, the solar tide, and solar radiation pressure (accounting for shadowing). We present results for the proportions of ejecta reaching ultimate fates of escape, return impact on the secondary, and transfer impact onto the primary. We also present the time history of reaching those outcomes, i.e., ejecta clearing timescales, and the size-frequency distribution of remaining ejecta at given post-impact durations. 
We find large numbers of particles remain in the system for several weeks after impact. Clearing timescales are nonlinearly dependent on particle size as expected, such that only the largest ejecta persist longest. We find results are strongly dependent on the local surface geometry at the modeled impact locations.
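As a much-simplified illustration of such an ejecta-fate Monte Carlo, the sketch below samples launch speeds from an assumed power law and classifies particles against the secondary's two-body escape speed. The real simulations include the full gravity of both bodies, the solar tide, and radiation pressure; every number here is an assumption for illustration, not a mission parameter:

```python
import numpy as np

# Toy ejecta-fate classifier: compare sampled launch speeds with the
# secondary's two-body escape speed. Mass, radius, and the speed
# distribution below are illustrative assumptions.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
mass = 5e11                   # assumed secondary mass, kg
radius = 80.0                 # assumed secondary radius, m
v_esc = np.sqrt(2 * G * mass / radius)   # escape speed, m/s

rng = np.random.default_rng(2)
# Power-law-like speed distribution: many slow particles, few fast ones.
u = 1.0 - rng.random(100_000)            # uniform on (0, 1]
speeds = 0.2 * u ** -0.5                 # m/s, illustrative scaling

escaped = float(np.mean(speeds > v_esc))
print(f"escape speed ~ {v_esc:.2f} m/s, fraction escaping ~ {escaped:.2f}")
```

In this toy picture most particles fall back, consistent with the abstract's finding that large numbers of (mostly slow, large) particles persist in the system for weeks.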
NASA Astrophysics Data System (ADS)
Weitnauer, Claudia; Beck, Christoph; Jacobeit, Jucundus
2015-04-01
It is common knowledge that local concentrations of PM10 (fine particles in the air with a diameter of less than 10 μm) vary with the seasons in Europe. These concentrations are influenced on the one hand by the amount of natural and anthropogenic emissions and on the other hand by large-scale and local meteorological conditions. In Bavaria (part of southern Germany), the target region of the present study, PM10 concentrations are particularly high in winter. One reason for this is increased particle emissions due to domestic heating and traffic load in December, January and February. As several studies in other European regions have indicated, a distinct effect of the large-scale synoptic weather situation in winter on local PM10 concentrations should be considered another reason. The main task of this study is to use seasonal synoptic weather types to describe the impact of the large-scale meteorological conditions on the local particle concentrations. These weather types are optimized with respect to daily mean PM10 data at 16 Bavarian cities and are classified using daily gridded NCEP/NCAR reanalysis data (2.5° x 2.5° horizontal resolution) for the period 1980 - 2011 over a Central European spatial domain. The weather types are related to monthly PM10 indices using different transfer techniques such as direct synoptic downscaling, multiple regression and generalized linear models, as well as random forests. The PM10 indices are determined by averaging daily to monthly data (PMmean) or by counting the daily exceedances of a particular threshold (> 50 μg/m3, PMe50). The generated transfer models are evaluated in calibration and validation periods using several forecast skill measures, for example the mean squared skill score (MSSS) or the Heidke Skill Score (HSS). 
The sufficiently performing models are then applied to weather types derived from future climate change scenarios of the global climate model ECHAM 6 for the IPCC scenarios RCP 4.5 and 8.5 in order to estimate future climate-change induced modifications of local PM10 concentrations in Bavaria.
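One of the verification measures named above, the Heidke Skill Score, follows from the standard 2x2 contingency table of binary forecasts versus observations. A small sketch (the exceedance series below is illustrative, not the study's data):

```python
import numpy as np

# Heidke Skill Score (HSS) for binary exceedance forecasts, e.g.
# whether a month's count of days with PM10 > 50 ug/m3 exceeds a
# threshold. Standard 2x2 contingency-table formulation.

def heidke_skill_score(forecast, observed):
    """forecast, observed: boolean arrays of event occurrence."""
    f = np.asarray(forecast, dtype=bool)
    o = np.asarray(observed, dtype=bool)
    a = np.sum(f & o)         # hits
    b = np.sum(f & ~o)        # false alarms
    c = np.sum(~f & o)        # misses
    d = np.sum(~f & ~o)       # correct negatives
    num = 2.0 * (a * d - b * c)
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

obs = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
print(heidke_skill_score(obs, obs))    # perfect forecast -> 1.0
print(heidke_skill_score(~obs, obs))   # fully wrong forecast -> -1.0
```

HSS measures skill relative to random chance: 1 is perfect, 0 is no better than chance, and negative values are worse than chance.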
The effects of magnetic fields and protostellar feedback on low-mass cluster formation
NASA Astrophysics Data System (ADS)
Cunningham, Andrew J.; Krumholz, Mark R.; McKee, Christopher F.; Klein, Richard I.
2018-05-01
We present a large suite of simulations of the formation of low-mass star clusters. Our simulations include an extensive set of physical processes - magnetohydrodynamics, radiative transfer, and protostellar outflows - and span a wide range of virial parameters and magnetic field strengths. Comparing the outcomes of our simulations to observations, we find that simulations remaining close to virial balance throughout their history produce star formation efficiencies and initial mass function (IMF) peaks that are stable in time and in reasonable agreement with observations. Our results indicate that small-scale dissipation effects near the protostellar surface provide a feedback loop for stabilizing the star formation efficiency. This is true regardless of whether the balance is maintained by input of energy from large-scale forcing or by strong magnetic fields that inhibit collapse. In contrast, simulations that leave virial balance and undergo runaway collapse form stars too efficiently and produce an IMF that becomes increasingly top heavy with time. In all cases, we find that the competition between magnetic flux advection towards the protostar and outward advection due to magnetic interchange instabilities, and the competition between turbulent amplification and reconnection close to newly formed protostars renders the local magnetic field structure insensitive to the strength of the large-scale field, ensuring that radiation is always more important than magnetic support in setting the fragmentation scale and thus the IMF peak mass. The statistics of multiple stellar systems are similarly insensitive to variations in the initial conditions and generally agree with observations within the range of statistical uncertainty.
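The virial balance discussed above is commonly quantified by the virial parameter alpha_vir = 5 sigma^2 R / (G M), with alpha_vir near unity marking approximate balance. A quick sketch with illustrative cloud numbers (not values from these simulations):

```python
# Virial parameter of a cloud: alpha_vir = 5 * sigma^2 * R / (G * M).
# alpha_vir ~ 1 indicates rough virial balance; values well below 1
# suggest runaway collapse. All input numbers are illustrative.

G = 6.674e-8         # gravitational constant, cgs (cm^3 g^-1 s^-2)
PC = 3.086e18        # cm per parsec
MSUN = 1.989e33      # g per solar mass

def alpha_vir(sigma_kms, radius_pc, mass_msun):
    sigma = sigma_kms * 1e5                       # km/s -> cm/s
    return 5 * sigma**2 * (radius_pc * PC) / (G * mass_msun * MSUN)

# A ~1000 Msun clump of 1 pc radius with 1 km/s velocity dispersion:
print(round(alpha_vir(1.0, 1.0, 1000.0), 2))  # ~1.16, near balance
```

In the simulations' terms, runs that keep alpha_vir of order unity are the ones whose star formation efficiencies and IMF peaks stay stable.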
USGS Releases New Digital Aerial Products
2005-01-01
The U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) has initiated distribution of digital aerial photographic products produced by scanning or digitizing film from its historical aerial photography film archive. This archive, located in Sioux Falls, South Dakota, contains thousands of rolls of film that contain more than 8 million frames of historic aerial photographs. The largest portion of this archive consists of original film acquired by Federal agencies from the 1930s through the 1970s to produce 1:24,000-scale USGS topographic quadrangle maps. Most of this photography is reasonably large scale (USGS photography ranges from 1:8,000 to 1:80,000) to support the production of the maps. Two digital products are currently available for ordering: high-resolution scanned products and medium-resolution digitized products.
Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe
NASA Astrophysics Data System (ADS)
Pinto, Joaquim G.; Gómara, Iñigo; Masato, Giacomo; Dacre, Helen F.; Woollings, Tim; Caballero, Rodrigo
2015-04-01
Some recent winters in Western Europe have been characterized by the occurrence of multiple extratropical cyclones following a similar path. The occurrence of such cyclone clusters leads to large socio-economic impacts due to damaging winds, storm surges, and floods. Recent studies have statistically characterized the clustering of extratropical cyclones over the North Atlantic and Europe and hypothesized potential physical mechanisms responsible for their formation. Here we analyze 4 months characterized by multiple cyclones over Western Europe (February 1990, January 1993, December 1999, and January 2007). The evolution of the eddy driven jet stream, Rossby wave-breaking, and upstream/downstream cyclone development are investigated to infer the role of the large-scale flow and to determine if clustered cyclones are related to each other. Results suggest that optimal conditions for the occurrence of cyclone clusters are provided by a recurrent extension of an intensified eddy driven jet toward Western Europe lasting at least 1 week. Multiple Rossby wave-breaking occurrences on both the poleward and equatorward flanks of the jet contribute to the development of these anomalous large-scale conditions. The analysis of the daily weather charts reveals that upstream cyclone development (secondary cyclogenesis, where new cyclones are generated on the trailing fronts of mature cyclones) is strongly related to cyclone clustering, with multiple cyclones developing on a single jet streak. The present analysis permits a deeper understanding of the physical reasons leading to the occurrence of cyclone families over the North Atlantic, enabling a better estimation of the associated cumulative risk over Europe.
Impacts of large-scale climatic disturbances on the terrestrial carbon cycle.
Erbrecht, Tim; Lucht, Wolfgang
2006-07-27
The amount of carbon dioxide in the atmosphere steadily increases as a consequence of anthropogenic emissions, but with large interannual variability caused by the terrestrial biosphere. These variations in the CO2 growth rate are caused by large-scale climate anomalies, but the relative contributions of vegetation growth and soil decomposition are uncertain. We use a biogeochemical model of the terrestrial biosphere to differentiate the effects of temperature and precipitation on net primary production (NPP) and heterotrophic respiration (Rh) during the two largest anomalies in atmospheric CO2 increase during the last 25 years. One of these, the smallest atmospheric year-to-year increase (largest land carbon uptake) in that period, was caused by global cooling in 1992/93 after the Pinatubo volcanic eruption. The other, the largest atmospheric increase on record (largest land carbon release), was caused by the strong El Niño event of 1997/98. We find that the LPJ model correctly simulates the magnitude of terrestrial modulation of atmospheric carbon anomalies for these two extreme disturbances. The response of soil respiration to changes in temperature and precipitation explains most of the modelled anomalous CO2 flux. Observed and modelled net ecosystem exchange (NEE) anomalies are in good agreement; we therefore suggest that the temporal variability of heterotrophic respiration produced by our model is reasonably realistic. We conclude that during the last 25 years the two largest disturbances of the global carbon cycle were strongly controlled by soil processes rather than by the response of vegetation to these large-scale climatic events.
Impacts of large-scale climatic disturbances on the terrestrial carbon cycle
Erbrecht, Tim; Lucht, Wolfgang
2006-01-01
Background The amount of carbon dioxide in the atmosphere steadily increases as a consequence of anthropogenic emissions, but with large interannual variability caused by the terrestrial biosphere. These variations in the CO2 growth rate are caused by large-scale climate anomalies, but the relative contributions of vegetation growth and soil decomposition are uncertain. We use a biogeochemical model of the terrestrial biosphere to differentiate the effects of temperature and precipitation on net primary production (NPP) and heterotrophic respiration (Rh) during the two largest anomalies in atmospheric CO2 increase during the last 25 years. One of these, the smallest atmospheric year-to-year increase (largest land carbon uptake) in that period, was caused by global cooling in 1992/93 after the Pinatubo volcanic eruption. The other, the largest atmospheric increase on record (largest land carbon release), was caused by the strong El Niño event of 1997/98. Results We find that the LPJ model correctly simulates the magnitude of terrestrial modulation of atmospheric carbon anomalies for these two extreme disturbances. The response of soil respiration to changes in temperature and precipitation explains most of the modelled anomalous CO2 flux. Conclusion Observed and modelled net ecosystem exchange (NEE) anomalies are in good agreement; we therefore suggest that the temporal variability of heterotrophic respiration produced by our model is reasonably realistic. We conclude that during the last 25 years the two largest disturbances of the global carbon cycle were strongly controlled by soil processes rather than by the response of vegetation to these large-scale climatic events. PMID:16930463
Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2016-12-01
The recent increased focus on small-scale seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily wastewater disposal, hydraulic fracturing for oil and gas recovery, and geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. With large-N arrays, however, we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.
Blazing Signature Filter: a library for fast pairwise similarity comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan
Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts, as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
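The core idea of binarizing profiles and screening with bit operators can be sketched as follows. This is an illustrative toy, not the BSF implementation itself; the threshold and data are made up:

```python
# Bit-parallel similarity screen: binarize each profile (e.g. gene
# up-regulated or not), pack the bits into integers, and use cheap
# bit operations as a coarse filter before any expensive comparison.

def to_bitmask(values, threshold=0.0):
    """Pack a numeric profile into one Python int, one bit per feature."""
    mask = 0
    for i, v in enumerate(values):
        if v > threshold:
            mask |= 1 << i
    return mask

def shared_bits(a, b):
    """Coarse similarity: number of features 'on' in both profiles."""
    return bin(a & b).count("1")

x = to_bitmask([0.9, -0.2, 1.4, 0.3])   # bits 0, 2, 3 set
y = to_bitmask([0.8, 0.5, 1.1, -0.7])   # bits 0, 1, 2 set
print(shared_bits(x, y))  # 2 shared 'on' features (bits 0 and 2)
```

Because one AND plus a popcount covers 64 features per machine word, pairs whose shared-bit count falls below a cutoff can be discarded without ever computing a full similarity measure.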
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. 
We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately at the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to the local ones, and they do not have to represent macroscopic uncertainty. The flow-aligned blocks also minimize numerical dispersion when solving the large-scale transport problem.
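The temporal moments used for the scale transition are simple raw moments of block-wise particle arrival times. A sketch on synthetic heavy-tailed arrivals (all data illustrative):

```python
import numpy as np

# Temporal moments (TM) of particle arrival times: the k-th raw
# moment is m_k = mean(t^k). Mean arrival time and an effective
# spread follow from the first two moments.

def temporal_moments(arrival_times, k_max=2):
    t = np.asarray(arrival_times, dtype=float)
    return [np.mean(t**k) for k in range(k_max + 1)]

rng = np.random.default_rng(3)
# Heavy-tailed arrivals, mimicking early bulk arrival plus long tailing.
t = rng.lognormal(mean=1.0, sigma=0.8, size=50_000)

m0, m1, m2 = temporal_moments(t)
mean_arrival = m1 / m0
spread = m2 / m0 - mean_arrival**2   # central second moment
print(f"mean arrival ~ {mean_arrival:.2f}, spread positive: {spread > 0}")
```

Matching such low-order moments to an MRMT model, rather than matching full breakthrough curves, is what keeps the scale transition cheap.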
NASA Astrophysics Data System (ADS)
Chen, Xiao; Dong, Gang; Jiang, Hua
2017-04-01
The instabilities of a three-dimensional sinusoidally premixed flame induced by an incident shock wave with Mach number 1.7 and its reshock waves were studied using the Navier-Stokes (NS) equations with a single-step chemical reaction and a high-resolution, 9th-order weighted essentially non-oscillatory scheme. The computational results were validated by a grid independence test and against experimental results in the literature. The computational results show that after the passage of the incident shock wave the flame interface develops a symmetric structure accompanied by large-scale transverse vortex structures. After the interactions with successive reshock waves, the flame interface is gradually destabilized and broken up, and the large-scale vortex structures are gradually transformed into small-scale vortex structures. The small-scale vortices later tend toward isotropy. The results also reveal that the evolution of the flame interface is affected by both the mixing process and the chemical reaction. In order to identify the relationship between mixing and chemical reaction, a dimensionless parameter, η, defined as the ratio of the mixing time scale to the chemical reaction time scale, is introduced. It is found that at each interaction stage the effect of chemical reaction is enhanced with time. The enhancement of the chemical reaction at the interaction stage with the incident shock wave is greater than that at the interaction stages with the reshock waves. The result suggests that the parameter η can reasonably characterize the features of flame interface development induced by multiple shock waves.
NASA Astrophysics Data System (ADS)
Lintner, B. R.; Loikith, P. C.; Pike, M.; Aragon, C.
2017-12-01
Climate change information is increasingly required at impact-relevant scales. However, most state-of-the-art climate models are not of sufficiently high spatial resolution to resolve features explicitly at such scales. This challenge is particularly acute in regions of complex topography, such as the Pacific Northwest of the United States. To address this scale mismatch problem, we consider large-scale meteorological patterns (LSMPs), which can be resolved by climate models and associated with the occurrence of local scale climate and climate extremes. In prior work, using self-organizing maps (SOMs), we computed LSMPs over the northwestern United States (NWUS) from daily reanalysis circulation fields and further related these to the occurrence of observed extreme temperatures and precipitation: SOMs were used to group LSMPs into 12 nodes or clusters spanning the continuum of synoptic variability over the regions. Here this observational foundation is utilized as an evaluation target for a suite of global climate models from the Fifth Phase of the Coupled Model Intercomparison Project (CMIP5). Evaluation is performed in two primary ways. First, daily model circulation fields are assigned to one of the 12 reanalysis nodes based on minimization of the mean square error. From this, a bulk model skill score is computed measuring the similarity between the model and reanalysis nodes. Next, SOMs are applied directly to the model output and compared to the nodes obtained from reanalysis. Results reveal that many of the models have LSMPs analogous to the reanalysis, suggesting that the models reasonably capture observed daily synoptic states.
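The node-assignment step described above, matching each daily circulation field to the reanalysis SOM node that minimizes the mean square error, can be sketched as follows; the fields and nodes here are random stand-ins for the real gridded data:

```python
import numpy as np

# Assign daily fields to pre-computed SOM nodes by minimum MSE,
# as used to compare model circulation fields against the 12
# reanalysis nodes.

def assign_to_nodes(fields, nodes):
    """fields: (n_days, n_grid); nodes: (n_nodes, n_grid).
    Returns the index of the best-matching node for each day."""
    # MSE between every day and every node, via broadcasting
    mse = np.mean((fields[:, None, :] - nodes[None, :, :]) ** 2, axis=2)
    return np.argmin(mse, axis=1)

rng = np.random.default_rng(4)
nodes = rng.normal(size=(12, 100))     # 12 nodes on a 100-point grid
# Three "days" that are noisy copies of nodes 3, 7, and 0:
days = nodes[[3, 7, 0]] + 0.1 * rng.normal(size=(3, 100))

print(assign_to_nodes(days, nodes))  # [3 7 0]
```

Tabulating these assignments against the reanalysis node frequencies is one way to build the bulk skill score the abstract describes.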
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abajyan, T.; Abbott, B.
2013-05-15
A measurement of splitting scales, as defined by the k_T clustering algorithm, is presented for final states containing a W boson produced in proton–proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb^-1, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a k_T cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
Hallman, Guy J; Parker, Andrew C; Blackburn, Carl M
2013-04-01
The pros and cons of a generic phytosanitary irradiation dose against all Lepidoptera pupae on all commodities are discussed. The measure of efficacy is prevention of the F1 generation from hatching (F1 egg hatch) when late pupae are irradiated. More data exist for this measure than for others studied, and it is also commercially tenable (i.e., prevention of adult emergence would require a high dose not tolerated by fresh commodities). The dose required to prevent F1 egg hatch provides a liberal margin of security for various reasons. A point at issue is that correctly irradiated adults could be capable of flight and thus be found in survey traps in importing countries, resulting in costly and unnecessary regulatory action. However, this possibility would be rare and should not be a barrier to the adoption of this generic treatment. The literature was thoroughly examined, and only studies that could reasonably satisfy criteria of acceptable irradiation and evaluation methodology, proper age of pupae, and adequate presentation of raw data were accepted. Based on studies with 34 species in nine families, we suggest an efficacious dose of 400 Gy. However, large-scale confirmatory testing (≥ 30,000 individuals) has only been reported for one species. A dose as low as 350 Gy might suffice if results of more large-scale studies were available or if the measure of efficacy were extended beyond prevention of F1 egg hatch, but data to defend measures of efficacy beyond F1 egg hatch are scarce and more would need to be generated.
Novel quantum phase transition from bounded to extensive entanglement
Zhang, Zhao; Ahmadain, Amr
2017-01-01
The nature of entanglement in many-body systems is a focus of intense research with the observation that entanglement holds interesting information about quantum correlations in large systems and their relation to phase transitions. In particular, it is well known that although generic, many-body states have large, extensive entropy, ground states of reasonable local Hamiltonians carry much smaller entropy, often associated with the boundary length through the so-called area law. Here we introduce a continuous family of frustration-free Hamiltonians with exactly solvable ground states and uncover a remarkable quantum phase transition whereby the entanglement scaling changes from area law into extensively large entropy. This transition shows that entanglement in many-body systems may be enhanced under special circumstances with a potential for generating “useful” entanglement for the purpose of quantum computing and that the full implications of locality and its restrictions on possible ground states may hold further surprises. PMID:28461464
Novel quantum phase transition from bounded to extensive entanglement.
Zhang, Zhao; Ahmadain, Amr; Klich, Israel
2017-05-16
The nature of entanglement in many-body systems is a focus of intense research with the observation that entanglement holds interesting information about quantum correlations in large systems and their relation to phase transitions. In particular, it is well known that although generic, many-body states have large, extensive entropy, ground states of reasonable local Hamiltonians carry much smaller entropy, often associated with the boundary length through the so-called area law. Here we introduce a continuous family of frustration-free Hamiltonians with exactly solvable ground states and uncover a remarkable quantum phase transition whereby the entanglement scaling changes from area law into extensively large entropy. This transition shows that entanglement in many-body systems may be enhanced under special circumstances with a potential for generating "useful" entanglement for the purpose of quantum computing and that the full implications of locality and its restrictions on possible ground states may hold further surprises.
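For orientation, the transition described above can be stated as a change in how the entanglement entropy of a block of L contiguous sites scales (a standard relation for one-dimensional chains, not specific to this paper's Hamiltonians):

```latex
S(\rho_L) = O(1) \quad (\text{area law in 1D})
\qquad \longrightarrow \qquad
S(\rho_L) \propto L \quad (\text{extensive})
```

In one dimension the boundary of a block is O(1) sites, so the area law bounds the block entropy by a constant; the remarkable feature of the reported transition is that ground-state entropy instead grows with the block size itself.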
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boore, Jeffrey L.
2004-11-27
Although the phylogenetic relationships of many organisms have been convincingly resolved by the comparisons of nucleotide or amino acid sequences, others have remained equivocal despite great effort. Now that large-scale genome sequencing projects are sampling many lineages, it is becoming feasible to compare large data sets of genome-level features and to develop this as a tool for phylogenetic reconstruction that has advantages over conventional sequence comparisons. Although it is unlikely that these will address a large number of evolutionary branch points across the broad tree of life due to the infeasibility of such sampling, they have great potential for convincingly resolving many critical, contested relationships for which no other data seems promising. However, it is important that we recognize potential pitfalls, establish reasonable standards for acceptance, and employ rigorous methodology to guard against a return to earlier days of scenario-driven evolutionary reconstructions.
Monitoring of Sea Ice Dynamic by Means of ERS-Envisat Tandem Cross-Interferometry
NASA Astrophysics Data System (ADS)
Pasquali, Paolo; Cantone, Alessio; Barbieri, Massimo; Engdahl, Marcus
2010-03-01
The interest in the monitoring of sea ice masses has increased greatly over the past decades for a variety of reasons, including navigation in northern-latitude waters, transportation of petroleum, exploitation of mineral deposits in the Arctic, and the use of icebergs as a source of fresh water. The availability of ERS-Envisat 28-minute tandem acquisitions from dedicated campaigns, covering large areas in the northern latitudes with large geometrical baseline and very short temporal separation, allows the precise estimation of sea ice displacement fields with an accuracy that cannot be obtained on a large scale from any other instrument. This article presents different results of sea ice dynamics monitoring over northern Canada obtained within the "ERS-Envisat Tandem Cross-Interferometry Campaigns: CInSAR processing and studies over extended areas" project from data acquired during the 2008-2009 Tandem campaign.
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
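The patent's exact membership functions are not given in the abstract; the following is a minimal Python sketch of the idea, in which the thresholds `lo`, `hi` and the nudge size `step` are illustrative assumptions. Each pixel whose gray value is ambiguous is pushed toward its neighborhood's tendency, with the push weighted by a fuzzy degree of ambiguity:

```python
def enhance(image, lo=85, hi=170, step=32):
    """Toy fuzzy-style contrast push for a gray-scale image (list of
    lists, values 0-255). Pixels with 'ambiguous' values between lo and
    hi are nudged toward the local neighborhood tendency, reducing
    vagueness before binarization."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            v = image[y][x]
            if not (lo < v < hi):
                continue                      # already clearly dark or bright
            nbrs = [image[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))
                    if (i, j) != (x, y)]
            mean = sum(nbrs) / len(nbrs)
            # ambiguity membership peaks at mid-gray; it scales the nudge
            mu = 1 - abs(v - 127.5) / 127.5
            out[y][x] = int(round(v + mu * step * (1 if mean > v else -1)))
    return out
```

A subsequent global threshold on the enhanced image then loses less information than thresholding the raw image directly.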
Mancopes, Renata; Gonçalves, Bruna Franciele da Trindade; Costa, Cintia Conceição; Favero, Talita Cristina; Drozdz, Daniela Rejane Constantino; Bilheri, Diego Fernando Dorneles; Schumacher, Stéfani Fernanda
2014-01-01
To correlate the reason for referral to the speech therapy service at a university hospital with the results of clinical and objective assessment of risk for dysphagia. This is a cross-sectional, observational, retrospective, analytical and quantitative study. The data were gathered from the database, and the information used comprised the reason for referral to the speech therapy service, the results of the clinical assessment of risk for dysphagia, and the results of swallowing videofluoroscopy. There were mean differences between the reason-for-referral variables, the results of the clinical and objective swallowing assessments, and the penetration/aspiration scale, although the values were not statistically significant. A statistically significant correlation was observed between the clinical and objective assessments and the penetration scale, the strongest being between the results of the objective assessment and the penetration scale. There was a correlation between the clinical and objective assessments of swallowing, and there were mean differences between the reason-for-referral variables and their respective assessments. This shows the importance of combining data from the patient's history with the results of clinical evaluation and complementary tests, such as videofluoroscopy, for correct identification of swallowing disorders, and of additionally using penetration/aspiration severity scales for diagnosis.
77 FR 9700 - Large Residential Washers From Korea and Mexico
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
...)] Large Residential Washers From Korea and Mexico Determinations On the basis of the record \\1\\ developed... reasonable indication that an industry is materially injured by reason of imports from Mexico of large... imports of large residential washers from Mexico. Accordingly, effective December 30, 2011, the Commission...
Construct Validity and Reliability of College Students' Responses to the Reasons for Smoking Scale
ERIC Educational Resources Information Center
Fiala, Kelly Ann; D'Abundo, Michelle Lee; Marinaro, Laura Marie
2010-01-01
When utilizing self-assessments to determine motives for health behaviors, it is essential that the resulting data demonstrate sound psychometric properties. The purpose of this research was to assess the reliability and construct validity of college students' responses to the Reasons for Smoking Scale (RFS). Confirmatory factor analyses and…
Decision tree rating scales for workload estimation: Theme and variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The Modified Cooper-Harper (MCH) scale, which is a sensitive indicator of workload in several different types of aircrew tasks, was examined. The study determined whether variations of the scale might provide greater sensitivity and investigated the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were examined in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. It is indicated that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The MCH scale results are consistent with earlier experiments. The rating scale experiments are reported, and the questionnaire results, which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations, are described.
Decision Tree Rating Scales for Workload Estimation: Theme and Variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.
2012-01-01
Background Disturbance is an important process structuring ecosystems worldwide and has long been thought to be a significant driver of diversity and dynamics. In forests, most studies of disturbance have focused on large-scale disturbance such as hurricanes or tree-falls. However, smaller sub-canopy disturbances could also have significant impacts on community structure. One such sub-canopy disturbance in tropical forests is abscising leaves of large arborescent palm (Arecaceae) trees. These leaves can weigh up to 15 kg and cause physical damage and mortality to juvenile plants. Previous studies examining this question suffered from the use of static data at small spatial scales. Here we use data from a large permanent forest plot combined with dynamic data on the survival and growth of > 66,000 individuals over a seven-year period to address whether falling palm fronds do impact neighboring seedling and sapling communities, or whether there is an interaction between the palms and peccaries rooting for fallen palm fruit in the same area as falling leaves. We tested the wider generalisation of these hypotheses by comparing seedling and sapling survival under fruiting and non-fruiting trees in another family, the Myristicaceae. Results We found a spatially-restricted but significant effect of large arborescent fruiting palms on the spatial structure, population dynamics and species diversity of neighbouring sapling and seedling communities. However, these effects were not found around slightly smaller non-fruiting palm trees, suggesting it is seed predators such as peccaries rather than falling leaves that impact on the communities around palm trees. Conversely, this hypothesis was not supported in data from other edible species, such as those in the family Myristicaceae.
Conclusions Given the abundance of arborescent palm trees in Amazonian forests, it is reasonable to conclude that their presence does have a significant, if spatially-restricted, impact on juvenile plants, most likely on the survival and growth of seedlings and saplings damaged by foraging peccaries. Given the abundance of fruit produced by each palm, the widespread effects of these small-scale disturbances appear, over long time-scales, to cause directional changes in community structure at larger scales. PMID:22429883
Queenborough, Simon A; Metz, Margaret R; Wiegand, Thorsten; Valencia, Renato
2012-03-19
Disturbance is an important process structuring ecosystems worldwide and has long been thought to be a significant driver of diversity and dynamics. In forests, most studies of disturbance have focused on large-scale disturbance such as hurricanes or tree-falls. However, smaller sub-canopy disturbances could also have significant impacts on community structure. One such sub-canopy disturbance in tropical forests is abscising leaves of large arborescent palm (Arecaceae) trees. These leaves can weigh up to 15 kg and cause physical damage and mortality to juvenile plants. Previous studies examining this question suffered from the use of static data at small spatial scales. Here we use data from a large permanent forest plot combined with dynamic data on the survival and growth of > 66,000 individuals over a seven-year period to address whether falling palm fronds do impact neighboring seedling and sapling communities, or whether there is an interaction between the palms and peccaries rooting for fallen palm fruit in the same area as falling leaves. We tested the wider generalisation of these hypotheses by comparing seedling and sapling survival under fruiting and non-fruiting trees in another family, the Myristicaceae. We found a spatially-restricted but significant effect of large arborescent fruiting palms on the spatial structure, population dynamics and species diversity of neighbouring sapling and seedling communities. However, these effects were not found around slightly smaller non-fruiting palm trees, suggesting it is seed predators such as peccaries rather than falling leaves that impact on the communities around palm trees. Conversely, this hypothesis was not supported in data from other edible species, such as those in the family Myristicaceae.
Given the abundance of arborescent palm trees in Amazonian forests, it is reasonable to conclude that their presence does have a significant, if spatially-restricted, impact on juvenile plants, most likely on the survival and growth of seedlings and saplings damaged by foraging peccaries. Given the abundance of fruit produced by each palm, the widespread effects of these small-scale disturbances appear, over long time-scales, to cause directional changes in community structure at larger scales.
The effect of wind tunnel wall interference on the performance of a fan-in-wing VTOL model
NASA Technical Reports Server (NTRS)
Heyson, H. H.
1974-01-01
A fan-in-wing model with a 1.07-meter span was tested in seven different test sections with cross-sectional areas ranging from 2.2 sq meters to 265 sq meters. The data from the different test sections are compared both with and without correction for wall interference. The results demonstrate that extreme care must be used in interpreting uncorrected VTOL data, since the wall interference may be so large as to invalidate even trends in the data. The wall interference is particularly large at the tail, a result which is in agreement with recently published comparisons of flight and large-scale wind tunnel data for a propeller-driven deflected-slipstream configuration. The data verify the wall-interference theory even under conditions of extreme interference. A method is presented that yields reasonable estimates of the onset of Rae's minimum-speed limit. The customary rules for choosing model sizes to produce negligible wall effects are considerably in error and permit the use of excessively large models.
McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
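Model skill against repeat bathymetric surveys of this kind is typically summarized with a few bulk error measures. A minimal sketch (function and variable names are illustrative, not taken from the study):

```python
import math

def elevation_change_stats(measured, simulated):
    """Compare simulated against measured bed-elevation change on
    matching grids (lists of lists, metres). Returns (rmse, bias,
    scour_underprediction); a positive last value means the model
    under-predicts the deepest scour, as reported above."""
    diffs = [s - m for mrow, srow in zip(measured, simulated)
                   for m, s in zip(mrow, srow)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    scour_underprediction = (min(min(row) for row in simulated)
                             - min(min(row) for row in measured))
    return rmse, bias, scour_underprediction
```

Separating bias from RMSE distinguishes a systematic offset from scatter, and the scour term isolates the specific deficiency the paper notes.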
NASA Astrophysics Data System (ADS)
Kiani, Keivan
2017-09-01
The large deformation regime of micro-scale slender beam-like structures subjected to axially pointed loads is of high interest to nanotechnologists and the applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing the modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In some particular cases, the predicted results of the analytical approach are also checked against those of the AMM, and a reasonably good agreement is reported. Subsequently, the key role of the material length scale in the load-deformation of microbeams is discussed, and the deficiencies of the classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered a pivotal step toward better understanding the postbuckling behavior of nano-/micro-electro-mechanical systems consisting of microbeams.
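For context, a standard result of the modified couple stress theory (not the paper's full nonlinear system) is that the material length scale ℓ stiffens the classical Euler-Bernoulli bending rigidity, as seen in the linearized buckling equation:

```latex
\left(EI + \mu A \ell^{2}\right)\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}
  + P\,\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}} = 0
```

where μ is the shear modulus and A the cross-sectional area. As ℓ → 0 the classical, size-independent theory is recovered, which is why classical elasticity under-predicts the stiffness of micro-scale beams.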
Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology
Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob
2014-01-01
One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614
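The standardized pipeline interfaces advocated above can be sketched in Python; all class and method names here are hypothetical illustrations, not any existing project's API:

```python
from abc import ABC, abstractmethod

class PipelineStage(ABC):
    """Hypothetical standardized interface between elements of an
    extracellular-recording data pipeline: any stage that consumes and
    produces sample blocks can be swapped for another implementation."""

    @abstractmethod
    def process(self, samples):
        """Consume a block of samples, return the transformed block."""

class BandpassFilter(PipelineStage):
    """First-order high-pass as a toy stand-in for a real spike-band filter."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self._prev = 0.0

    def process(self, samples):
        out = []
        for s in samples:
            out.append(s - self._prev)                   # remove running baseline
            self._prev = self._prev + self.alpha * (s - self._prev)
        return out

def run_pipeline(stages, samples):
    # Stages chain through the common interface, so open-source and
    # commercial components can interoperate at these boundaries.
    for stage in stages:
        samples = stage.process(samples)
    return samples
```

The point of the interface is the boundary, not the filter: any stage honoring `process()` can replace any other without touching the rest of the pipeline.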
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)
1996-01-01
The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.
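The two-factor scaling described above amounts to a simple post-processing step on the computed harmonic frequencies; the factor values and the C-H cutoff below are illustrative placeholders, not the paper's fitted values:

```python
def scale_frequencies(freqs, ch_factor=0.96, other_factor=0.98):
    """Apply two-factor scaling to DFT harmonic frequencies (cm^-1):
    one factor for C-H stretches (taken here as modes above 2900 cm^-1,
    an illustrative cutoff) and a second factor for all other modes."""
    return [f * (ch_factor if f > 2900.0 else other_factor) for f in freqs]
```

In practice the two factors are obtained by least-squares fits of computed to experimental fundamentals for the two groups of modes separately.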
Numerical Simulation of the Large-Scale North American Monsoon Water Sources
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Sud, Yogesh C.; Schubert, Siegfried D.; Walker, Gregory K.
2002-01-01
A general circulation model (GCM) that includes water vapor tracer (WVT) diagnostics is used to delineate the dominant sources of water vapor for precipitation during the North American monsoon. A 15-year model simulation carried out with one-degree horizontal resolution and time-varying sea surface temperature is able to produce reasonable large-scale features of the monsoon precipitation. Within the core of the Mexican monsoon, continental sources provide much of the water for precipitation. Away from the Mexican monsoon (eastern Mexico and Texas), continental sources generally decrease with monsoon onset. Tropical Atlantic Ocean sources of water gain influence in the southern Great Plains states, where the total precipitation decreases during the monsoon onset. Pacific Ocean sources do contribute to the monsoon, but tend to be weaker after onset. When the development of the monsoons is evaluated, soil water and surface evaporation prior to monsoon onset do not correlate with the eventual monsoon intensity. However, the most intense monsoons do use more local sources of water than the least intense monsoons, but only after the onset. This suggests that precipitation recycling is an important factor in monsoon intensity.
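The water vapor tracer bookkeeping behind a statement like "continental sources provide much of the water" reduces, per region, to summing tagged contributions to precipitation. A toy sketch (region names and values are illustrative):

```python
def recycling_ratio(tracer_precip, local_regions):
    """Given precipitation attributed to tagged evaporation sources
    (a dict of region -> mm), return the fraction supplied by the
    'local' regions -- the precipitation recycling measure discussed
    above. Region names are illustrative placeholders."""
    total = sum(tracer_precip.values())
    local = sum(tracer_precip[r] for r in local_regions)
    return local / total if total else 0.0
```

In the GCM, each tagged tracer is a separate prognostic moisture field carried through advection, evaporation, and precipitation, so the attribution is exact rather than inferred.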
NASA Astrophysics Data System (ADS)
Sakata, Yasuyo
Interviews, resource acquisition, photographic surveys, and questionnaires were carried out in the “n” Community of the “y” District in Hakusan City, Ishikawa Prefecture, to investigate the actual condition of paddy-field levee maintenance in an area where the land-rental market was developing, large-scale farming was dominant, and farmland was geographically scattered. In the study zone, 1) an agricultural production legal person rent-cultivated some of the paddy fields and maintained the levees, and 2) another agricultural production legal person rent-cultivated some of the soybean fields used for crop changeover while the land owners maintained the levees. The results indicated that sufficient maintenance was executed on the levees of the paddy fields cultivated by the agricultural production legal person, the soybean fields for crop changeover, and the paddy fields cultivated by the land owners. The reasons are considered to include managerial strategy, economic incentives, and mutual monitoring and cross-regulatory mechanisms.
NASA Astrophysics Data System (ADS)
Bauschlicher, Charles W.; Langhoff, Stephen R.
1997-07-01
The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Møller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral polycyclic aromatic hydrocarbons (PAHs) where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.
Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology.
Siegle, Joshua H; Hale, Gregory J; Newman, Jonathan P; Voigts, Jakob
2015-06-01
One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is 'open' or 'closed': that is, whether or not the system's schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors.
Community and occupational health concerns in pork production: a review.
Donham, K J
2010-04-01
Public concerns relative to adverse consequences of large-scale livestock production have been increasingly voiced since the late 1960s. Numerous regional, national, and international conferences have been held on the subject since 1994. This paper provides a review of the literature on the community and occupational health concerns of large-scale livestock production with a focus on pork production. The industry has recognized the concerns of the public, and the national and state pork producer groups are including these issues as an important component of their research and policy priorities. One reason large-scale livestock production has raised concern is that a significant component of the industry has separated from traditional family farming and has developed like other industries in management, structure, and concentration. The magnitude of the problem cited by environmental groups has often been criticized by the pork production industry for lack of science-based evidence to document environmental concerns. In addition to general environmental concerns, occupational health of workers has become more relevant because many operations now are employing more than 10 employees, which brings many operations in the United States under the scrutiny of the US Occupational Safety and Health Administration. In this paper, the scientific literature is reviewed relative to the science basis of occupational and environmental impacts on community and worker health. Further, recommendations are made to help promote sustainability of the livestock industry within the context of maintaining good stewardship of our environmental and human capital.
Shapes of strong shock fronts in an inhomogeneous solar wind
NASA Technical Reports Server (NTRS)
Heinemann, M. A.; Siscoe, G. L.
1974-01-01
The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.
Research on TCP/IP network communication based on Node.js
NASA Astrophysics Data System (ADS)
Huang, Jing; Cai, Lixiong
2018-04-01
In the face of big data, long-lived connections, and high concurrency, TCP/IP network communication can hit performance bottlenecks under a blocking, multi-threaded service model. This paper presents a method of TCP/IP network communication based on Node.js. On the basis of an analysis of the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the principle of its efficiency is discussed; the TCP/IP network communication model is then compared and analyzed to explain why the TCP/IP protocol stack is widely used in network communication. Finally, to handle the large data volumes and high concurrency of a large-scale grape-growing environment monitoring process, a TCP server design based on Node.js is completed. The results show that the example runs stably and efficiently.
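The event-loop model credited above for Node.js's efficiency can be illustrated with Python's asyncio, an analogous single-threaded, non-blocking design; this is a generic echo-server sketch, not the paper's monitoring server, and the port and handler logic are illustrative:

```python
import asyncio

async def handle(reader, writer):
    # Each connection is served by a coroutine on a single-threaded event
    # loop -- the same non-blocking model Node.js uses for its TCP servers,
    # avoiding the per-connection threads of a blocking design.
    while data := await reader.read(1024):
        writer.write(data)          # echo the payload back (toy protocol)
        await writer.drain()
    writer.close()

async def main(host="127.0.0.1", port=8888):
    # Hypothetical entry point; a monitoring server would parse sensor
    # payloads in handle() instead of echoing them.
    server = await asyncio.start_server(handle, host, port)
    async with server:
        await server.serve_forever()
```

While one coroutine awaits network I/O, the loop services other connections, which is what lets a single thread sustain many long-lived sensor connections.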
Hail statistics for Germany derived from single-polarization radar data
NASA Astrophysics Data System (ADS)
Puskeiler, Marc; Kunz, Michael; Schmidberger, Manuel
2016-09-01
Despite the considerable damage potential related to severe hailstorms, knowledge about the local hail probability in Germany is very limited. Constructing a reliable hail probability map is challenging due largely to the lack of direct hail observations. In our study, we suggest a reasonable method by which to estimate hail signals from 3D radar reflectivity measured by conventional single-polarization radars between 2005 and 2011. Evaluating the radar-derived hail days with loss data from a building and an agricultural insurance company confirmed the reliability of the method and the results as expressed, for example, by a Heidke Skill Score HSS of 0.7. Overall, radar-derived hail days demonstrate very high spatial variability, which reflects the local-scale nature of deep moist convection. Nonetheless, systematic patterns related to climatic conditions and orography can also be observed. On the large scale, the number of hail days substantially increases from north to south, which may plausibly be explained by the higher thermal instability in the south. At regional and local scales, several hot spots with elevated hail frequency can be identified, in most cases downstream of the mountains. Several other characteristics including convective energy related to the events identified, differences in track lengths, and seasonal cycles are discussed.
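The Heidke Skill Score quoted above is computed from a 2×2 contingency table of radar-detected versus insurance-confirmed hail days (the formula is standard; the counts in the usage below are illustrative):

```python
def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    """Heidke Skill Score for a 2x2 contingency table:
    1 = perfect detection, 0 = no skill beyond random chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    numerator = 2 * (a * d - b * c)
    denominator = (a + c) * (c + d) + (a + b) * (b + d)
    return numerator / denominator
```

Because the score discounts chance agreement, an HSS of 0.7 indicates substantially better-than-random correspondence between radar-derived hail days and the insurance loss data.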
Living laboratory: whole-genome sequencing as a learning healthcare enterprise.
Angrist, M; Jamal, L
2015-04-01
With the proliferation of affordable large-scale human genomic data come profound and vexing questions about management of such data and their clinical uncertainty. These issues challenge the view that genomic research on human beings can (or should) be fully segregated from clinical genomics, either conceptually or practically. Here, we argue that the sharp distinction between clinical care and research is especially problematic in the context of large-scale genomic sequencing of people with suspected genetic conditions. Core goals of both enterprises (e.g. understanding genotype-phenotype relationships; generating an evidence base for genomic medicine) are more likely to be realized at a population scale if both those ordering and those undergoing sequencing for diagnostic reasons are routinely and longitudinally studied. Rather than relying on expensive and lengthy randomized clinical trials and meta-analyses, we propose leveraging nascent clinical-research hybrid frameworks into a broader, more permanent instantiation of exploratory medical sequencing. Such an investment could enlighten stakeholders about the real-life challenges posed by whole-genome sequencing, such as establishing the clinical actionability of genetic variants, returning 'off-target' results to families, developing effective service delivery models and monitoring long-term outcomes.
NASA Astrophysics Data System (ADS)
Crasemann, Berit; Handorf, Dörthe; Jaiser, Ralf; Dethloff, Klaus; Nakamura, Tetsu; Ukita, Jinro; Yamazaki, Koji
2017-12-01
In the framework of atmospheric circulation regimes, we study whether the recent Arctic sea ice loss and Arctic Amplification are associated with changes in the frequency of occurrence of preferred atmospheric circulation patterns during the extended winter season from December to March. To determine regimes we applied a cluster analysis to sea-level pressure fields from reanalysis data and output from an atmospheric general circulation model. The specific setup of the two analyzed model simulations for low and high ice conditions allows for attributing differences between the simulations to the prescribed sea ice changes only. The reanalysis data revealed two circulation patterns that occur more frequently for low Arctic sea ice conditions: a Scandinavian blocking in December and January and a negative North Atlantic Oscillation pattern in February and March. An analysis of related patterns of synoptic-scale activity and 2 m temperatures provides a synoptic interpretation of the corresponding large-scale regimes. The regimes that occur more frequently for low sea ice conditions are reproduced reasonably well by the model simulations. Based on those results we conclude that the detected changes in the frequency of occurrence of large-scale circulation patterns can be associated with changes in Arctic sea ice conditions.
Deutsch, Anne-Marie; Lande, R Gregory
2017-07-01
Military suicide rates have been rising over the past decade and continue to challenge military treatment facilities. Assessing suicide risk and improving treatments are a large part of the mission for clinicians who work with uniformed service members. This study attempts to expand the toolkit of military suicide prevention by focusing on protective factors over risk factors. In 1983, Marsha Linehan published a checklist called the Reasons for Living Scale, which asked subjects to check the reasons they choose to continue living, rather than choosing suicide. The authors of this article hypothesized that military service members may have different or additional reasons to live which may relate to their military service. They created a new version of Linehan's inventory by adding protective factors related to military life. The purpose of these additions was to make the inventory more acceptable and relevant to the military population, as well as to identify whether these items constitute a separate subscale as distinguished from previously identified factors. A commonly used assessment tool, the Reasons for Living Inventory (RFL) designed by Marsha Linehan, was expanded to offer items geared to the military population. The RFL presents users with a list of items which may be reasons to not commit suicide (e.g., "I have a responsibility and commitment to my family"). The authors used focus groups of staff and patients in a military psychiatric partial hospitalization program to identify military-centric reasons to live. This process yielded 20 distinct items which were added to Linehan's original list of 48. This expanded list became the Reasons for Living-Military Version. A sample of 200 patients in the military partial hospitalization program completed the inventory at time of or close to admission. 
This study was approved by the Institutional Review Board at Walter Reed National Military Medical Center for adhering to ethical principles related to pursuing research with human subjects. The rotated factor matrix revealed six factors that have been labeled as follows: Survival and Coping Beliefs, Military Values, Responsibility to Family, Fear of Suicide/Disability/Unknown, Moral Objections, and Child-Related Concerns. The subscale of Military Values is a new factor reflecting the addition of military items to the original RFL. Results suggest that formally assessing protective factors in a military psychiatric population has potential as a useful tool in the prevention of military suicide and therefore warrants further research. The latent factor we have entitled "Military Values" may help identify those service members for whom military training or "esprit de corps" is a reason for living. Future research can focus on further validation, pre/post-treatment effects on scores, expanded clinical use to stimulate increased will to live, or evaluation of whether scores on this scale, or the subscale of Military Values, can predict future suicidal behavior by service members. Finally, a larger sample size may produce more robust results to support these findings. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Carlozzi, Noelle E; Kirsch, Ned L; Kisala, Pamela A; Tulsky, David S
2015-01-01
This study examined the clinical utility of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) in individuals with complicated mild, moderate, or severe TBI. One hundred individuals with TBI (n = 35 complicated mild or moderate TBI; n = 65 severe TBI) and 100 control participants matched on key demographic variables from the WAIS-IV normative dataset completed the WAIS-IV. Univariate analyses indicated that participants with severe TBI had poorer performance than matched controls on all index scores and subtests (except Matrix Reasoning). Individuals with complicated mild/moderate TBI performed more poorly than controls on the Working Memory Index (WMI), Processing Speed Index (PSI), and Full Scale IQ (FSIQ), and on five subtests: the two processing speed subtests (SS, CD), two working memory subtests (AR, LN), and a perceptual reasoning subtest (BD). Participants with severe TBI had significantly lower scores than those with complicated mild/moderate TBI on PSI, and on three subtests: the two processing speed subtests (SS and CD) and the new Visual Puzzles subtest. Effect sizes for index and subtest scores were generally small-to-moderate for the group with complicated mild/moderate TBI and moderate-to-large for the group with severe TBI. PSI also showed good sensitivity and specificity for classifying individuals with severe TBI versus controls. Findings provide support for the clinical utility of the WAIS-IV in individuals with complicated mild, moderate, and severe TBI.
Navier-Stokes computations useful in aircraft design
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1990-01-01
Large-scale Navier-Stokes computations about aircraft components as well as reasonably complete aircraft configurations are presented and discussed. Speed and memory requirements are described for various general problem classes, which in some cases are already being used in the industrial design environment. Recent computed results, with experimental comparisons when available, are included to highlight the presentation. Finally, prospects for the future are described and recommendations for areas of concentrated research are indicated. The future of Navier-Stokes computations is seen to be rapidly expanding across a broad front of applications, which includes the entire subsonic-to-hypersonic speed regime.
Reinforcing loose foundation stones in trait-based plant ecology.
Shipley, Bill; De Bello, Francesco; Cornelissen, J Hans C; Laliberté, Etienne; Laughlin, Daniel C; Reich, Peter B
2016-04-01
The promise of "trait-based" plant ecology is one of generalized prediction across organizational and spatial scales, independent of taxonomy. This promise is a major reason for the increased popularity of this approach. Here, we argue that some important foundational assumptions of trait-based ecology have not received sufficient empirical evaluation. We identify three such assumptions and, where possible, suggest methods of improvement: (i) traits are functional to the degree that they determine individual fitness, (ii) intraspecific variation in functional traits can be largely ignored, and (iii) functional traits show general predictive relationships to measurable environmental gradients.
Social-psychological correlates of drug use among Colombian university students.
Marin, G
1976-01-01
This paper reports the results of a large-scale study conducted among Colombian college students directed at finding patterns of use of legal and illegal drugs. The subjects reported reasons for use and nonuse of the different substances, use on the part of parents, parents' attitudes toward use, effects, attitudes toward drug use, and a series of sociodemographic variables. The results were later analyzed in relation to social learning variables that could differentiate marijuana users and nonusers. Results are also reported on personality differences between users and nonusers as measured by the Maudsley Personality Inventory.
Magnetospheric convection during quiet or moderately disturbed times
NASA Technical Reports Server (NTRS)
Caudal, G.; Blanc, M.
1988-01-01
The processes which contribute to the large-scale plasma circulation in the earth's environment during quiet times, or during reasonably stable magnetic conditions, are reviewed. The various sources of field-aligned current generation in the solar wind and the magnetosphere are presented. The generation of field-aligned currents on open field lines connected to either polar cap and on closed field lines of the inner magnetosphere is examined. Consideration is given to the hypothesis of Caudal (1987) that loss processes of trapped particles compete with adiabatic motions in the generation of field-aligned currents in the inner magnetosphere.
COBE DMR-normalized open inflation cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.
1995-01-01
A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone does not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Omega(sub 0) is approximately 0.3-0.4 and merits further study.
Nonthermal steady states after an interaction quench in the Falicov-Kimball model.
Eckstein, Martin; Kollar, Marcus
2008-03-28
We present the exact solution of the Falicov-Kimball model after a sudden change of its interaction parameter using nonequilibrium dynamical mean-field theory. For different interaction quenches between the homogeneous metallic and insulating phases the system relaxes to a nonthermal steady state on time scales on the order of ℏ/bandwidth, showing collapse and revival with an approximate period of h/interaction if the interaction is large. We discuss the reasons for this behavior and provide a statistical description of the final steady state by means of generalized Gibbs ensembles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.; Kaneshige, Michael J.; Erikson, William W.
In this study, we have made reasonable cookoff predictions of large-scale explosive systems by using pressure-dependent kinetics determined from small-scale experiments. Scale-up is determined by properly accounting for pressure generated from gaseous decomposition products and the volume that these reactive gases occupy, e.g. trapped within the explosive, the system, or vented. The pressure effect on the decomposition rates has been determined for different explosives by using both vented and sealed experiments at low densities. Low-density explosives are usually permeable to decomposition gases and can be used in both vented and sealed configurations to determine pressure-dependent reaction rates. In contrast, explosives that are near the theoretical maximum density (TMD) are not as permeable to decomposition gases, and pressure-dependent kinetics are difficult to determine. Ignition in explosives at high densities can be predicted by using pressure-dependent rates determined from the low-density experiments as long as gas volume changes associated with bulk thermal expansion are also considered. In the current work, cookoff of the plastic-bonded explosives PBX 9501 and PBX 9502 is reviewed and new experimental work on LX-14 is presented. Reactive gases are formed inside these heated explosives causing large internal pressures. The pressure is released differently for each of these explosives. For PBX 9501, permeability is increased and internal pressure is relieved as the nitroplasticizer melts and decomposes. Internal pressure in PBX 9502 is relieved as the material is damaged by cracks and spalling. For LX-14, internal pressure is not relieved until the explosive thermally ignites. The current paper is an extension of work presented at the 26th ICDERS symposium [1].
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of the floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
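The central trick of the PEACH estimators, replacing the cubic-cost matrix inversion of the MMSE estimator with an L-degree matrix polynomial applied through matrix-vector products, can be illustrated with a truncated Neumann series. This is a sketch of the principle only: the paper optimizes the polynomial coefficients for minimum MSE, whereas the fixed Neumann weights below are a simplification, and the eigenvalue computation would in practice be replaced by a cheap spectral bound.

```python
import numpy as np

def polynomial_inverse_apply(A, y, L):
    """Approximate x = A^{-1} y with an L-degree matrix polynomial.

    Truncated Neumann series: A^{-1} ~ alpha * sum_{l=0}^{L} (I - alpha*A)^l,
    convergent for symmetric positive-definite A when 0 < alpha < 2/lambda_max.
    Each added degree needs only one matrix-vector product, so the cost is
    O(L * N^2) instead of the O(N^3) of an explicit inverse.
    """
    eigs = np.linalg.eigvalsh(A)
    alpha = 2.0 / (eigs[0] + eigs[-1])    # step minimizing the spectral radius
    term = y.copy()
    x = alpha * term
    for _ in range(L):
        term = term - alpha * (A @ term)  # multiply by (I - alpha*A)
        x = x + alpha * term
    return x
```

In the channel-estimation setting, A would play the role of the (regularized) pilot-signal covariance matrix and y the received pilot observation; optimizing the polynomial coefficients, as the authors do, further improves the MSE at the same degree.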
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
The reasons for betel-quid chewing scale: assessment of factor structure, reliability, and validity
2014-01-01
Background Despite the fact that betel-quid is one of the most commonly used psychoactive substances worldwide and a major risk-factor for head-and-neck cancer incidence and mortality globally, currently no standardized instrument is available to assess the reasons why individuals chew betel-quid. A measure to assess reasons for chewing betel-quid could help researchers and clinicians develop prevention and treatment strategies. In the current study, we sought to develop and evaluate a self-report instrument for assessing the reasons for chewing betel quid which contributes toward the goal of developing effective interventions to reduce betel quid chewing in vulnerable populations. Methods The current study assessed the factor structure, reliability and convergent validity of the Reasons for Betel-quid Chewing Scale (RBCS), a newly developed 10-item measure adapted from several existing “reasons for smoking” scales. The measure was administered to 351 adult betel-quid chewers in Guam. Results Confirmatory factor analysis of this measure revealed a three-factor structure: reinforcement, social/cultural, and stimulation. Further tests revealed strong support for the internal consistency and convergent validity of this three-factor measure. Conclusion The goal of designing an intervention to reduce betel-quid chewing necessitates an understanding of why chewers chew; the current study makes considerable contributions towards that objective. PMID:24889863
The reasons for betel-quid chewing scale: assessment of factor structure, reliability, and validity.
Little, Melissa A; Pokhrel, Pallav; Murphy, Kelle L; Kawamoto, Crissy T; Suguitan, Gil S; Herzog, Thaddeus A
2014-06-03
Despite the fact that betel-quid is one of the most commonly used psychoactive substances worldwide and a major risk-factor for head-and-neck cancer incidence and mortality globally, currently no standardized instrument is available to assess the reasons why individuals chew betel-quid. A measure to assess reasons for chewing betel-quid could help researchers and clinicians develop prevention and treatment strategies. In the current study, we sought to develop and evaluate a self-report instrument for assessing the reasons for chewing betel quid which contributes toward the goal of developing effective interventions to reduce betel quid chewing in vulnerable populations. The current study assessed the factor structure, reliability and convergent validity of the Reasons for Betel-quid Chewing Scale (RBCS), a newly developed 10-item measure adapted from several existing "reasons for smoking" scales. The measure was administered to 351 adult betel-quid chewers in Guam. Confirmatory factor analysis of this measure revealed a three-factor structure: reinforcement, social/cultural, and stimulation. Further tests revealed strong support for the internal consistency and convergent validity of this three-factor measure. The goal of designing an intervention to reduce betel-quid chewing necessitates an understanding of why chewers chew; the current study makes considerable contributions towards that objective.
Investigations on the Bundle Adjustment Results from Sfm-Based Software for Mapping Purposes
NASA Astrophysics Data System (ADS)
Lumban-Gaol, Y. A.; Murtiyoso, A.; Nugroho, B. H.
2018-05-01
Since its inception, aerial photography has been used for topographic mapping. Large-scale aerial photography contributed to the creation of many of the topographic maps around the world. In Indonesia, a 2013 government directive on spatial management has re-stressed the need for topographic maps, with aerial photogrammetry providing the main method of acquisition. However, the large-scale production of such maps is often constrained by budget. Today, SfM (Structure-from-Motion) offers quicker and less expensive solutions to this problem. However, considering the precision required for topographic missions, these solutions need to be assessed to determine whether they provide a sufficient level of accuracy. In this paper, the popular SfM-based software Agisoft PhotoScan is used to perform bundle adjustment on a set of large-scale aerial images. The aim of the paper is to compare its bundle adjustment results with those generated by more classical photogrammetric software, namely Trimble Inpho and ERDAS IMAGINE. Furthermore, in order to provide more bundle adjustment statistics for comparison, the Damped Bundle Adjustment Toolbox (DBAT) was also used to reprocess the PhotoScan project. Results show that PhotoScan results are less stable than those generated by the two photogrammetric software programmes. This translates to lower accuracy, which may impact the final photogrammetric product.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce their storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in the FV/VLAD are noise. Throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
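As a hedged sketch of the overall pipeline (not the authors' importance-sorting algorithm; here dimensions are ranked by a plain variance criterion, standing in for the unsupervised case), dimension selection followed by 1-bit quantization might look like:

```python
import numpy as np

def select_and_binarize(X, k):
    """Keep the k highest-scoring dimensions of X, then 1-bit quantize.

    X : (n_samples, n_dims) matrix of FV/VLAD-style features.
    Returns the indices of the kept dimensions and the sign-binarized
    features (1 bit per kept dimension instead of a float).
    """
    scores = X.var(axis=0)                # illustrative unsupervised importance
    keep = np.argsort(scores)[::-1][:k]   # indices of the top-k dimensions
    X_sel = X[:, keep]
    X_bin = (X_sel > 0).astype(np.uint8)  # sign quantization to {0, 1}
    return keep, X_bin
```

Noisy dimensions simply never make the top-k cut, which is the intuition behind preferring selection over compressing noise and signal together.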
NASA Astrophysics Data System (ADS)
Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E.
2016-04-01
The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979-2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons.
Analyzing large-scale proteomics projects with latent semantic indexing.
Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning
2008-01-01
Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leaving the realized value of these projects far below their potential. A prominent reason why published proteomics data are seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as latent semantic analysis holds great promise and is currently underexploited.
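Latent semantic analysis of such repository data boils down to a truncated SVD of a weighted identification matrix. The sketch below assumes a proteins-by-experiments count matrix and a TF-IDF-style weighting; both are illustrative choices, not the exact preprocessing used in the paper:

```python
import numpy as np

def lsa_embed(counts, rank):
    """Project a term-document-style count matrix into a latent space.

    counts : (n_terms, n_docs) matrix, e.g. proteins x experiments, where
             entry (i, j) counts identifications of protein i in experiment j.
    Returns rank-dimensional embeddings for the documents (columns).
    """
    # TF-IDF-style weighting damps ubiquitously identified proteins.
    tf = np.log1p(counts)
    df = (counts > 0).sum(axis=1, keepdims=True)
    idf = np.log(counts.shape[1] / np.maximum(df, 1))
    weighted = tf * idf
    # Truncated SVD: keep only the top singular directions; the discarded
    # directions carry most of the experiment-specific noise.
    U, s, Vt = np.linalg.svd(weighted, full_matrices=False)
    return (s[:rank, None] * Vt[:rank]).T   # shape (n_docs, rank)
```

Experiments (columns) that identified similar protein sets land close together in the latent space, which is what makes heterogeneous repository data comparable.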
Open-air direct current plasma jet: Scaling up, uniformity, and cellular control
NASA Astrophysics Data System (ADS)
Wu, S.; Wang, Z.; Huang, Q.; Lu, X.; Ostrikov, K.
2012-10-01
Atmospheric-pressure plasma jets are commonly used in many fields from medicine to nanotechnology, yet the issue of scaling the discharges up to larger areas without compromising the plasma uniformity remains a major challenge. In this paper, we demonstrate a homogeneous cold air plasma glow with a large cross-section generated by a direct current power supply. There is no risk of glow-to-arc transitions, and the plasma glow appears uniform regardless of the gap between the nozzle and the surface being processed. Detailed studies show that both the position of the quartz tube and the gas flow rate can be used to control the plasma properties. Further investigation indicates that the residual charges trapped on the inner surface of the quartz tube may be responsible for the generation of the air plasma plume with a large cross-section. The spatially resolved optical emission spectroscopy reveals that the air plasma plume is uniform as it propagates out of the nozzle. The remarkable improvement of the plasma uniformity is used to improve the bio-compatibility of a glass coverslip over a reasonably large area. This improvement is demonstrated by a much more uniform and effective attachment and proliferation of human embryonic kidney 293 (HEK 293) cells on the plasma-treated surface.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of the cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
Lateral fluid flow in a compacting sand-shale sequence: South Caspian basin.
Bredehoeft, J.D.; Djevanshir, R.D.; Belitz, K.R.
1988-01-01
The South Caspian basin contains both sands and shales that have pore-fluid pressures substantially in excess of hydrostatic fluid pressure. Pore-pressure data from the South Caspian basin demonstrate that large differences in excess hydraulic head exist between sand and shale. The data indicate that sands are acting as drains for overlying and underlying compacting shales and that fluid flows laterally through the sand on a regional scale from the basin interior northward to points of discharge. The major driving force for the fluid movement is shale compaction. We present a first-order mathematical analysis in an effort to test if the permeability of the sands required to support a regional flow system is reasonable. The results of the analysis suggest regional sand permeabilities ranging from 1 to 30 md, a range that seems reasonable. This result supports the thesis that lateral fluid flow is occurring on a regional scale within the South Caspian basin. If vertical conduits for flow exist within the basin, they are sufficiently impermeable and do not provide a major outlet for the regional flow system. The lateral fluid flow within the sands implies that the stratigraphic sequence is divided into horizontal units that are hydraulically isolated from one another, a conclusion that has important implications for oil and gas migration.-Authors
Immobile Robots: AI in the New Millennium
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Nayak, P. Pandurang
1996-01-01
A new generation of sensor-rich, massively distributed, autonomous systems is being developed that has the potential for profound social, environmental, and economic change. These include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life support systems, and reconfigurable traffic systems, to highlight but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. To accomplish this, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves in order to survive decades of autonomous operations. Achieving these large-scale modeling and configuration tasks will require a tight coupling between the higher level coordination function provided by symbolic reasoning, and the lower level autonomic processes of adaptive estimation and control. To be economically viable, they will need to be programmable purely through high level compositional models. Self-modeling and self-configuration, coordinating autonomic functions through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous systems architecture that is taking us into the New Millennium.
Stone, Amanda M; Merlo, Lisa J
2011-02-01
Mental illness stigma remains a significant barrier to treatment. However, the recent increase in the medical and nonmedical use of prescription psychiatric medications among college students seems to contradict this phenomenon. This study explored students' attitudes and experiences related to psychiatric medications, as well as correlates of psychiatric medication misuse (i.e., attitudes toward mental illness and beliefs about the efficacy of psychiatric medications). Data were collected anonymously via self-report questionnaires from April 2008 to February 2009. Measures included the Michigan Alcoholism Screening Test, the Drug Abuse Screening Test, Day's Mental Illness Stigma Scale, the Attitudes Toward Psychiatric Medication scale, and the Psychiatric Medication Attitudes Scale. Participants included 383 university students (59.2% female), recruited on the campus of a large state university or through online classes offered through the same university. High rates of psychiatric medication misuse were shown (13.8%) when compared to rates of medical use (6.8%), and students with prescriptions for psychiatric drugs were also more likely to be misusers (χ² = 20.60, P < .001). Psychiatric medication misusers reported less stigmatized beliefs toward mental illness, including lower anxiety around the mentally ill (t = 3.26, P < .001), as well as more favorable attitudes toward psychiatric medications (t = 2.78, P < .01) and stronger beliefs in the potential for recovery from mental illness (t = -2.11, P < .05). Students with more stigmatized beliefs had greater concerns about psychiatric medications and less favorable beliefs regarding their effectiveness. Reasons for misuse varied by medication class, with 57.1% of stimulant misusers noting help with studying as their primary reason for use and 33.3% of benzodiazepine misusers noting attempts to get high or "party" as their primary reason for misuse.
Results suggest the need for improved education regarding the nature of mental illness, the appropriate use of psychiatric medications, and the potential consequences associated with abuse of these potent drugs.
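The group comparison reported above (a χ² test of prescription status against misuse status) can be illustrated with a Pearson χ² statistic on a 2×2 contingency table. The counts below are hypothetical placeholders, not the study's data, so the statistic will not match the reported χ² = 20.60.

```python
# Pearson chi-square statistic for a 2x2 contingency table (pure stdlib).
# Rows: has prescription / no prescription; columns: misuser / non-misuser.
# The counts are hypothetical placeholders, not the study's data.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

observed = [[20, 30],   # prescription holders: misusers, non-misusers
            [33, 300]]  # no prescription:      misusers, non-misusers
print(round(chi_square_2x2(observed), 2))
```

With 1 degree of freedom, a statistic above 3.84 corresponds to P < .05.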
Upscaling Bedrock Erosion Laws from the Point to the Patch and from the Event to the Year
NASA Astrophysics Data System (ADS)
Beer, A. R.; Turowski, J. M.
2017-12-01
Bedrock erosion depends on the interactions between the bedload tools and cover effects. However, it is unclear (i) how well long-term calibrations of existing erosion models can predict individual erosion events, and (ii) whether at-a-point event calibrations can be upscaled in space and time. Here, we evaluate the performance of at-a-point calibrated erosion models by scaling their erosional efficiency coefficients (k-factors). We use continuous measurements of water discharge and bedload transport at 1-minute resolution, supplemented by repeated sub-millimeter-resolution spatial erosion surveys of a concrete slab in a small Swiss pre-alpine stream. Our results confirm the linear dependency of bedrock abrasion on sediment flux under sediment-starved conditions, integrated over space (the 0.2 m² slab surface) and time (20 months). The predictive quality of the commonly applied unit stream power (USP) model is strongly sensitive to the bedload transport distribution, whereas the bedload-dependent tools-only model yields more reasonable results. Applying the fitted mean model k-factors to a 16-year, 1-minute-resolution time series of discharge and bedload transport shows that the excess USP model (EUSP, which includes a discharge threshold for bedload transport) generally predicts cumulative erosion reasonably well. For exceptional events, however, the EUSP model fails to predict the resulting large erosion rates. Hence, for sediment-starved conditions, event-based erosion model calibration can be applied over larger spatio-temporal scales with stationary k-factors if a discharge threshold for sediment transport is taken into account. The EUSP model is thus a useful surrogate for predicting long-term erosion driven by average erosive events, but it underpredicts large, highly erosive events. This is because water discharge does not account for the non-linearity in sediment availability (e.g., due to sudden release of interlocked sediment from the streambed) or in grain impact energies on the bedrock (i.e., large grain impacts dominate total erosion), which are the main drivers of a bedrock channel's morphology.
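The excess unit stream power (EUSP) formulation described above can be sketched as a threshold-linear model: each time step erodes in proportion to discharge above a bedload-transport threshold. The coefficient `k` and the threshold value below are hypothetical illustration values, not the study's calibrated k-factors.

```python
# Sketch of an excess-unit-stream-power (EUSP) erosion model: erosion is
# proportional to discharge above a threshold for bedload transport.
# k and q_threshold are hypothetical values, not the study's calibration.

def eusp_erosion(discharge_series, k=1e-6, q_threshold=0.5):
    """Cumulative erosion (arbitrary units) from a discharge time series.

    Each sample contributes k * (q - q_threshold) when discharge q exceeds
    the bedload-transport threshold, and nothing otherwise.
    """
    return sum(k * (q - q_threshold) for q in discharge_series if q > q_threshold)

# Hypothetical 1-minute discharge record (m^3/s): a long quiet spell plus a
# short flood. Only the above-threshold minutes erode the bed, which is why
# a model calibrated on average events can miss exceptional ones.
quiet = [0.2] * 1000
flood = [2.0] * 60
print(eusp_erosion(quiet + flood))
```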
Decadal opportunities for space architects
NASA Astrophysics Data System (ADS)
Sherwood, Brent
2012-12-01
A significant challenge for the new field of space architecture is the dearth of project opportunities. Yet every year more young professionals express interest in entering the field. This paper derives projections that bound the number, type, and range of global development opportunities that may reasonably be expected over the next few decades for human space flight (HSF) systems, so those interested in the field can benchmark their goals. Four categories of HSF activity are described: human Exploration of solar system bodies; human Servicing of space-based assets; large-scale development of space Resources; and Breakout of self-sustaining human societies into the solar system. A progressive sequence of capabilities for each category starts with its earliest feasible missions and leads toward its full expression. The four sequences are compared in scale, distance from Earth, and readiness. Scenarios hybridize the most synergistic features of the four sequences for comparison to status quo, government-funded HSF program plans. Finally, qualitative, decadal, order-of-magnitude estimates are derived for system development needs, and hence for opportunities for space architects. Government investment in human planetary exploration is the weakest generator of space architecture work. Conversely, the strongest generator is a combination of three market drivers: (1) commercial passenger travel in low Earth orbit; (2) in parallel, government extension of HSF capability to GEO; both followed by (3) scale-up demonstration of end-to-end solar power satellites in GEO. The rich end of this scale affords space architecture opportunities that are more diverse, complex, large-scale, and sociologically challenging than traditional exploration vehicle cabins and habitats.
Foundational perspectives on causality in large-scale brain networks
NASA Astrophysics Data System (ADS)
Mannino, Michael; Bressler, Steven L.
2015-12-01
A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. 
Typically, these techniques quantify the statistical likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.
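The PC notion discussed above, that a cause raises the probability of occurrence of its effect, can be made concrete with a simple counting check over joint observations: C is a prima facie probabilistic cause of E when P(E | C) > P(E). The binary "activity" series below are hypothetical, not neural recordings.

```python
# Minimal probabilistic-causality check in the sense discussed above:
# C is a (prima facie) probabilistic cause of E if P(E | C) > P(E).
# The binary observations below are hypothetical.

def prob(events):
    """Empirical probability of a binary event series."""
    return sum(events) / len(events)

def cond_prob(effect, cause):
    """P(effect = 1 | cause = 1) from paired binary observations."""
    hits = [e for c, e in zip(cause, effect) if c == 1]
    return sum(hits) / len(hits)

def raises_probability(cause, effect):
    """True when conditioning on the cause raises the effect's probability."""
    return cond_prob(effect, cause) > prob(effect)

# Hypothetical spiking of two neuronal populations across 10 time bins.
pop_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
pop_b = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(raises_probability(pop_a, pop_b))
```

Note that this check is symmetric-friendly: it can hold in both directions at once, consistent with the mutual causality the review emphasizes.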
ERIC Educational Resources Information Center
Callens, Andy M.; Atchison, Timothy B.; Engler, Rachel R.
2009-01-01
Instructions for the Matrix Reasoning Test (MRT) of the Wechsler Adult Intelligence Scale-Third Edition were modified by explicitly stating that the subtest was untimed or that a per-item time limit would be imposed. The MRT was administered within one of four conditions: with (a) standard administration instructions, (b) explicit instructions…
Berendt, Bettina; Preibusch, Sören
2017-06-01
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for discrimination-aware data mining and fairness-aware data mining aim at keeping decision processes supported by information technology free from unjust grounds. However, these formal approaches alone are not sufficient to solve the problem. In the present article, we describe reasons why discrimination with data can and typically does arise through the combined effects of human and machine-based reasoning, and argue that this requires a deeper understanding of the human side of decision-making with data mining. We describe results from a large-scale human-subjects experiment that investigated such decision-making, analyzing the reasoning that participants reported during their task to assess whether a loan request should or would be granted. We derive data protection by design strategies for making decision-making discrimination-aware in an accountable way, grounding these requirements in the accountability principle of the European Union General Data Protection Regulation, and outline how their implementations can integrate algorithmic, behavioral, and user interface factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely-sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board NASA's Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a regression tree approach. The predictive model was trained and validated using NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE reasonably well at the site level. We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day period in 2005 using spatially-explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets for large areas.
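The regression-tree approach described above can be sketched at its smallest scale: a single-split "stump" that predicts NEE from one remotely-sensed predictor by choosing the threshold that minimizes the squared error of the two leaf means. The predictor name and all values below are hypothetical, not AmeriFlux or MODIS data.

```python
# Smallest possible regression tree (a one-split "stump"): pick the
# threshold on a single predictor (here, a hypothetical vegetation index)
# that minimizes the total squared error of the two leaf means.

def fit_stump(x, y):
    """Return (threshold, left_mean, right_mean) minimizing total SSE."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue  # a split must leave data on both sides
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((yi - lm) ** 2 for yi in left)
               + sum((yi - rm) ** 2 for yi in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def predict(stump, xi):
    """Predict with the fitted stump: one of the two leaf means."""
    t, lm, rm = stump
    return lm if xi <= t else rm

# Hypothetical tower data: vegetation index vs. NEE (negative = CO2 uptake).
evi = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
nee = [1.0, 0.8, 0.9, -1.1, -1.0, -1.2]
stump = fit_stump(evi, nee)
print(stump)
```

A full regression tree simply applies this split recursively within each leaf; the upscaling step then evaluates the fitted tree on gridded predictor values cell by cell.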
Why do Scale-Free Networks Emerge in Nature? From Gradient Networks to Transport Efficiency
NASA Astrophysics Data System (ADS)
Toroczkai, Zoltan
2004-03-01
It has recently been recognized [1,2,3] that a large number of complex networks are scale-free (having a power-law degree distribution). Examples include citation networks [4], the internet [5], the world-wide-web [6], cellular metabolic networks [7], protein interaction networks [8], the sex-web [9] and alliance networks in the U.S. biotechnology industry [10]. The existence of scale-free networks in such diverse systems suggests that there is a simple underlying common reason for their development. Here, we propose that scale-free networks emerge because they ensure efficient transport of some entity. We show that for flows generated by gradients of a scalar "potential" distributed on a network, non-scale-free networks, e.g., random graphs [11], become maximally congested, while scale-free networks ensure efficient transport in the large-network-size limit. [1] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002). [2] M.E.J. Newman, SIAM Rev. 45, 167 (2003). [3] S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW, Oxford Univ. Press, Oxford, 2003. [4] S. Redner, Eur. Phys. J. B 4, 131 (1998). [5] M. Faloutsos, P. Faloutsos and C. Faloutsos, Comp. Comm. Rev. 29, 251 (1999). [6] R. Albert, H. Jeong, and A.-L. Barabási, Nature 401, 130 (1999). [7] H. Jeong et al., Nature 407, 651 (2000). [8] H. Jeong, S. Mason, A.-L. Barabási and Z. N. Oltvai, Nature 411, 41 (2001). [9] F. Liljeros et al., Nature 411, 907 (2000). [10] W. W. Powell, D. R. White, K. W. Koput and J. Owen-Smith, Am. J. Soc., in press. [11] B. Bollobás, Random Graphs, Second Edition, Cambridge University Press (2001).
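The gradient-flow picture invoked above can be simulated directly: assign each node a random scalar potential, let it direct flow to its highest-potential neighbor, and proxy congestion by the fraction of nodes that receive no flow. The random graph and potentials below are a toy stand-in, not the paper's model or its congestion measure in detail.

```python
import random

# Toy gradient network in the sense of the abstract: each node directs flow
# to the neighbor with the highest scalar potential (itself, if it is a
# local maximum). Congestion is proxied here by the fraction of nodes that
# receive no incoming flow. The random graph is a hypothetical stand-in.

def gradient_network(adjacency, potential):
    """Map each node to the neighbor (or itself) with maximal potential."""
    return {
        node: max(neighbors | {node}, key=lambda v: potential[v])
        for node, neighbors in adjacency.items()
    }

def fraction_idle(adjacency, potential):
    """Fraction of nodes that receive no incoming gradient edge."""
    targets = set(gradient_network(adjacency, potential).values())
    return 1 - len(targets) / len(adjacency)

random.seed(0)
n = 200
# Erdos-Renyi-style random graph: the congestion-prone case per the abstract.
adjacency = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.05:
            adjacency[i].add(j)
            adjacency[j].add(i)
potential = {i: random.random() for i in range(n)}
print(round(fraction_idle(adjacency, potential), 2))
```

On such a random graph, most nodes funnel flow into a minority of high-potential receivers, which is the congestion mechanism the abstract contrasts with scale-free topologies.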
NASA Astrophysics Data System (ADS)
Smith, Trenton John
Pre-service secondary science teachers (individuals training to teach middle or high school science), along with both Honors and general first-year undergraduate biology students, were investigated to determine how they reason about and understand two core topics in biology: matter and energy flow through biological systems, and evolution by natural selection. Diagnostic Question Clusters were used to assess student understanding of the processes by which matter and energy flow through biological systems across spatial scales, from the atomic-molecular to the ecosystem level. Key concepts and identified misconceptions about evolution by natural selection were examined using the multiple-choice Concept Inventory of Natural Selection (CINS) and the open-response Assessing COntextual Reasoning about Natural Selection (ACORNS) instrument. Pre-service teachers used more scientifically based reasoning than the undergraduate students on the topics of matter and energy flow. The Honors students used more scientific and less improper informal reasoning than the general undergraduates on matter and energy flow. Honors students performed best on both the CINS and ACORNS items on natural selection, while the general undergraduates scored lowest on the CINS and the pre-service teachers scored lowest on the ACORNS. Overall, a large proportion of students, even among future secondary science teachers, do not consistently reason scientifically about these two important concepts. My findings are similar to those of other published studies using the same assessments. In general, very few biology students at the college level use scientific reasoning that exhibits deep conceptual understanding. One reason for this could be that instructors fail to recognize deficiencies in student reasoning; they assume their students use principle-based reasoning.
Another reason could be that principle-based reasoning is very difficult and that our teaching approaches in college promote memorization of content rather than conceptual change. My findings are significant for the development of concept inventories in biology education, as well as for instructors of students at all levels of the biology curriculum and of future science teachers.
Analysis of BF Hearth Reasonable Cooling System Based on the Water Dynamic Characteristics
NASA Astrophysics Data System (ADS)
Zuo, Haibin; Jiao, Kexin; Zhang, Jianliang; Li, Qian; Wang, Cui
A rational cooling water system is essential for a long blast furnace campaign life. In this paper, heat transfer during different furnace periods and under different furnace conditions was analysed on the basis of the cooling water quality characteristics, and the hydrodynamic reasons for above-normal heat flux were examined. The results showed that the presence of a vapour film and of scale deposits significantly influenced hearth heat transfer and accelerated brick lining erosion. The water dynamic characteristics within a parallel inner pipe, and among the pipes, were the main reason for abnormal heat flux and film boiling. To maintain a reasonable cooling water flow, gas-film formation and scale deposition should be controlled, and energy saving should also be considered.
NASA Astrophysics Data System (ADS)
Pillai, Prasanth A.; Aher, Vaishali R.
2018-01-01
Intraseasonal oscillation (ISO), which appears as "active" and "break" spells of rainfall, is an important component of the Indian summer monsoon (ISM). The present study investigates the potential of the new National Centers for Environmental Prediction (NCEP) climate forecast system version 2 (CFSv2) in simulating the ISO, with emphasis on its interannual variability (IAV) and its possible role in the seasonal mean rainfall. The present analysis shows that the spatial distribution of CFSv2 rainfall differs noticeably from observations on both ISO and IAV time scales. The active-break cycle in CFSv2 evolves similarly during both strong and weak years. Despite a reasonable El Niño Southern Oscillation (ENSO)-monsoon teleconnection in the model, the overestimated Arabian Sea (AS) sea surface temperature (SST)-convection relationship hinders the large-scale influence of ENSO over the ISM region and adjacent oceans. The ISO-scale convection over the AS and the Bay of Bengal (BoB) contributes noticeably to the seasonal mean rainfall, opposing the influence of boundary forcing in these areas. At the same time, the overwhelming contribution of the ISO component over the AS to the seasonal mean modifies the effect of slowly varying boundary forcing on the large-scale summer monsoon. The results underline that, along with the correct simulation of the monsoon ISO, its IAV and relationship with the boundary forcing also need to be well captured in coupled models for the accurate simulation of seasonal mean anomalies of the monsoon and its teleconnections.
Long-wavelength microinstabilities in toroidal plasmas*
NASA Astrophysics Data System (ADS)
Tang, W. M.; Rewoldt, G.
1993-07-01
Realistic kinetic toroidal eigenmode calculations have been carried out to support a proper assessment of the influence of long-wavelength microturbulence on transport in tokamak plasmas. In order to efficiently evaluate large-scale kinetic behavior extending over many rational surfaces, significant improvements have been made to a toroidal finite element code used to analyze the fully two-dimensional (r,θ) mode structures of trapped-ion and toroidal ion temperature gradient (ITG) instabilities. It is found that even at very long wavelengths, these eigenmodes exhibit a strong ballooning character with the associated radial structure relatively insensitive to ion Landau damping at the rational surfaces. In contrast to the long-accepted picture that the radial extent of trapped-ion instabilities is characterized by the ion-gyroradius-scale associated with strong localization between adjacent rational surfaces, present results demonstrate that under realistic conditions, the actual scale is governed by the large-scale variations in the equilibrium gradients. Applications to recent measurements of fluctuation properties in Tokamak Fusion Test Reactor (TFTR) [Plasma Phys. Controlled Nucl. Fusion Res. (International Atomic Energy Agency, Vienna, 1985), Vol. 1, p. 29] L-mode plasmas indicate that the theoretical trends appear consistent with spectral characteristics as well as rough heuristic estimates of the transport level. Benchmarking calculations in support of the development of a three-dimensional toroidal gyrokinetic code indicate reasonable agreement with respect to both the properties of the eigenfunctions and the magnitude of the eigenvalues during the linear phase of the simulations of toroidal ITG instabilities.
Montagu, Dominic; Yamey, Gavin; Visconti, Adam; Harding, April; Yoong, Joanne
2011-01-01
Background In 2008, over 300,000 women died during pregnancy or childbirth, mostly in poor countries. While there are proven interventions to make childbirth safer, there is uncertainty about the best way to deliver these at large scale. In particular, there is currently a debate about whether maternal deaths are more likely to be prevented by delivering effective interventions through scaled up facilities or via community-based services. To inform this debate, we examined delivery location and attendance and the reasons women report for giving birth at home. Methodology/Principal Findings We conducted a secondary analysis of maternal delivery data from Demographic and Health Surveys in 48 developing countries from 2003 to the present. We stratified reported delivery locations by wealth quintile for each country and created weighted regional summaries. For sub-Saharan Africa (SSA), where death rates are highest, we conducted a subsample analysis of motivations for giving birth at home. In SSA, South Asia, and Southeast Asia, more than 70% of all births in the lowest two wealth quintiles occurred at home. In SSA, 54.1% of the richest women reported using public facilities compared with only 17.7% of the poorest women. Among home births in SSA, 56% in the poorest quintile were unattended while 41% were attended by a traditional birth attendant (TBA); 40% in the wealthiest quintile were unattended, while 33% were attended by a TBA. Seven per cent of the poorest women reported cost as a reason for not delivering in a facility, while 27% reported lack of access as a reason. The most common reason given by both the poorest and richest women for not delivering in a facility was that it was deemed “not necessary” by a household decision maker. Among the poorest women, “not necessary” was given as a reason by 68% of women whose births were unattended and by 66% of women whose births were attended. Conclusions In developing countries, most poor women deliver at home. 
This suggests that, at least in the near term, efforts to reduce maternal deaths should prioritize community-based interventions aimed at making home births safer. PMID:21386886
Improving STEM Student Learning Outcomes with GIS
NASA Astrophysics Data System (ADS)
Montgomery, W. W.
2013-12-01
Longitudinal data collection, initiated a decade ago as part of a successful NSF-CCLI grant proposal, has resulted in a large and growing sample (200+) of students who report on their perceptions of self-improvement in Technology, Critical Thinking, and Quantitative Reasoning proficiencies upon completion of an introductory (200-level) GIS course at New Jersey City University, a Hispanic-Serving and Minority Institution in Jersey City, NJ. Results from student satisfaction surveys indicate that, not surprisingly, 80% of respondents report improved confidence in Technology Literacy. Critical Thinking proficiency is judged to be significantly improved by 60% of respondents. On the other hand, Quantitative Reasoning proficiency confidence is improved in only 30% of students. This latter finding has prompted the instructor to search for ways of embedding quantitative reasoning into the course that are more easily recognizable to students, as it is obvious to any GIS professional that an enormous amount of quantitative reasoning is associated with this technology. A second post-course questionnaire asks students to rate themselves in these STEM proficiency areas using rubrics. Results mirror those from the satisfaction surveys. On a 5-point Likert scale, students tend to see themselves improving about one letter grade, on average, in each proficiency area. The self-evaluation rubrics are reviewed by the instructor and judged to be accurate for about 75% of respondents.
Masnick, Max; Leekha, Surbhi
2015-07-01
We assessed frequency and predictors of seasonal influenza vaccination acceptance among inpatients at a large tertiary referral hospital, as well as reasons for vaccination refusal. Over 5 seasons, >60% of patients unvaccinated on admission refused influenza vaccination while hospitalized; "believes not at risk" was the reason most commonly given.
McCurdy, M; Bellows, A; Deng, D; Leppert, M; Mahone, E; Pritchard, A
2015-01-01
Reliable and valid screening and assessment tools are necessary to identify children at risk for neurodevelopmental disabilities who may require additional services. This study evaluated the test-retest reliability of the Capute Scales in a high-risk sample, hypothesizing adequate reliability across 6- and 12-month intervals. Capute Scales scores (N = 66) were collected via retrospective chart review from a NICU follow-up clinic within a large urban medical center spanning three age-ranges: 12-18, 19-24, and 25-36 months. On average, participants were classified as very low birth weight and premature. Reliability of the Capute Scales was evaluated with intraclass correlation coefficients across length of test-retest interval, age at testing, and degree of neonatal complications. The Capute Scales demonstrated high reliability, regardless of length of test-retest interval (ranging from 6 to 14 months) or age of participant, for all index scores, including overall Developmental Quotient (DQ), language-based skill index (CLAMS) and nonverbal reasoning index (CAT). Linear regressions revealed that greater neonatal risk was related to poorer test-retest reliability; however, reliability coefficients remained strong. The Capute Scales afford clinicians a reliable and valid means of screening and assessing for neurodevelopmental delay within high-risk infant populations.
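The intraclass correlation coefficients used above to quantify test-retest reliability can be sketched with the one-way random-effects form, ICC(1,1) = (MSB − MSW) / (MSB + (k − 1)·MSW), computed from an n-subjects × k-sessions score table. The score pairs below are hypothetical, not clinic data, and the study may have used a different ICC variant.

```python
# One-way random-effects intraclass correlation, ICC(1,1), from an
# n-subjects x k-sessions score table. Scores below are hypothetical.

def icc_oneway(scores):
    """ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-subjects and within-subjects mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical DQ scores at two visits ~6-12 months apart for 5 infants.
dq = [[85, 88], [70, 72], [95, 91], [60, 63], [78, 80]]
print(round(icc_oneway(dq), 3))
```

Values near 1 indicate that between-subject differences dominate session-to-session noise, i.e., high test-retest reliability.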
Gender Differences in Reasons to Quit Smoking among Adolescents
ERIC Educational Resources Information Center
Struik, Laura L.; O'Loughlin, Erin K.; Dugas, Erika N.; Bottorff, Joan L.; O'Loughlin, Jennifer L.
2014-01-01
It is well established that many adolescents who smoke want to quit, but little is known about why adolescents want to quit and if reasons to quit differ across gender. The objective of this study was to determine if reasons to quit smoking differ in boys and girls. Data on the Adolescent Reasons for Quitting (ARFQ) scale were collected in mailed…
SCALES: SEVIRI and GERB CaL/VaL area for large-scale field experiments
NASA Astrophysics Data System (ADS)
Lopez-Baeza, Ernesto; Belda, Fernando; Bodas, Alejandro; Crommelynck, Dominique; Dewitte, Steven; Domenech, Carlos; Gimeno, Jaume F.; Harries, John E.; Jorge Sanchez, Joan; Pineda, Nicolau; Pino, David; Rius, Antonio; Saleh, Kauzar; Tarruella, Ramon; Velazquez, Almudena
2004-02-01
The main objective of the SCALES Project is to exploit the unique opportunity offered by the recent launch of the first European METEOSAT Second Generation geostationary satellite (MSG-1) to generate and validate new radiation budget and cloud products provided by the GERB (Geostationary Earth Radiation Budget) instrument. SCALES' specific objectives are: (i) definition and characterization of a large, reasonably homogeneous area compatible with the GERB pixel size (around 50 × 50 km²); (ii) validation of GERB TOA radiances and fluxes derived by means of angular distribution models; (iii) development of algorithms to estimate surface net radiation from GERB TOA measurements; and (iv) development of accurate methodologies to measure radiation flux divergence and analyze its influence on the thermal regime and dynamics of the atmosphere, also using GERB data. SCALES is highly innovative: it focuses on a new and unique space instrument and develops a new validation methodology specific to low-resolution sensors, based on the use of a robust reference meteorological station (the Valencia Anchor Station) around which 3D high-resolution meteorological fields are obtained from the MM5 meteorological model. During the 1st GERB Ground Validation Campaign (18th-24th June, 2003), the CERES instruments on Aqua and Terra provided additional radiance measurements to support validation efforts, operating in the PAPS (Programmable Azimuth Plane Scanning) mode focused on the station. Ground measurements were taken by lidar, sun photometer, GPS precipitable water content, radiosounding ascents, the Anchor Station's operational meteorological measurements at 2 m and 15 m, the four radiation components at 2 m, and mobile stations to characterize the large area. In addition, measurements during LANDSAT overpasses on June 14th and 30th were also performed. These activities were carried out within the GIST (GERB International Science Team) framework, during the GERB Commissioning Period.
NASA Astrophysics Data System (ADS)
Mccoll, K. A.; Van Heerwaarden, C.; Katul, G. G.; Gentine, P.; Entekhabi, D.
2016-12-01
While the breakdown in similarity between turbulent transport of heat and momentum (the Reynolds analogy) in the atmospheric surface layer (ASL) under unstably stratified conditions is not disputed, the causes of this breakdown remain the subject of some debate. One hypothesized reason is a change in the topology of the coherent structures and in how they transport heat and momentum differently. As instability increases, coherent structures that are confined to the near-wall region transition to thermal plumes spanning the entire boundary layer depth. Monin-Obukhov Similarity Theory (MOST), which hypothesizes that only local length scales play a role in ASL turbulent transport, implicitly assumes that thermal plumes and other large-scale structures are inactive (i.e., they do not contribute to turbulent transport despite their large energy content). Widely adopted mixing-length models for the ASL also rest on this assumption. The difficulty of characterizing low-wavenumber turbulent motions with field observations motivates the use of high-resolution Direct Numerical Simulations (DNS), which are free from sub-grid-scale parameterizations and ad hoc assumptions near the boundary. Despite the low Reynolds number, mild stratification and idealized geometry, DNS-estimated MOST functions are consistent with field experiments, as are key low-frequency features of the vertical velocity variance and buoyancy spectra. Parsimonious spectral models for the MOST stability correction functions for momentum (φm) and heat (φh) are derived based on idealized vertical velocity variance and buoyancy spectra fit to the corresponding DNS spectra. For φm, a spectral model requiring a local length scale (evolving with local stability conditions) that matches DNS and field observations is derived. In contrast, for φh, the aforementioned model is substantially biased unless contributions from larger length scales are also included.
These results suggest that ASL heat transport cannot be precisely MO-similar, and that the breakdown of the Reynolds analogy is at least partially caused by the influence of large eddies on turbulent heat transport.
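For reference, the MOST stability correction functions discussed in this abstract are conventionally defined as follows (a standard textbook formulation, not taken from the paper itself):

```latex
\[
\phi_m(\zeta) = \frac{\kappa z}{u_*}\,\frac{\partial \bar{u}}{\partial z},
\qquad
\phi_h(\zeta) = \frac{\kappa z}{\theta_*}\,\frac{\partial \bar{\theta}}{\partial z},
\qquad
\zeta = \frac{z}{L},
\qquad
L = -\frac{u_*^3}{\kappa\,(g/\bar{\theta})\,\overline{w'\theta'}},
\]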
NASA Astrophysics Data System (ADS)
Duro, Javier; Iglesias, Rubén; Blanco, Pablo; Albiol, David; Koudogbo, Fifamè
2015-04-01
The Wide Area Product (WAP) is a new interferometric product developed to provide measurements over large regions. Persistent Scatterer Interferometry (PSI) has largely proven its robust and precise performance in measuring ground surface deformation across different application domains. In this context, however, accurate displacement estimation over large-scale areas (more than 10,000 km2) characterized by low-magnitude motion gradients (3-5 mm/year), such as those induced by inter-seismic or Earth tidal effects, still remains an open issue. The main reason for this is the inclusion of low-quality and more distant persistent scatterers in order to bridge low-quality areas, such as water bodies, crop areas and forested regions. This leads to spatial propagation of errors in the PSI integration process, poor estimation and compensation of the Atmospheric Phase Screen (APS), and difficulty in handling the residual long-wavelength phase patterns originated by inaccuracies in the orbit state vectors. Research work on generating a Wide Area Product of ground motion in preparation for the Sentinel-1 mission has been conducted in the last stages of Terrafirma as well as in other research programs. These developments propose technological updates for maintaining precision in large-scale PSI analysis. Some of the updates are based on the use of external information, such as meteorological models, and on the employment of GNSS data for improved calibration of large-area measurements. Covering wide regions usually implies processing over areas whose land use is chiefly livestock, horticulture, urbanization and forest. This represents an important challenge for providing continuous InSAR measurements and requires advanced phase-filtering strategies to enhance the coherence.
The advanced PSI processing has been carried out over several areas, allowing a large-scale analysis of tectonic patterns and of motion caused by multiple hazards such as volcanic activity, landslides and flooding. Several examples of the application of the PSI WAP to wide regions for measuring ground displacements related to different types of hazards, natural and human-induced, will be presented. The InSAR processing approach for measuring accurate movements at local and large scales, enabling multi-hazard interpretation studies, will also be discussed. The test areas will show deformations related to active fault systems, landslides on mountain slopes, ground compaction over underlying aquifers and movements in volcanic areas.
ART-Ada: An Ada-based expert system tool
NASA Technical Reports Server (NTRS)
Lee, S. Daniel; Allen, Bradley P.
1990-01-01
The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in an increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada expert system tool, is explained. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom program and the U.S. Air Force.
ART-Ada: An Ada-based expert system tool
NASA Technical Reports Server (NTRS)
Lee, S. Daniel; Allen, Bradley P.
1991-01-01
The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada expert system tool, is described. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.
NASA Astrophysics Data System (ADS)
de Jong, Maarten F.; Baptist, Martin J.; van Hal, Ralf; de Boois, Ingeborg J.; Lindeboom, Han J.; Hoekstra, Piet
2014-06-01
For the seaward harbour extension of the Port of Rotterdam in the Netherlands, approximately 220 million m3 of sand was extracted between 2009 and 2013. In order to decrease the surface area of direct impact, the authorities permitted deep sand extraction, down to 20 m below the seabed. The biological and physical impacts of large-scale and deep sand extraction are still being investigated and remain largely unknown. For this reason, we investigated the colonization of demersal fish in a deep sand extraction site. Two sandbars were artificially created by selective dredging, copying naturally occurring meso-scale bedforms, to increase habitat heterogeneity and post-dredging benthic and demersal fish species richness and biomass. Significant differences in demersal fish species assemblages in the sand extraction site were associated with variables such as water depth, median grain size, fraction of very fine sand, biomass of white furrow shell (Abra alba) and time after the cessation of sand extraction. Large quantities of undigested crushed white furrow shell fragments were found in all stomachs and intestines of plaice (Pleuronectes platessa), indicating that it is an important prey item. One and two years after cessation, a significant 20-fold increase in demersal fish biomass was observed in deep parts of the extraction site. In the troughs of a landscaped sandbar, however, a significant drop in biomass down to reference levels and a significant change in species assemblage were observed two years after cessation. The fish assemblage at the crests of the sandbars differed significantly from that in the troughs, with tub gurnard (Chelidonichthys lucerna) being a Dufrêne-Legendre indicator species of the crests. This is a first indication of the applicability of landscaping techniques to induce heterogeneity of the seabed, although it remains difficult to draw a strong conclusion due to the lack of replication in the experiment.
A new ecological equilibrium had not been reached after two years, since biotic and abiotic variables were still adapting. To understand the final impact of deep and large-scale sand extraction on demersal fish, we recommend monitoring over a longer period of at least six years.
NASA Space Flight Vehicle Fault Isolation Challenges
NASA Technical Reports Server (NTRS)
Bramon, Christopher; Inman, Sharon K.; Neeley, James R.; Jones, James V.; Tuttle, Loraine
2016-01-01
The Space Launch System (SLS) is the new NASA heavy lift launch vehicle and is scheduled for its first mission in 2017. The goal of the first mission, which will be uncrewed, is to demonstrate the integrated system performance of the SLS rocket and spacecraft before a crewed flight in 2021. SLS has many of the same logistics challenges as any other large scale program. Common logistics concerns for SLS include integration of geographically separated discrete programs, multiple prime contractors with distinct and different goals, schedule pressures and funding constraints. However, SLS also faces unique challenges. The new program is a confluence of new and heritage hardware, with heritage hardware constituting seventy-five percent of the program. This unique approach to design makes logistics concerns such as testability of the integrated flight vehicle especially problematic. The cost of fully automated diagnostics can be completely justified for a large fleet, but not so for a single flight vehicle. Fault detection is mandatory to assure the vehicle is capable of a safe launch, but fault isolation is another issue. SLS has considered various methods for fault isolation which can provide a reasonable balance between adequacy, timeliness and cost. This paper will address the analyses and decisions the NASA Logistics engineers are making to mitigate risk while providing a reasonable testability solution for fault isolation.
Jimena: efficient computing and system state identification for genetic regulatory networks.
Karl, Stefan; Dandekar, Thomas
2013-10-11
Boolean networks capture the switching behavior of many naturally occurring regulatory networks. For semi-quantitative modeling, interpolation between ON and OFF states is necessary. The high-degree polynomial interpolation of Boolean genetic regulatory networks (GRNs) in cellular processes such as apoptosis or proliferation allows the modeling of a wider range of node interactions than continuous activator-inhibitor models, but suffers from scaling problems for networks containing nodes with more than ~10 inputs. Many GRNs from the literature or from new gene expression experiments exceed these limits, so a new approach was developed. (i) As part of our new GRN simulation framework Jimena we introduce and set up Boolean-tree-based data structures; (ii) the corresponding algorithms greatly expedite the calculation of the polynomial interpolation in almost all cases, thereby expanding the range of networks which can be simulated by this model in reasonable time. (iii) Stable states of discrete models are efficiently counted and identified using binary decision diagrams. As an application example, we show how system states can now be sampled efficiently in small to large-scale hormone and disease networks (Arabidopsis thaliana development and immunity, the pathogen Pseudomonas syringae, and modulation by cytokinins and plant hormones). Jimena simulates currently available GRNs about 10-100 times faster than the previous implementation of the polynomial interpolation model, and even greater gains are achieved for large scale-free networks. This speed-up also facilitates a much more thorough sampling of continuous state spaces, which may lead to the identification of new stable states. Mutants of large networks can be constructed and analyzed very quickly, enabling new insights into network robustness and behavior.
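The scaling problem mentioned in this abstract can be made concrete: the exact polynomial (multilinear) interpolation of a Boolean update function sums over all 2^n corners of the input hypercube, so nodes with many inputs become expensive. The following is a minimal illustrative sketch of that interpolation, not Jimena's actual implementation (whose interpolation scheme and Boolean-tree data structures differ):

```python
from itertools import product

def multilinear_extension(f, n):
    """Return the multilinear interpolation F: [0,1]^n -> [0,1] of a
    Boolean function f: {0,1}^n -> {0,1}.  F agrees with f on the
    corners of the hypercube and interpolates continuously in between."""
    corners = list(product((0, 1), repeat=n))
    def F(x):
        total = 0.0
        for b in corners:  # 2**n terms: the source of the scaling problem
            weight = 1.0
            for xi, bi in zip(x, b):
                weight *= xi if bi else (1.0 - xi)
            total += f(b) * weight
        return total
    return F

# Continuous version of a two-input AND node: F(x1, x2) = x1 * x2
AND = multilinear_extension(lambda b: b[0] & b[1], 2)
```

Because the corner sum grows as 2^n, a naive evaluation like this already stalls around the ~10-input limit the abstract mentions; tree-based factorizations avoid enumerating all corners in most practical cases.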
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, X.; Li, Y.; Chen, Z.
2017-12-01
Potential evapotranspiration (PET) is a sensitive factor for atmospheric and ecological systems over Southwest China, which is characterized by intensive karst geomorphology and a fragile environment. Based on daily meteorological data from 94 stations during 1961-2013, the spatiotemporal characteristics of PET are analyzed. The changing characteristics of local meteorological factors and large-scale climatic features are also investigated to explain the potential reasons for changing PET. The study results are as follows: (1) The high-value center of PET, with a mean value of 1097 mm/a, is located in the south, mainly resulting from the regional climatic features of higher air temperature (TEM), longer sunshine duration (SSD) and lower relative humidity (RHU); the low-value center of PET, with a mean value of 831 mm/a, is in the northeast, primarily attributed to higher RHU and weaker SSD. (2) Annual PET decreased at -10.04 mm decade-1 before the year 2000 but increased at 50.65 mm decade-1 thereafter; the dominant factors of PET change are SSD, RHU and wind speed (WIN), with relative contributions of 33.29%, 25.42% and 22.16%, respectively. (3) The abrupt change of PET in 2000 is strongly dominated by large-scale climatic anomalies. The strengthened 850 hPa geostrophic wind (0.51 m s-1 decade-1), weakened total cloud cover (-2.25% decade-1) and reduced 500 hPa water vapor flux (-2.85% decade-1) have provided advantageous dynamic, thermal and dry conditions for PET over Southwest China since the start of the 21st century.
Modeling evolution of dark matter substructure and annihilation boost
NASA Astrophysics Data System (ADS)
Hiroshima, Nagisa; Ando, Shin'ichiro; Ishiyama, Tomoaki
2018-06-01
We study the evolution of dark matter substructures, especially how they lose mass and change density profile after they fall into the gravitational potential of larger host halos. We develop an analytical prescription that models the subhalo mass evolution and calibrate it to results of N-body numerical simulations of various scales, from very small (Earth-size) to large (galaxy- to cluster-size) halos. We then combine the results with halo accretion histories and calculate a subhalo mass function that is physically motivated down to Earth-mass scales. Our results, valid for arbitrary host masses and redshifts, are in reasonable agreement with those of numerical simulations at resolved scales. Our analytical model also enables self-consistent calculations of the boost factor of dark matter annihilation, which we find to increase from tens of percent at the smallest (Earth) and intermediate (dwarf) masses to a factor of several at galaxy size, and to become as large as a factor of ~10 for the largest halos (clusters) at small redshifts. Our analytical approach can accommodate substructures in the subhalos (sub-subhalos) in a consistent framework, which we find to give up to a factor of a few enhancement to the annihilation boost. The presence of the subhalos enhances the intensity of the isotropic gamma-ray background by a factor of a few, and as a result, the measurement by the Fermi Large Area Telescope excludes annihilation cross sections greater than ~4 ×10-26 cm3 s-1 for dark matter masses up to ~200 GeV.
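Schematically, a boost factor of the kind computed here compares the total annihilation luminosity, including the (recursively boosted) subhalo population, to that of the smooth host alone. A generic form, written here only as an illustration (the paper's calibrated expressions carry more detail), is:

```latex
\[
1 + B(M) \;=\; \frac{1}{L_{\mathrm{smooth}}(M)}
\left[\, L_{\mathrm{smooth}}(M)
\;+\; \int \mathrm{d}m\,\frac{\mathrm{d}N}{\mathrm{d}m}\,
\bigl(1 + B_{\mathrm{sub}}(m)\bigr)\, L_{\mathrm{smooth}}(m) \right],
\]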
1964-10-29
Originally the Rendezvous Docking Simulator was used by the astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. "The LEM pilot's compartment, with overhead window and the docking ring (idealized since the pilot cannot see it during the maneuvers), is shown docked with the full-scale Apollo Command Module." A.W. Vogeley described the simulator as follows: "The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect." -- Published in A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966;
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vögele, Martin; Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Frankfurt a. M.; Holm, Christian
2015-12-28
We present simulations of aqueous polyelectrolyte complexes with new MARTINI models for the charged polymers poly(styrene sulfonate) and poly(diallyldimethylammonium). Our coarse-grained polyelectrolyte models allow us to study large length and long time scales with regard to chemical details and thermodynamic properties. The results are compared to the outcomes of previous atomistic molecular dynamics simulations and verify that electrostatic properties are reproduced by our MARTINI coarse-grained approach with reasonable accuracy. Structural similarity between the atomistic and the coarse-grained results is indicated by a comparison between the pair radial distribution functions and the cumulative number of surrounding particles. Our coarse-grained models are able to quantitatively reproduce previous findings like the correct charge compensation mechanism and a reduced dielectric constant of water. These results can be interpreted as the underlying reason for the stability of polyelectrolyte multilayers and complexes and validate the robustness of the proposed models.
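The structural comparison described here rests on pair radial distribution functions. As an illustration of that quantity (not the tooling used in the paper; production analyses of MD trajectories use optimized packages), a brute-force g(r) for particles in a cubic periodic box can be sketched as:

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins):
    """Brute-force pair radial distribution function g(r) for point
    particles in a cubic periodic box of side `box` (illustrative only;
    real MD analyses use cell lists or neighbor lists)."""
    n = len(positions)
    dr = r_max / n_bins
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)                  # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        r = r[r < r_max]
        np.add.at(counts, (r / dr).astype(int), 2.0)  # each pair seen from both ends
    rho = n / box ** 3                                # number density
    edges = np.arange(n_bins + 1) * dr
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return counts / (n * rho * shells)                # normalize by ideal-gas count

# Two particles 1.0 apart in a 10 x 10 x 10 box: the single pair
# distance falls in the second of two bins of width 1.0.
g = radial_distribution(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
                        box=10.0, r_max=2.0, n_bins=2)
```

Comparing such g(r) curves between atomistic and coarse-grained trajectories is one standard way to check that a coarse-grained model preserves local structure.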
Soviet military strategy towards 2010. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConnell, J.M.
1989-11-01
This paper tries to identify significant current trends that may continue into the 21st century and shape Soviet military strategy. An arms control trend, stemming from the Soviet concept of reasonable sufficiency, seems slated to handicap the USSR severely in options for fighting and winning large-scale conventional and theater-nuclear wars. Moscow evidently feels the strategic nuclear sphere will be the key arena of military competition in the future. First, the USSR now shows a greater commitment to offensive counterforce than was true of the period before reasonable sufficiency. Second, Moscow's interest in the strategic nuclear sphere will be reinforced by a long-term trend toward space warfare. However, it may be possible to soften the competition in this sphere through arms control. Prominent Soviets have already begun to suggest that, if the U.S. will limit its SDI ambitions to a thin defense, Moscow might actually prefer mutual comprehensive ABM deployments to continued adherence to the 1972 ABM Treaty.
Ontology-Based High-Level Context Inference for Human Behavior Identification
Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti
2016-01-01
Recent years have witnessed a huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activity, locations and emotions, unprecedented to date, to intelligently derive more meaningful high-level context information. The paper contributes with a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously-detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050
Straight Talk: The Challenge Before Modern Day Hinduism
Singh, Ajai R.
2009-01-01
Hinduism, as an institution, offers very little to the poor and underprivileged within its fold. This is one of the prime reasons for voluntary conversion of Hindus from among its members. B.R. Ambedkar and A.R. Rahman provide poignant examples of how lack of education and health facilities for the underprivileged within its fold, respectively, led to their conversion. This can be countered by a movement to provide large-scale quality health [hospitals/PHCs] and educational [schools/colleges] facilities run by Hindu mission organisations spread over the cities and districts of India. A four-point, four-phase programme is presented here to outline how this can be achieved. Those who have the genuine interests of Hinduism at heart will have to set such an agenda before them rather than strident and violent affirmations of its glories. One can understand the reasons for such stridency, but it is time it got converted into constructive affirmative action to keep the flock. PMID:21836787
Straight talk: the challenge before modern day hinduism.
Singh, Ajai R
2009-01-01
Hinduism, as an institution, offers very little to the poor and underprivileged within its fold. This is one of the prime reasons for voluntary conversion of Hindus from among its members. B.R. Ambedkar and A.R. Rahman provide poignant examples of how lack of education and health facilities for the underprivileged within its fold, respectively, led to their conversion. This can be countered by a movement to provide large-scale quality health [hospitals/PHCs] and educational [schools/colleges] facilities run by Hindu mission organisations spread over the cities and districts of India. A four-point, four-phase programme is presented here to outline how this can be achieved. Those who have the genuine interests of Hinduism at heart will have to set such an agenda before them rather than strident and violent affirmations of its glories. One can understand the reasons for such stridency, but it is time it got converted into constructive affirmative action to keep the flock.
Magnetic storm generation by large-scale complex structure Sheath/ICME
NASA Astrophysics Data System (ADS)
Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.
2017-12-01
We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 at ftp://ftp.iki.rssi.ru/pub/omni/, prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in the two intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC. (2) The Sheath has a 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062. References. 1. Nikolaeva N.S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu.
Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis, J. Geophys. Res. Space Physics, 120, doi:10.1002/2015JA021274 5. Yermolaev Y. I., I. G. Lodkina, N. S. Nikolaeva, M. Y. Yermolaev, M. O. Riazantseva (2017), Some Problems of Identification of Large-Scale Solar Wind types and Their Role in the Physics of the Magnetosphere, Cosmic Res., 55(3), pp. 178-189. DOI: 10.1134/S0010952517030029
Sucurovic, Snezana; Milutinovic, Veljko
2008-01-01
Internet-based distributed large-scale information systems implement attribute-based access control (ABAC) rather than role-based access control (RBAC). The reason is that the Internet is identity-less and that ABAC scales better. The eXtensible Access Control Markup Language (XACML) is the standardized language for writing access control policies, access control requests and access control responses in ABAC. XACML can provide decentralized administration and credentials distribution. In the 2002 version of CEN ENV 13606, attributes are attached to EHCR components, and in such a system ABAC and XACML are easy to implement. This paper addresses writing XACML policies in the case when attributes are in a hierarchical structure. Two possible solutions for writing an XACML policy in that case are presented; the solution using set functions is more compact and provides 10% better performance.
Double-Diffusive Convection at Low Prandtl Number
NASA Astrophysics Data System (ADS)
Garaud, Pascale
2018-01-01
This work reviews present knowledge of double-diffusive convection at low Prandtl number obtained using direct numerical simulations, in both the fingering regime and the oscillatory regime. Particular emphasis is given to modeling the induced turbulent mixing and its impact in various astrophysical applications. The nonlinear saturation of fingering convection at low Prandtl number usually drives small-scale turbulent motions whose transport properties can be predicted reasonably accurately using a simple semi-analytical model. In some instances, large-scale internal gravity waves can be excited by a collective instability and eventually cause layering. The nonlinear saturation of oscillatory double-diffusive convection exhibits much more complex behavior. Weakly stratified systems always spontaneously transition into layered convection associated with very efficient mixing. More strongly stratified systems remain dominated by weak wave turbulence unless they are initialized into a layered state. The effects of rotation, shear, lateral gradients, and magnetic fields are briefly discussed.
Nano-materials enabled thermoelectricity from window glasses.
Inayat, Salman B; Rader, Kelly R; Hussain, Muhammad M
2012-01-01
With the world population projected to nearly double by 2050, a wide variety of renewable and clean energy sources is needed to meet the increased energy demand. Solar energy is considered the leading promising alternative energy source, with the pertinent challenges of off-sunshine periods and the uneven worldwide distribution of usable sunlight. Although thermoelectricity is considered a reasonable source of renewable energy from wasted heat, its mass-scale usage is yet to be developed. Here we show large-scale integration of nano-manufactured pellets of thermoelectric nano-materials, embedded into window glasses, to generate thermoelectricity using the temperature difference between the hot outside and the cool inside. For the first time, this work offers an opportunity to potentially generate 304 watts of usable power from a 9 m2 window at a 20°C temperature gradient. If a natural temperature gradient exists, this can serve as a sustainable energy source for green building technology.
Electrical innovations, authority and consulting expertise in late Victorian Britain
Arapostathis, Stathis
2013-01-01
In this article I examine the practices of electrical engineering experts, with special reference to their role in the implementation of innovations in late Victorian electrical networks. I focus on the consulting work of two leading figures in the scientific and engineering world of the period, Alexander Kennedy and William Preece. Both were Fellows of the Royal Society and both developed large-scale consulting activities in the emerging electrical industry of light and power. At the core of the study I place the issues of trust and authority, and the bearing of these on the engineering expertise of consultants in late Victorian Britain. I argue that the ascription of expertise to these engineers and the trust placed in their advice were products of power relations on the local scale. The study seeks to unravel both the technical and the social reasons for authoritative patterns of consulting expertise. PMID:24686584
Piwoz, Ellen G; Huffman, Sandra L; Quinn, Victoria J
2003-03-01
Although many successes have been achieved in promoting breastfeeding, this has not been the case for complementary feeding. Some successes in promoting complementary feeding at the community level have been documented, but few of these efforts have expanded to a larger scale and become sustained. To discover the reasons for this difference, the key factors for the successful promotion of breastfeeding on a large scale were examined and compared with the efforts made in complementary feeding. These factors include definition and rationale, policy support, funding, advocacy, private-sector involvement, availability and use of monitoring data, integration of research into action, and the existence of a well-articulated series of steps for successful implementation. The lessons learned from the promotion of breastfeeding should be applied to complementary feeding, and the new Global Strategy for Infant and Young Child Feeding provides an excellent first step in this process.
Dark matter (energy) may be indistinguishable from modified gravity (MOND)
NASA Astrophysics Data System (ADS)
Sivaram, C.
For Newtonian dynamics to hold over galactic scales, large amounts of dark matter (DM) are required which would dominate cosmic structures. Accounting for the strong observational evidence that the universe is accelerating requires the presence of an unknown dark energy (DE) component constituting about 70% of the cosmic energy content. Several ingenious ongoing experiments to detect the DM particles have so far yielded negative results. Moreover, the comparable proportions of DM and DE at the present epoch appear unnatural and are not predicted by any theory. For these reasons, alternative ideas like MOND and modifications of gravity or general relativity over cosmic scales have been proposed. It is shown in this paper that these alternative ideas may not be easily distinguishable from the usual DM or DE hypotheses. Specific examples are given to illustrate the point that the modified theories are special cases of a generalized DM paradigm.
This meeting: A biased observer's view
NASA Astrophysics Data System (ADS)
Heiles, Carl
1992-06-01
Letting yourself be nominated for a conference summary talk is considered by some to be a big mistake. It eliminates the possibility of making up the sleep lost at night, while partying, during the day, while sitting in the talks. It even forces you to look at all the poster papers. But at a meeting like this, with the wealth of observational data, it is definitely not a mistake: it was even worth missing some of the parties! My problem was to devise a way to be sufficiently selective so as to provide a reasonably coherent summary. I chose to emphasize the multitude of large-scale maps presented at the meeting. Many are relevant to the ``worm paradigm'' (Sec. 2), and the recent γ-ray and ROSAT results are relevant to the Hot Ionized Medium (Sec. 3). And finally, I was impressed by a number of well-crafted smaller-scale observations, which elucidate particular aspects of the interstellar medium (Sec. 4).
Scientific Services on the Cloud
NASA Astrophysics Data System (ADS)
Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong
Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.
Ion-exchange chromatography separation applied to mineral recycle in closed systems
NASA Technical Reports Server (NTRS)
Ballou, E.; Spitze, L. A.; Wong, F. W.; Wydeven, T.; Johnson, C. C.
1981-01-01
As part of the controlled ecological life support system (CELSS) program, a study is being made of mineral separation on ion-exchange columns. The purpose of the mineral separation step is to allow minerals to be recycled from the oxidized waste products of plants, man, and animals for hydroponic food production. In the CELSS application, relatively large quantities of minerals in a broad concentration range must be recovered by the desired system, rather than the trace quantities and very low concentrations treated in analytical applications of ion-exchange chromatography. Experiments have been carried out to assess the parameters pertinent to the scale-up of ion-exchange chromatography and to determine feasibility. Preliminary conclusions are that the column scale-up is in a reasonable size range for the CELSS application. The recycling of a suitable eluent, however, remains a major challenge to the suitability of using ion exchange chromatography in closed systems.
Hybrid LES/RANS technique based on a one-equation near-wall model
NASA Astrophysics Data System (ADS)
Breuer, M.; Jaffrézic, B.; Arora, K.
2008-05-01
In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES/RANS approach which splits the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers, requiring reasonable CPU time and memory, whereas LES can also be applied there but demands extremely large resources. In contrast, RANS often fails in flows with massive separation or large-scale vortical structures; here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches, yielding a prediction method which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for the high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (in the sense in which the authors use the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose the proposed hybrid approach relies on a unique modeling concept. In the LES mode, a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode, the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES/RANS methods is not new, a variety of open questions still have to be answered.
This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid-scale and RANS models. Here, in addition to the study on the behavior of the suggested hybrid LES/RANS approach, special emphasis is put on the investigation of suitable interface criteria and the adjustment of the RANS model. To investigate these issues, two different test cases are considered. Besides the standard plane channel flow test case, the flow over a periodic arrangement of hills is studied in detail. This test case includes pressure-induced flow separation and subsequent reattachment. In comparison with a wall-resolved LES prediction, encouraging results are achieved.
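One way to picture a non-zonal interface criterion of the kind investigated here is as a pointwise comparison of length scales: the RANS branch is used wherever its modeled near-wall length scale is smaller than the LES filter width, so the mode switches automatically with wall distance and grid resolution. The sketch below is our own schematic placeholder; the constant and the algebraic length scale are illustrative, not the Rodi et al. model.

```python
import numpy as np

C_LEN = 2.4   # slope of an assumed algebraic near-wall length scale l ~ C*y

def mode(y, delta):
    """Pick RANS where the modeled length scale is below the filter width."""
    l_rans = C_LEN * y   # algebraic RANS length scale (placeholder form)
    l_les = delta        # LES length scale = filter width = grid scale
    return "RANS" if l_rans < l_les else "LES"

y = np.array([1e-4, 1e-3, 1e-2, 1e-1])   # wall distances (m)
delta = 5e-3                              # filter width (m)
print([mode(yi, delta) for yi in y])      # RANS near the wall, LES outside
```

Refining the grid (smaller `delta`) moves the switch-over location toward the wall, which is the automatic behavior a non-zonal method aims for.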
NASA Astrophysics Data System (ADS)
Stork, David G.; Furuichi, Yasuo
2011-03-01
David Hockney has argued that the right hand of the disciple, thrust to the rear in Caravaggio's Supper at Emmaus (1606), is anomalously large as a result of the artist refocusing a putative secret lens-based optical projector and tracing the image it projected onto his canvas. We show through rigorous optical analysis that to achieve such an anomalously large hand image, Caravaggio would have needed to make extremely large, conspicuous and implausible alterations to his studio setup, moving both his purported lens and his canvas nearly two meters between "exposing" the disciple's left hand and then his right hand. Such major disruptions to his studio would have impeded, not aided, Caravaggio in his work. Our optical analysis quantifies these problems, and our computer-graphics reconstruction of Caravaggio's studio illustrates them. In this way we conclude that Caravaggio did not use optical projections in the way claimed by Hockney, but instead most likely set the sizes of these hands "by eye" for artistic reasons.
Children's use of geometry for reorientation.
Lee, Sang Ah; Spelke, Elizabeth S
2008-09-01
Research on navigation has shown that humans and laboratory animals recover their sense of orientation primarily by detecting geometric properties of large-scale surface layouts (e.g. room shape), but the reasons for the primacy of layout geometry have not been clarified. In four experiments, we tested whether 4-year-old children reorient by the geometry of extended wall-like surfaces because such surfaces are large and perceived as stable, because they serve as barriers to vision or to locomotion, or because they form a single, connected geometric figure. Disoriented children successfully reoriented by the shape of an arena formed by surfaces that were short enough to see and step over. In contrast, children failed to reorient by the shape of an arena defined by large and stable columns or by connected lines on the floor. We conclude that preschool children's reorientation is not guided by the functional relevance of the immediate environmental properties, but rather by a specific sensitivity to the geometric properties of the extended three-dimensional surface layout.
NASA Astrophysics Data System (ADS)
Wegner, M.; Karcher, N.; Krömer, O.; Richter, D.; Ahrens, F.; Sander, O.; Kempf, S.; Weber, M.; Enss, C.
2018-02-01
To our present best knowledge, microwave SQUID multiplexing (μMUXing) is the most suitable technique for reading out large-scale low-temperature microcalorimeter arrays that consist of hundreds or thousands of individual pixels, each requiring a large readout bandwidth. For this reason, the present readout strategy for metallic magnetic calorimeter (MMC) arrays, which combine an intrinsically fast signal rise time, an excellent energy resolution, a large dynamic range, a quantum efficiency close to 100%, and a highly linear detector response, is based on μMUXing. Within this paper, we summarize the state of the art in MMC μMUXing and discuss the most recent results. This particularly includes a discussion of the performance of a 64-pixel detector array with an integrated, on-chip microwave SQUID multiplexer, the progress in flux ramp modulation of MMCs, and the status of the development of software-defined-radio-based room-temperature electronics specifically optimized for MMC readout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, Na Hyun; Wu, Yun; Wang, Lin-Lin
The recently discovered material PtSn4 is known to exhibit extremely large magnetoresistance (XMR) and also manifests Dirac arc nodes on the surface. PdSn4 is isostructural to PtSn4 with the same electron count. Here, we report on the physical properties of high-quality single crystals of PdSn4, including specific heat, temperature- and magnetic-field-dependent resistivity and magnetization, and electronic band-structure properties obtained from angle-resolved photoemission spectroscopy (ARPES). We observe that PdSn4 has physical properties that are qualitatively similar to those of PtSn4, but we also find pronounced differences. Importantly, the Dirac arc node surface state of PtSn4 is gapped out for PdSn4. By comparing these similar compounds, we address the origin of the extremely large magnetoresistance in PdSn4 and PtSn4; based on a detailed analysis of the magnetoresistivity ρ(H, T), we conclude that neither carrier compensation nor the Dirac arc node surface state is the primary reason for the extremely large magnetoresistance. On the other hand, we find that, surprisingly, Kohler's rule scaling of the magnetoresistance, which describes a self-similarity of the field-induced orbital electronic motion across different length scales and is derived for a simple electronic response of metals to an applied magnetic field, is obeyed over the full range of temperatures and field strengths that we explore.
Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.
Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine
2015-03-15
Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step for subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open-source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability: the software is capable of normalizing Hi-C data of any size in reasonable time; (ii) memory efficiency: the sequential version can run on any single computer with very limited memory, no matter how little; (iii) speed: the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
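The matrix-balancing idea behind such normalizers can be sketched in a few lines. The snippet below is our illustration of the general iterative-correction scheme, not Hi-Corrector's API: `ice_balance` and its parameters are hypothetical names. It repeatedly divides a symmetric contact matrix by its row/column sums so that every bin ends up with equal total visibility, recovering per-bin multiplicative biases.

```python
import numpy as np

def ice_balance(contacts, n_iter=50):
    """Iteratively rescale a symmetric contact matrix so every row (and
    column) sums to the same value -- the core idea behind matrix-balancing
    Hi-C normalizers. Returns the balanced matrix and the estimated biases."""
    m = contacts.astype(float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        s = m.sum(axis=1)
        s /= s.mean()            # keep the bias factors O(1)
        s[s == 0] = 1.0          # leave empty bins untouched
        m /= np.outer(s, s)
        bias *= s
    return m, bias

rng = np.random.default_rng(0)
true = np.ones((6, 6))                 # flat "true" contact map
b = rng.uniform(0.5, 2.0, 6)           # per-bin multiplicative biases
observed = true * np.outer(b, b)       # biased observation
balanced, est = ice_balance(observed)
print(np.allclose(balanced.sum(axis=1), balanced.sum(axis=1)[0]))  # True
```

The memory challenge the paper addresses comes from `m` itself: at high resolution the dense matrix no longer fits in RAM, which is why Hi-Corrector partitions the computation across processes.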
NASA Astrophysics Data System (ADS)
Johnson, Marcus; Jung, Youngsun; Dawson, Daniel; Supinie, Timothy; Xue, Ming; Park, Jongsook; Lee, Yong-Hee
2018-07-01
The UK Met Office Unified Model (UM) is employed by many weather forecasting agencies around the globe. This model is designed to run across spatial and temporal scales and is known to produce skillful predictions for large-scale weather systems. However, the model has only recently begun running operationally at horizontal grid spacings of ~1.5 km [e.g., at the UK Met Office and the Korea Meteorological Administration (KMA)]. As its microphysics scheme was originally designed and tuned for large-scale precipitation systems, we investigate the performance of UM microphysics to determine potential inherent biases or weaknesses. Two rainfall cases from the KMA forecasting system are considered in this study: a Changma (quasi-stationary) front, and Typhoon Sanba (2012). The UM output is compared to polarimetric radar observations in terms of simulated polarimetric radar variables. Results show that the UM generally underpredicts median reflectivity in stratiform rain, producing high-reflectivity cores and precipitation gaps between them. This is partially due to the diagnostic rain intercept parameter formulation used in the one-moment microphysics scheme. Modeled drop size is generally both under- and overpredicted compared to observations. UM frozen hydrometeors favor generic ice (crystals and snow) rather than graupel, which is reasonable for the Changma and typhoon cases. The model performed best with the typhoon case in terms of simulated precipitation coverage.
Kelley, Mary E.; Anderson, Stewart J.
2008-01-01
Summary: The aim of this paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as "none", "never" or "normal", and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved, and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys, where more careful documentation of the reasons for response may be difficult. PMID:18351711
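The mixture structure described above can be written down directly: with probability 1 − p the subject is a structural zero (not susceptible, so always at the "none" anchor), and with probability p the response follows a proportional-odds ordinal model. The sketch below uses our own notation; the function and argument names are ours, not the authors'.

```python
import numpy as np

def expit(z):
    """Logistic CDF, the link used in proportional-odds models."""
    return 1.0 / (1.0 + np.exp(-z))

def zi_ordinal_probs(x_beta, gamma, cuts):
    """Category probabilities for a zero-inflated proportional-odds model
    (a sketch of the mixture described in the abstract).
    x_beta: linear predictor for severity; gamma: linear predictor for
    susceptibility; cuts: K ordered cutpoints giving K+1 categories."""
    p_susc = expit(gamma)                            # P(subject can respond)
    cum = expit(np.append(cuts, np.inf) - x_beta)    # P(Y <= k | susceptible)
    cat = np.diff(np.concatenate(([0.0], cum)))      # P(Y = k | susceptible)
    cat = p_susc * cat
    cat[0] += 1.0 - p_susc                           # structural zeros inflate "none"
    return cat

probs = zi_ordinal_probs(x_beta=0.7, gamma=-0.4, cuts=np.array([-1.0, 0.5, 1.5]))
print(probs.sum())  # a proper distribution over the 4 categories
```

Fitting would maximize the log-likelihood built from these probabilities; the likelihood-ratio test mentioned above compares this fit against the ordinary proportional-odds model (p fixed at 1).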
Soil-geographical regionalization as a basis for digital soil mapping: Karelia case study
NASA Astrophysics Data System (ADS)
Krasilnikov, P.; Sidorova, V.; Dubrovina, I.
2010-12-01
Recent development of digital soil mapping (DSM) has allowed significant improvement in the quality of soil maps. We tried to make a set of empirical models for the territory of Karelia, a republic in the northeast of the European territory of the Russian Federation. This territory was selected for the pilot DSM study for two reasons. First, the soils of the region are mainly monogenetic; thus, the effect of the paleogeographic environment on recent soils is reduced. Second, the territory was poorly mapped because of low agricultural development: only 1.8% of the total area of the republic is used for agriculture and has large-scale soil maps. The rest of the territory has only small-scale soil maps, compiled based on general geographic concepts rather than on field surveys. Thus, the only solution for soil inventory was predictive digital mapping. The absence of large-scale soil maps did not allow data mining from previous soil surveys, and only empirical models could be applied. For regionalization purposes, we accepted the division into Northern and Southern Karelia proposed in the general scheme of soil regionalization of Russia; boundaries between the regions were somewhat modified. Within each region, we specified from 15 (Northern Karelia) to 32 (Southern Karelia) individual soilscapes and proposed soil-topographic and soil-lithological relationships for every soilscape. Further field verification is needed to adjust the models.
NASA Astrophysics Data System (ADS)
Kim, Seul-Gi; Hu, Qicheng; Nam, Ki-Bong; Kim, Mun Ja; Yoo, Ji-Beom
2018-04-01
Large-scale graphitic thin films with high thickness uniformity need to be developed for industrial applications. Graphitic films with thicknesses ranging from 3 to 20 nm have rarely been reported, and achieving thickness uniformity in that range is a challenging task. In this study, a process for growing 20 nm-thick graphite films on Ni with improved thickness uniformity is demonstrated and compared with the conventional growth process. In films grown by the new process, the surface roughness and coverage were improved and no wrinkles were observed. Observations of the film structure reveal the reasons for these improvements and the growth mechanisms.
NASA Technical Reports Server (NTRS)
Rohatgi, Naresh K.; Ingham, John D.
1992-01-01
An assessment approach for accurate evaluation of bioprocesses for large-scale production of industrial chemicals is presented. Detailed energy-economic assessments of a potential esterification process were performed, in which ethanol vapor in the presence of water from a bioreactor is catalytically converted to ethyl acetate. Results show that such processes are likely to become more competitive as the cost of substrates decreases relative to petroleum costs. A commercial ASPEN process simulation provided a reasonably consistent comparison with energy economics calculated using JPL-developed software. Detailed evaluations of the sensitivity of production cost to material costs and annual production rates are discussed.
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Tran, Daniel Q.; Rabideau, Gregg R.; Schaffer, Steven R.
2011-01-01
Software has been designed to schedule remote sensing with the Earth Observing One spacecraft. The software attempts to satisfy as many observation requests as possible, checking each against spacecraft operation constraints such as data volume, thermal limits, pointing maneuvers, and others. More complex constraints such as temperature are approximated to enable efficient reasoning while keeping the spacecraft within safe limits. Other constraints are checked using an external software library. For example, an attitude control library is used to determine the feasibility of maneuvering between pairs of observations. This innovation can deal with a wide range of spacecraft constraints and solve large-scale scheduling problems involving hundreds of observations and thousands of candidate observation sequences.
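As a toy illustration of constraint-checked scheduling of this kind, the sketch below greedily accepts observation requests in priority order, rejecting any that would exceed an onboard data-volume cap or leave insufficient slew time between observations. All names and numbers are hypothetical placeholders; the actual EO-1 scheduler reasons over far richer constraints (thermal, pointing, and others) via external libraries.

```python
from dataclasses import dataclass

@dataclass
class Obs:
    start: float      # minutes from epoch
    duration: float   # minutes
    data_gb: float    # downlink volume consumed
    priority: int

def greedy_schedule(requests, volume_cap_gb, slew_min):
    """Accept requests by descending priority; keep one only if it fits
    the data-volume cap and leaves slew_min between it and every
    already-chosen observation."""
    chosen, used = [], 0.0
    for r in sorted(requests, key=lambda r: -r.priority):
        if used + r.data_gb > volume_cap_gb:
            continue
        clash = any(not (r.start + r.duration + slew_min <= c.start
                         or c.start + c.duration + slew_min <= r.start)
                    for c in chosen)
        if not clash:
            chosen.append(r)
            used += r.data_gb
    return sorted(chosen, key=lambda r: r.start)

reqs = [Obs(0, 5, 2.0, 3), Obs(4, 5, 2.0, 5), Obs(30, 5, 2.0, 1), Obs(60, 5, 3.5, 2)]
plan = greedy_schedule(reqs, volume_cap_gb=6.0, slew_min=3)
print([(o.start, o.priority) for o in plan])  # [(4, 5), (60, 2)]
```

The priority-5 request displaces the overlapping priority-3 one, and the data cap then squeezes out the priority-1 request, mirroring how hard resource constraints prune the request set.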
Lawson, Katrina J; Rodwell, John J; Noblet, Andrew J
2012-06-01
The risk of work-related depression in Australia was estimated based on a survey of 631 police officers. Psychological wellbeing and psychological distress items were mapped onto a measure of depression to identify optimal cutoff points. Based on a sample of police officers, Australian workers, in general, are at risk of depression when general psychological wellbeing is considerably compromised. Large-scale estimation of work-related depression in the broader population of employed persons in Australia is reasonable. The relatively high prevalence of depression among police officers emphasizes the need to examine prevalence rates of depression among Australian employees.
Accelerating cross-validation with total variation and its application to super-resolution imaging
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki
2017-12-01
We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model dimensionality. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super-resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
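To see what such a formula buys, compare literal leave-one-out cross-validation, which refits the model once per held-out point, with a closed-form shortcut computed from a single fit. The sketch below deliberately uses ridge regression rather than the paper's ℓ_1 plus total-variation penalty, because for ridge the leverage correction e_i/(1 − H_ii) is exact and easy to verify; it only illustrates the kind of cost reduction the authors' perturbative expansion provides in the sparse setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 40, 5, 0.5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

def ridge(Xt, yt):
    """Ridge fit with fixed penalty lam (no intercept)."""
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)

# Literal leave-one-out CV: n separate fits.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    b = ridge(X[mask], y[mask])
    errs.append((y[i] - X[i] @ b) ** 2)
cve_literal = np.mean(errs)

# Shortcut: one full fit plus the leverage correction e_i / (1 - H_ii).
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
resid = y - H @ y
cve_fast = np.mean((resid / (1.0 - np.diag(H))) ** 2)
print(np.isclose(cve_literal, cve_fast))  # True
```

For non-quadratic penalties no exact shortcut exists, which is where a perturbative approximation of the held-out error becomes valuable.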
Mission to Planet Earth. The living ocean: Observing ocean color from space
NASA Technical Reports Server (NTRS)
1994-01-01
Measurements of ocean color are part of NASA's Mission to Planet Earth, which will assess how the global environment is changing. Using the unique perspective available from space, NASA will observe, monitor, and study large-scale environmental processes, focusing on quantifying climate change. NASA will distribute the results of these studies to researchers worldwide to furnish a basis for informed decisions on environmental protection and economic policy. This information packet includes discussion of the reasons for measuring ocean color, the carbon cycle and ocean color, priorities for global climate research, and SeaWiFS (Sea-viewing Wide Field-of-view Sensor) global ocean color measurements.
Fundamental research in artificial intelligence at NASA
NASA Technical Reports Server (NTRS)
Friedland, Peter
1990-01-01
This paper describes basic research at NASA in the field of artificial intelligence. The work is conducted at the Ames Research Center and the Jet Propulsion Laboratory, primarily under the auspices of the NASA-wide Artificial Intelligence Program in the Office of Aeronautics, Exploration and Technology. The research is aimed at solving long-term NASA problems in mission operations, spacecraft autonomy, preservation of corporate knowledge about NASA missions and vehicles, and management/analysis of scientific and engineering data. From a scientific point of view, the research falls into the categories of: planning and scheduling; machine learning; and design of and reasoning about large-scale physical systems.
ERIC Educational Resources Information Center
Guo, Hongwen; Liu, Jinghua; Curley, Edward; Dorans, Neil
2012-01-01
This study examines the stability of the "SAT Reasoning Test"™ score scales from 2005 to 2010. A 2005 old form (OF) was administered along with a 2010 new form (NF). A new conversion for OF was derived through direct equipercentile equating. A comparison of the newly derived and the original OF conversions showed that Critical Reading…
Specificity of meta-emotion effects on moral decision-making.
Koven, Nancy S
2011-10-01
A recently proposed dual-process theory of moral decision-making posits that utilitarian reasoning (approving of harmful actions that maximize good consequences) is the result of cognitive control of emotion. This suggests that deficits in emotional awareness will contribute to increased utilitarianism. The present study explored the relative contributions of the different facets of alexithymia and the closely related constructs of emotional intelligence and mood awareness to utilitarian decision-making. Participants (N = 86) completed the Toronto Alexithymia Scale, the Trait Meta-Mood Scale, the Mood Awareness Scale, and a series of high-conflict, personal moral dilemmas validated by Greene et al. (2008). A brief neuropsychological battery was also administered to assess the possible confounds of verbal reasoning and abstract thinking ability. Principal components analysis revealed two latent factors, clarity of emotion and attention to emotion, which cut across all three meta-emotion instruments. Of these, low clarity of emotion, reflecting difficulty in reasoning thoughtfully about one's emotions, predicted utilitarian outcomes and provided unique variance beyond that of verbal and abstract reasoning abilities. Results are discussed in the context of individual differences in emotion regulation.
Simulation of fatigue crack growth under large scale yielding conditions
NASA Astrophysics Data System (ADS)
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large-scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
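The model's core assumption, da/dN proportional to ΔCTOD, can be integrated cycle by cycle once ΔCTOD is estimated. The sketch below is purely illustrative: the material constants, the edge-crack stress-intensity estimate, and the small-scale-yielding relation ΔCTOD ≈ d_n ΔK²/(σ_y E) are textbook-style placeholders, not the paper's 10%-chromium-steel data or its generalized J-integral interpolation formula.

```python
import numpy as np

# Cycle-by-cycle integration of da/dN = beta * dCTOD, with dCTOD estimated
# from a small-scale-yielding relation. All numbers are illustrative.
E, sy = 200e9, 500e6      # Pa: Young's modulus, cyclic yield stress
beta, dn = 0.05, 0.5      # growth fraction of dCTOD; CTOD proportionality
dsig = 300e6              # Pa, applied stress range
a = 50e-6                 # m, initial microcrack length

history = [a]
for _ in range(20000):                       # 20,000 load cycles
    dK = dsig * np.sqrt(np.pi * a)           # crude edge-crack stress intensity
    dctod = dn * dK**2 / (sy * E)            # cyclic crack-tip opening estimate
    a += beta * dctod                        # growth increment per cycle
    history.append(a)
print(history[0], history[-1])               # monotonic microcrack growth
```

Because ΔCTOD grows with the crack length through ΔK², the integration produces the accelerating (roughly exponential) growth typical of short-crack data.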
Mass killings and detection of impacts
NASA Technical Reports Server (NTRS)
Mclaren, Digby J.
1988-01-01
Highly energetic bolide impacts occur and their flux is known. For larger bodies the energy release is greater than for any other short-term global phenomenon. Such impacts produce or release a large variety of shock-induced changes, including major atmospheric, sedimentologic, seismic and volcanic events. These events must necessarily leave a variety of records in the stratigraphic column, including mass killings resulting in major changes in population density and the reduction or extinction of many taxonomic groups, followed by characteristic patterns of faunal and floral replacement. Of these effects, mass killings, marked by large-scale loss of biomass, are the most easily detected evidence in the field, but must be manifest on a near-global scale. Mass killings that appear to be approximately synchronous and involve disappearance of biomass at a bedding plane in many sedimentologically independent sections globally suggest a common cause and probable synchroneity. A mass killing identifies a horizon which may be examined for evidence of cause. Geochemical markers may be ephemeral, and their absence may not be significant. There appears to be no reason to regard ongoing phenomena such as climate and sea-level changes as primary causes of anomalous episodic events.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets and the increased time required for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computing clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
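The speed-versus-economy choice mentioned above can be made concrete with an Amdahl-style cost model: adding instances shortens the wall-clock time of the parallel portion of model building, but billing accrues on every instance for the whole run, so cost rises as runtime falls. All numbers below are hypothetical, not figures from the study.

```python
# Toy cloud cost/speed tradeoff. A job with a serial fraction f run on n
# instances takes total_h * (f + (1-f)/n) hours; cost is per instance-hour.
def runtime_h(n, total_h=100.0, serial_frac=0.05):
    return total_h * (serial_frac + (1.0 - serial_frac) / n)

def cost_usd(n, price_per_instance_h=0.40):
    return n * runtime_h(n) * price_per_instance_h

for n in (1, 8, 64):
    print(n, round(runtime_h(n), 2), round(cost_usd(n), 2))
```

Runtime falls from 100 h toward the serial floor while cost climbs, which is exactly the quantified speed/economy dial that per-hour cloud billing exposes.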
Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code
NASA Astrophysics Data System (ADS)
Sabotinov, Luben; Chevrier, Patrick
The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), four separate loops, the pressurizer, the horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations were performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters, such as pressures, temperatures, void fractions, and loop seal clearance. The experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably well predicted.
Water circulation and global mantle dynamics: Insight from numerical modeling
NASA Astrophysics Data System (ADS)
Nakagawa, Takashi; Nakakuki, Tomoeki; Iwamori, Hikaru
2015-05-01
We investigate water circulation and its dynamical effects on global-scale mantle dynamics in numerical thermochemical mantle convection simulations. Both dehydration-hydration processes and dehydration melting are included. We also account for the rheological properties of hydrous minerals and the density reduction they cause. Heat transfer due to mantle convection seems to be enhanced more effectively than water cycling in the mantle convection system when a reasonable water dependence of viscosity is assumed, owing to effective slab dehydration at shallow depths. Water still significantly affects the global dynamics by weakening the near-surface oceanic crust and lithosphere, enhancing the activity of surface plate motion compared to the dry mantle case. As a result, when hydrous minerals are included, the mantle is expected to be several orders of magnitude more viscous than the dry mantle. The average water content in the whole mantle is regulated by the dehydration-hydration process. Large-scale thermochemical anomalies, as observed in the deep mantle, are found when a large density contrast between basaltic material and the ambient mantle is assumed (4-5%), comparable to mineral physics measurements. This study indicates that the effects of hydrous minerals on mantle dynamics are very important for interpreting the observational constraints on mantle convection.
A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.
Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf
2014-10-27
We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
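The BAR estimator at the core of the study is compact enough to sketch. The following is a minimal self-consistent implementation, checked against synthetic Gaussian work distributions rather than real simulation output; the sample size, width sigma, and the true free-energy difference are invented for the test:

```python
import math
import random

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-9):
    """Solve the Bennett acceptance-ratio equation by bisection.

    w_forward: work values for the 0 -> 1 perturbation, in units of kT.
    w_reverse: work values for the 1 -> 0 perturbation, in units of kT.
    Assumes equal sample sizes; returns the free-energy difference in kT.
    """
    def g(df):
        # g is monotonically increasing in df, so bisection is safe;
        # the min(..., 500) caps guard math.exp against overflow.
        a = sum(1.0 / (1.0 + math.exp(min(w - df, 500.0))) for w in w_forward)
        b = sum(1.0 / (1.0 + math.exp(min(w + df, 500.0))) for w in w_reverse)
        return a - b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic check: Gaussian work distributions that satisfy the Crooks
# relation have mean dF + sigma^2/2 (forward) and -dF + sigma^2/2 (reverse).
random.seed(0)
true_df, sigma, n = 3.0, 1.5, 5000
wf = [random.gauss(true_df + sigma**2 / 2.0, sigma) for _ in range(n)]
wr = [random.gauss(-true_df + sigma**2 / 2.0, sigma) for _ in range(n)]
estimate = bar_delta_f(wf, wr)
print(round(estimate, 1))  # close to 3.0
```

Real applications would draw the work values from sampled energy differences at the intermediate states, but the self-consistent equation being solved is the same.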
Mass killings and detection of impacts
NASA Astrophysics Data System (ADS)
McLaren, Digby J.
Highly energetic bolide impacts occur and their flux is known. For larger bodies the energy release is greater than for any other short-term global phenomenon. Such impacts produce or release a large variety of shock-induced changes, including major atmospheric, sedimentologic, seismic and volcanic events. These events must necessarily leave a variety of records in the stratigraphic column, including mass killings resulting in major changes in population density and reduction or extinction of many taxonomic groups, followed by characteristic patterns of faunal and floral replacement. Of these effects, mass killings, marked by large-scale loss of biomass, are the most easily detected evidence in the field, but must be manifest on a near-global scale. Such mass killings that appear to be approximately synchronous and involve disappearance of biomass at a bedding plane in many sedimentologically independent sections globally suggest a common cause and probable synchroneity. Mass killings identify a horizon which may be examined for evidence of cause. Geochemical markers may be ephemeral, and their absence may not be significant. There appears to be no reason to regard ongoing phenomena such as climate and sea-level changes as primary causes of anomalous episodic events.
Hamilton, Jada G.; Breen, Nancy; Klabunde, Carrie N.; Moser, Richard P.; Leyva, Bryan; Breslau, Erica S.; Kobrin, Sarah C.
2014-01-01
Large-scale surveys that assess cancer prevention and control behaviors are a readily-available, rich resource for public health researchers. Although these data are used by a subset of researchers who are familiar with them, their potential is not fully realized by the research community for reasons including lack of awareness of the data, and limited understanding of their content, methodology, and utility. Until now, no comprehensive resource existed to describe and facilitate use of these data. To address this gap and maximize use of these data, we catalogued the characteristics and content of four surveys that assessed cancer screening behaviors in 2005, the most recent year with concurrent periods of data collection: the National Health Interview Survey, Health Information National Trends Survey, Behavioral Risk Factor Surveillance System, and California Health Interview Survey. We documented each survey's characteristics, measures of cancer screening, and relevant correlates; examined how published studies (n=78) have used the surveys’ cancer screening data; and reviewed new cancer screening constructs measured in recent years. This information can guide researchers in deciding how to capitalize on the opportunities presented by these data resources. PMID:25300474
A Model of the Turbulent Electric Dynamo in Multi-Phase Media
NASA Astrophysics Data System (ADS)
Dementyeva, Svetlana; Mareev, Evgeny
2016-04-01
Many terrestrial and astrophysical phenomena involve the conversion of kinetic energy into electric energy (the energy of the quasi-stationary electric field) in conducting media, which it is natural to treat as manifestations of electric dynamo, by analogy with the well-known theory of the magnetic dynamo. Such phenomena include thunderstorms and lightning in the Earth's atmosphere and the atmospheres of other planets, electric activity caused by dust storms in terrestrial and Martian atmospheres, snow storms, and electrical discharges occurring in technological setups involving intense mixing of aerosol particles, as in the milling industry. We have developed a model of the large-scale turbulent electric dynamo in a weakly conducting medium containing two heavy-particle components. We distinguish two main classes of charging mechanisms (inductive and non-inductive), according to whether the electric charge transferred during a particle collision depends on the electric field intensity, and consider simplified models which demonstrate the possibility of dynamo realization and its specific peculiarities for these mechanisms. Dynamo (large-scale electric field growth) appears due to charge separation between colliding and rebounding particles. This process may be greatly intensified by the turbulent mixing of particles with different masses and, consequently, different inertia. The particle charge fluctuations themselves (small-scale dynamo), however, do not automatically imply growth of the large-scale electric field without a large-scale asymmetry. Such an asymmetry arises due to the dependence of the transferred charge magnitude on the electric field intensity in the case of the inductive mechanism of charge separation, or due to gravity and convection for non-inductive mechanisms.
We have found that in the case of the inductive mechanism the large-scale dynamo occurs if the medium conductivity is small enough while the electrification process, determined by the turbulence intensity and particle sizes, is strong enough. The electric field strength then grows exponentially. For the non-inductive mechanism we have found the conditions under which the electric field strength grows, but only linearly in time. Our results show that the turbulent electric dynamo could play a substantial role in electrification processes for different mechanisms of charge generation and separation. Thunderstorms and lightning are the most frequent and spectacular manifestations of electric dynamo in the atmosphere, but the turbulent electric dynamo may also be the reason for electric discharges occurring in dust and snow storms, or even in technological setups with intense mixing of small particles.
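The two growth regimes can be illustrated with a zero-dimensional toy balance between charging and conductive relaxation; the equation and parameters below are a schematic sketch for illustration, not the authors' model:

```python
import math

def evolve(e0, gamma, sigma_over_eps, source, t_end, dt=1e-3):
    """Forward-Euler integration of dE/dt = gamma*E + S - (sigma/eps0)*E."""
    e, t = e0, 0.0
    while t < t_end:
        e += dt * (gamma * e + source - sigma_over_eps * e)
        t += dt
    return e

# Inductive-like regime: the charging term is proportional to E itself, so
# the field grows exponentially whenever gamma exceeds the conductive decay.
e_inductive = evolve(e0=1.0, gamma=2.0, sigma_over_eps=0.5, source=0.0, t_end=3.0)

# Non-inductive-like regime: a field-independent source with weak
# conductivity gives near-linear growth, E(t) ~ S*t, before saturation.
e_noninductive = evolve(e0=0.0, gamma=0.0, sigma_over_eps=0.01, source=1.0, t_end=3.0)

print(e_inductive > math.exp(4.5) * 0.9)   # exponential growth
print(abs(e_noninductive - 3.0) < 0.1)     # approximately linear growth
```

The contrast between the self-amplifying and the constant-source terms is what separates exponential from linear field growth in this caricature.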
ERIC Educational Resources Information Center
van der Graaf, Joep; Segers, Eliane; Verhoeven, Ludo
2015-01-01
A dynamic assessment tool was developed and validated using Mokken scale analysis to assess the extent to which kindergartners are able to construct unconfounded experiments, an essential part of scientific reasoning. Scientific reasoning is one of the learning processes happening within science education. A commonly used, hands-on,…
A manganese-hydrogen battery with potential for grid-scale energy storage
NASA Astrophysics Data System (ADS)
Chen, Wei; Li, Guodong; Pei, Allen; Li, Yuzhang; Liao, Lei; Wang, Hongxia; Wan, Jiayu; Liang, Zheng; Chen, Guangxu; Zhang, Hao; Wang, Jiangyan; Cui, Yi
2018-05-01
Batteries including lithium-ion, lead-acid, redox-flow and liquid-metal batteries show promise for grid-scale storage, but they are still far from meeting the grid's storage needs such as low cost, long cycle life, reliable safety and reasonable energy density for cost and footprint reduction. Here, we report a rechargeable manganese-hydrogen battery, where the cathode is cycled between soluble Mn2+ and solid MnO2 with a two-electron reaction, and the anode is cycled between H2 gas and H2O through well-known catalytic reactions of hydrogen evolution and oxidation. This battery chemistry exhibits a discharge voltage of 1.3 V, a rate capability of 100 mA cm-2 (36 s of discharge) and a lifetime of more than 10,000 cycles without decay. We achieve a gravimetric energy density of 139 Wh kg-1 (volumetric energy density of 210 Wh l-1), with the theoretical gravimetric energy density of 174 Wh kg-1 (volumetric energy density of 263 Wh l-1) in a 4 M MnSO4 electrolyte. The manganese-hydrogen battery involves low-cost abundant materials and has the potential to be scaled up for large-scale energy storage.
MJO: Asymptotically-Nondivergent Nonlinear Wave?: A Review
NASA Astrophysics Data System (ADS)
Yano, J. I.
2014-12-01
The MJO is often considered a convectively-coupled wave. The present talk argues that it is best understood primarily as a nonlinear solitary wave dominated by vorticity. The role of convection is secondary, though likely catalytic. According to Charney's (1963) scale analysis, the large-scale tropical circulations are nondivergent to the leading order, i.e., dominated by rotational flows. Yano et al. (2009) demonstrate that this is indeed the case for a period dominated by three MJO events. The scale analysis of Yano and Bonazzola (2009, JAS) demonstrates that such an asymptotically nondivergent regime is a viable alternative to the traditionally-believed equatorial-wave regime. Wedi and Smolarkiewicz (2010, JAS), in turn, show by numerical computations of a dry system that an MJO-like oscillation of a similar period can indeed be generated by free solitary nonlinear equatorial Rossby-wave dynamics, without any convective forcing of the system. Unfortunately, this perspective has been slow to gain acceptance, with minds so firmly fixed on the role of convection. This situation may be compared to the slow historical acceptance of Eady and Charney's baroclinic instability, simply because it does not invoke any convection. Ironically, once the nonlinear free-wave view of the MJO is accepted, interpretations can more easily be developed for a recent series of numerical model experiments under a global channel configuration over the tropics, with a high resolution of 5-50 km, with or without convection parameterization. All those experiments tend to reproduce the observed large-scale circulations associated with the MJO rather well, though most of the time they fail to reproduce the convective coherency associated with the MJO. These large-scale circulations appear to be generated by lateral forcing imposed at the latitudinal walls. These lateral boundaries are reasonably far away (30°N/S), enough to preclude any direct influence on the tropics. There is no linear dry equatorial wave that supports this period either.
In Wedi and Smolarkiewicz's analysis, such lateral forcing is essential in order to obtain their nonlinear solitary wave solution. This is thus the leading-order solution for the MJO, in the same sense that the linear baroclinic instability is a leading-order solution for the midlatitude synoptic-scale storm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Tristan G.; Chapman, Nicholas L.; Novak, Giles
2014-04-01
The Balloon-borne Large Aperture Submillimeter Telescope for Polarimetry (BLASTPol) was created by adding polarimetric capability to the BLAST experiment that was flown in 2003, 2005, and 2006. BLASTPol inherited BLAST's 1.8 m primary and its Herschel/SPIRE heritage focal plane that allows simultaneous observation at 250, 350, and 500 μm. We flew BLASTPol in 2010 and again in 2012. Both were long duration Antarctic flights. Here we present polarimetry of the nearby filamentary dark cloud Lupus I obtained during the 2010 flight. Despite limitations imposed by the effects of a damaged optical component, we were able to clearly detect submillimeter polarization on degree scales. We compare the resulting BLASTPol magnetic field map with a similar map made via optical polarimetry. (The optical data were published in 1998 by J. Rizzo and collaborators.) The two maps partially overlap and are reasonably consistent with one another. We compare these magnetic field maps to the orientations of filaments in Lupus I, and we find that the dominant filament in the cloud is approximately perpendicular to the large-scale field, while secondary filaments appear to run parallel to the magnetic fields in their vicinities. This is similar to what is observed in Serpens South via near-IR polarimetry, and consistent with what is seen in MHD simulations by F. Nakamura and Z. Li.
Let them fall where they may: congruence analysis in massive phylogenetically messy data sets.
Leigh, Jessica W; Schliep, Klaus; Lopez, Philippe; Bapteste, Eric
2011-10-01
Interest in congruence in phylogenetic data has largely focused on issues affecting multicellular organisms, and animals in particular, in which the level of incongruence is expected to be relatively low. In addition, assessment methods developed in the past have been designed for reasonably small numbers of loci and scale poorly for larger data sets. However, there are currently over a thousand complete genome sequences available and of interest to evolutionary biologists, and these sequences are predominantly from microbial organisms, whose molecular evolution is much less frequently tree-like than that of multicellular life forms. As such, the level of incongruence in these data is expected to be high. We present a congruence method that accommodates both very large numbers of genes and high degrees of incongruence. Our method uses clustering algorithms to identify subsets of genes based on similarity of phylogenetic signal. It involves only a single phylogenetic analysis per gene, and therefore, computation time scales nearly linearly with the number of genes in the data set. We show that our method performs very well with sets of sequence alignments simulated under a wide variety of conditions. In addition, we present an analysis of core genes of prokaryotes, often assumed to have been largely vertically inherited, in which we identify two highly incongruent classes of genes. This result is consistent with the complexity hypothesis.
On the power law of passive scalars in turbulence
NASA Astrophysics Data System (ADS)
Gotoh, Toshiyuki; Watanabe, Takeshi
2015-11-01
It has long been considered that the moments of the scalar increment with separation distance r obey a power law with scaling exponents in the inertial-convective range, and that the exponents are insensitive to variation of the pumping of scalar fluctuations at large scales; thus the scaling exponents are universal. We examine the scaling behavior of the moments of increments of passive scalars 1 and 2 by using DNS with up to 4096³ grid points. They are simultaneously convected by the same isotropic steady turbulence at Rλ = 805, but excited by two different methods. Scalar 1 is excited by random scalar injection which is isotropic, Gaussian and white in time in a low-wavenumber band, while scalar 2 is excited by a uniform mean scalar gradient. It is found that the local scaling exponents of scalar 1 have a logarithmic correction, meaning that the moments of scalar 1 do not obey a simple power law. On the other hand, the moments of scalar 2 are found to obey a well-developed power law with exponents consistent with those in the literature. Physical reasons for the difference are explored. Grants-in-Aid for Scientific Research 15H02218 and 26420106, NIFS14KNSS050, HPCI project hp150088 and hp140024, JHPCN project jh150012.
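The distinction between a simple power law and one with a logarithmic correction can be illustrated numerically: for S(r) = r^ζ (ln r)^α the local scaling exponent d ln S / d ln r equals ζ + α/ln r and drifts with r, whereas a pure power law gives a flat exponent. A synthetic sketch (ζ and α chosen arbitrarily, not fitted to the DNS):

```python
import math

def local_exponent(s, r, i):
    """Central-difference estimate of d ln S / d ln r at index i."""
    return (math.log(s[i + 1]) - math.log(s[i - 1])) / \
           (math.log(r[i + 1]) - math.log(r[i - 1]))

zeta, alpha = 0.65, 0.5
r = [math.exp(0.1 * x) for x in range(10, 101)]            # ln r from 1 to 10
pure = [ri**zeta for ri in r]                              # simple power law
corrected = [ri**zeta * math.log(ri)**alpha for ri in r]   # log correction

z_pure = [local_exponent(pure, r, i) for i in range(1, len(r) - 1)]
z_corr = [local_exponent(corrected, r, i) for i in range(1, len(r) - 1)]

print(max(z_pure) - min(z_pure) < 1e-9)   # flat local exponent
print(z_corr[0] > z_corr[-1])             # drifting local exponent
```

This drift of the local exponent with separation is the signature one would look for when deciding whether a measured structure function obeys a simple power law.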
WISC-IV and WIAT-II profiles in children with high-functioning autism.
Mayes, Susan Dickerson; Calhoun, Susan L
2008-03-01
Children with high-functioning autism earned above normal scores on the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) Perceptual Reasoning and Verbal Comprehension Indexes and below normal scores on the Working Memory and Processing Speed Indexes and Wechsler Individual Achievement Test-Second Edition (WIAT-II) Written Expression. Full Scale IQ (FSIQ) and reading and math scores were similar to the norm. Profiles were consistent with previous WISC-III research, except that the new WISC-IV motor-free visual reasoning subtests (Matrix Reasoning and Picture Concepts) were the highest of the nonverbal subtests. The WISC-IV may be an improvement over the WISC-III for children with high-functioning autism because it captures their visual reasoning strength, while identifying their attention, graphomotor, and processing speed weaknesses. FSIQ was the best single predictor of academic achievement.
Pruett, John R.; Kandala, Sridhar; Petersen, Steven E.; Povinelli, Daniel J.
2015-01-01
Understanding the underpinnings of social responsiveness and theory of mind (ToM) will enhance our knowledge of autism spectrum disorder (ASD). We hypothesize that higher-order relational reasoning (higher-order RR: reasoning necessitating integration of relationships among multiple variables) is necessary but not sufficient for ToM, and that social responsiveness varies independently of higher-order RR. A pilot experiment tested these hypotheses in n = 17 children, 3–14, with and without ASD. No child failing 2nd-order RR passed a false belief ToM test. Contrary to prediction, Social Responsiveness Scale scores did correlate with 2nd-order RR performance, likely due to sample characteristics. It is feasible to translate this comparative cognition-inspired line of inquiry for full-scale studies of ToM, higher-order RR, and social responsiveness in ASD. PMID:25630898
Pruett, John R; Kandala, Sridhar; Petersen, Steven E; Povinelli, Daniel J
2015-07-01
Understanding the underpinnings of social responsiveness and theory of mind (ToM) will enhance our knowledge of autism spectrum disorder (ASD). We hypothesize that higher-order relational reasoning (higher-order RR: reasoning necessitating integration of relationships among multiple variables) is necessary but not sufficient for ToM, and that social responsiveness varies independently of higher-order RR. A pilot experiment tested these hypotheses in n = 17 children, 3-14, with and without ASD. No child failing 2nd-order RR passed a false belief ToM test. Contrary to prediction, Social Responsiveness Scale scores did correlate with 2nd-order RR performance, likely due to sample characteristics. It is feasible to translate this comparative cognition-inspired line of inquiry for full-scale studies of ToM, higher-order RR, and social responsiveness in ASD.
Spatiotemporal Thinking in the Geosciences
NASA Astrophysics Data System (ADS)
Shipley, T. F.; Manduca, C. A.; Ormand, C. J.; Tikoff, B.
2011-12-01
Reasoning about spatial relations is a critical skill for geoscientists. Within the geosciences, different disciplines may reason about different sorts of relationships. These relationships may span vastly different spatial and temporal scales (from the spatial alignment of atoms in crystals to the changes in the shape of plates). As part of work in a research center on spatial thinking in STEM education, we have been working to classify the spatial skills required in geology, develop tests for each spatial skill, and develop the cognitive science tools to promote the critical spatial reasoning skills. Research in psychology, neurology and linguistics supports a broad classification of spatial skills along two dimensions: one versus many objects (which roughly translates to object-focused and navigation-focused skills) and static versus dynamic spatial relations. The talk will focus on the interaction of space and time in spatial cognition in the geosciences. We are working to develop measures of skill in visualizing spatiotemporal changes. A new test developed to measure visualization of brittle deformation will be presented. This is a skill that has not been clearly recognized in the cognitive science research domain, and it thus illustrates the value of interdisciplinary work that combines the geosciences with the cognitive sciences. Teaching spatiotemporal concepts can be challenging. Recent theoretical work suggests analogical reasoning can be a powerful tool to aid students learning to reason about temporal relations using spatial skills. Recent work in our lab has found that progressive alignment of spatial and temporal scales promotes accurate reasoning about temporal relations at geological time scales.
Murphy, Lexa K; Compas, Bruce E; Reeslund, Kristen L; Gindville, Melissa C; Mah, May Ling; Markham, Larry W; Jordan, Lori C
2017-01-01
The objective of this study is to investigate cognitive and attentional function in adolescents and young adults with operated congenital heart disease. Previous research has indicated that children with congenital heart disease have deficits in broad areas of cognitive function. However, less attention has been given to survivors as they grow into adolescence and early adulthood. The participants were 18 non-syndromic adolescents and young adults with tetralogy of Fallot and d-transposition of the great arteries that required cardiac surgery before the age of 5 years, and 18 healthy, unaffected siblings (11-22 years of age for both groups). Cases with congenital heart disease and their siblings were administered Wechsler Intelligence scales and reported attention problems using the Achenbach System of Empirically Based Assessments. Cases were compared to both healthy siblings and established norms. Cases performed significantly lower than siblings on full scale IQ and processing speed, and significantly lower than norms on perceptual reasoning. Cases also reported more attention problems compared to both siblings and norms. Effect sizes varied with medium-to-large effects for processing speed, perceptual reasoning, working memory, and attention problems. Findings suggest that neurocognitive function may continue to be affected for congenital heart disease survivors in adolescence and young adulthood, and that comparisons to established norms may underestimate neurocognitive vulnerabilities.
Mapping canopy gap fraction and leaf area index at continent-scale from satellite lidar
NASA Astrophysics Data System (ADS)
Mahoney, C.; Hopkinson, C.; Held, A. A.
2015-12-01
Information on canopy cover is essential for understanding spatial and temporal variability in vegetation biomass, local meteorological processes, and hydrological transfers within vegetated environments. Gap fraction (GF), an index of canopy cover, is often derived over large areas (hundreds of km²) via airborne laser scanning (ALS), and such estimates are reasonably well understood. However, obtaining country-wide estimates is challenging due to the lack of spatially distributed point cloud data. The Geoscience Laser Altimeter System (GLAS) removes spatial limitations; however, its large-footprint nature and continuous-waveform measurements make derivation of GF challenging. ALS data from three Australian sites are used as a basis to scale up GF estimates to GLAS footprint data through a physically-based Weibull function. Spaceborne estimates of GF are employed in conjunction with supplementary predictor variables in the predictive Random Forest algorithm to yield country-wide estimates at 250 m spatial resolution; the country-wide estimates are accompanied by uncertainties at the pixel level. Preliminary estimates of effective leaf area index (eLAI) are also presented by converting GF via the Beer-Lambert law, where an extinction coefficient of 0.5 is employed, deemed acceptable at such spatial scales. Wide-scale quantification of GF and eLAI is key to the assessment and modification of current forest management strategies across Australia. Such work also assists Australia's Terrestrial Ecosystem Research Network (TERN), a key asset to policy makers with regard to the management of the national ecosystem, in fulfilling their government-issued mandates.
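The GF-to-eLAI conversion mentioned above follows directly from the Beer-Lambert law, GF = exp(−k·eLAI), inverted as eLAI = −ln(GF)/k. A minimal sketch using the extinction coefficient k = 0.5 stated in the abstract:

```python
import math

def elai_from_gap_fraction(gf, k=0.5):
    """Invert the Beer-Lambert law GF = exp(-k * eLAI) for eLAI.

    gf: gap fraction in (0, 1]; k: extinction coefficient (0.5 per the
    abstract, deemed acceptable at coarse spatial scales).
    """
    if not 0.0 < gf <= 1.0:
        raise ValueError("gap fraction must lie in (0, 1]")
    return -math.log(gf) / k

print(round(elai_from_gap_fraction(0.5), 3))   # -> 1.386 (half-open canopy)
print(round(elai_from_gap_fraction(0.2), 3))   # -> 3.219 (dense canopy)
```

Denser canopies (smaller gap fraction) map to larger eLAI, and the chosen k rescales the whole curve, which is why the extinction coefficient is the key assumption in this conversion.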
A network property necessary for concentration robustness
NASA Astrophysics Data System (ADS)
Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran
2016-10-01
Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications.
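The phenomenon itself is easy to illustrate with a classic mass-action example (the Shinar-Feinberg network A + B → 2B, B → A, used here only to illustrate concentration robustness, not the structural condition the abstract derives). At any positive steady state, k1·a·b = k2·b forces a = k2/k1, independent of the total amount of material:

```python
# Classic absolute-concentration-robustness example under mass action:
#   A + B -> 2B  (rate k1 * a * b)
#   B -> A       (rate k2 * b)
# The steady-state concentration of A is pinned at k2/k1 for any total.
def steady_state_a(a0, b0, k1=2.0, k2=1.0, dt=1e-3, steps=50_000):
    a, b = a0, b0
    for _ in range(steps):
        flux1 = k1 * a * b   # A + B -> 2B consumes A, produces B
        flux2 = k2 * b       # B -> A consumes B, produces A
        a += dt * (flux2 - flux1)
        b += dt * (flux1 - flux2)
    return a

# Different totals, same steady-state concentration of A (k2/k1 = 0.5).
print(round(steady_state_a(1.0, 1.0), 3), round(steady_state_a(5.0, 1.0), 3))  # -> 0.5 0.5
```

Targeted experiments of the kind the abstract proposes amount to perturbing such totals and checking which concentrations stay pinned.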
Urban parasitology: visceral leishmaniasis in Brazil.
Harhay, Michael O; Olliaro, Piero L; Costa, Dorcas Lamounier; Costa, Carlos Henrique Nery
2011-09-01
Since the early 1980s, visceral leishmaniasis (VL) which is, in general, a rural zoonotic disease, has spread to the urban centers of the north, and now the south and west of Brazil. The principal drivers differ between cities, though human migration, large urban canid populations (animal reservoir), and a decidedly peripatetic and adaptable sand fly vector are the primary forces. The exact number of urban cases remains unclear as a result of challenges with surveillance. However, the number of urban cases registered continues to increase annually. Most control initiatives (e.g. culling infected dogs and household spraying to kill the sand fly) could be effective, but have proven hard to maintain at large scales due to logistical, financial and other reasons. In this article, the urbanization of VL in Brazil is reviewed, touching on these and other topics related to controlling VL within and outside Brazil. Copyright © 2011 Elsevier Ltd. All rights reserved.
Forbes, Barbara; Kepe, Thembela
2015-02-01
Agriculture's large share of Tanzanian GDP and the large percentage of the rural poor engaged in the sector make it a focus for many development projects aimed at reducing rural poverty. This paper uses the case of the Kamachumu community, where a dairy cow loan project was implemented using the heifer-in-trust (HIT) model. This study finds that productivity is limited by how the cows are managed, particularly with many animals not having ad lib access to drinking water. The paper explores reasons why farmers do or do not provide their cows with unlimited access to drinking water. The study concludes that farmers face many barriers, including water accessibility, education and training, infrastructure, simple negligence, and security. These results suggest an increase in extension services, and national and local livestock policies that consider the specific realities of small-scale dairy farmers.
A network property necessary for concentration robustness.
Eloundou-Mbebi, Jeanne M O; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran
2016-10-19
Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications.
A network property necessary for concentration robustness
Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran
2016-01-01
Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications. PMID:27759015
WIC's promotion of infant formula in the United States
Kent, George
2006-01-01
Background The United States' Special Supplemental Nutrition Program for Women, Infants and Children (WIC) distributes about half the infant formula used in the United States at no cost to the families. This is a matter of concern because it is known that feeding with infant formula results in worse health outcomes for infants than breastfeeding. Discussion The evidence that is available indicates that the WIC program has the effect of promoting the use of infant formula, thus placing infants at higher risk. Moreover, the program violates the widely accepted principles that have been set out in the International Code of Marketing of Breast-milk Substitutes and in the human right to adequate food. Summary There is no good reason for an agency of government to distribute large quantities of free infant formula. It is recommended that the large-scale distribution of free infant formula by the WIC program should be phased out. PMID:16722534
McCall, Catherine; McCall, W Vaughn
2012-10-01
Psychiatric medications such as antidepressants, antipsychotics, and anticonvulsants are commonly prescribed by physicians for the off-label use of improving sleep. Reasons for preferential prescription of these medications over FDA-approved insomnia drugs may include a desire to treat concurrent sleep problems and psychiatric illness with a single medication, and/or an attempt to avoid hypnotic drugs due to their publicized side effects. However, there have been few large studies demonstrating the efficacy and safety of most off-label medications prescribed to treat insomnia. In addition, many of these medications have significant known side effect profiles themselves. Here we review the pertinent research studies published in recent years on antidepressant, antipsychotic, and anticonvulsant medications frequently prescribed for sleep difficulties. Although there have been few large-scale studies for most of these medications, some may be appropriate in the treatment of sleep issues in specific well-defined populations.
Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementing of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed as a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters. PMID:24634612
Killer whales and whaling: the scavenging hypothesis.
Whitehead, Hal; Reeves, Randall
2005-12-22
Killer whales (Orcinus orca) frequently scavenged from the carcasses produced by whalers. This practice became especially prominent with large-scale mechanical whaling in the twentieth century, which provided temporally and spatially clustered floating carcasses associated with loud acoustic signals. The carcasses were often of species of large whale preferred by killer whales but that normally sink beyond their diving range. In the middle years of the twentieth century floating whaled carcasses were much more abundant than those resulting from natural mortality of whales, and we propose that scavenging killer whales multiplied through diet shifts and reproduction. During the 1970s the numbers of available carcasses fell dramatically with the cessation of most whaling (in contrast to a reasonably stable abundance of living whales), and the scavenging killer whales needed an alternative source of nutrition. Diet shifts may have triggered declines in other prey species, potentially affecting ecosystems, as well as increasing direct predation on living whales.
Do faults trigger folding in the lithosphere?
NASA Astrophysics Data System (ADS)
Gerbault, Muriel; Burov, Eugenii B.; Poliakov, Alexei N. B.; Daignières, Marc
A number of observations reveal large periodic undulations within the oceanic and continental lithospheres. Whether these observations result from large-scale compressive instabilities, i.e. buckling, remains an open question. In this study, we support the buckling hypothesis by direct numerical modeling. We compare our results with data on the three most prominent cases of oceanic and continental folding-like deformation (Indian Ocean, Western Gobi (Central Asia), and Central Australia). We demonstrate that under reasonable tectonic stresses, folds can develop from brittle faults cutting through the brittle parts of a lithosphere. The predicted wavelengths and finite growth rates are in agreement with observations. We also show that within a continental lithosphere with thermal age greater than 400 Myr, either a bi-harmonic mode (two superimposed wavelengths, one crustal and one mantle) or a coupled mode (mono-layer deformation) of inelastic folding can develop, depending on the strength and thickness of the lower crust.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis-function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as applications of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
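The simplest of the regularization methods the abstract alludes to is zeroth-order Tikhonov regularization, which selects among the many data-fitting solutions the one with small norm. A minimal numerical sketch, with an invented ill-conditioned operator G standing in for the observation functionals:

```python
import numpy as np

def tikhonov(G, d, alpha):
    """Regularized solution of the ill-posed system G m ~ d:
    minimize ||G m - d||^2 + alpha^2 ||m||^2 (zeroth-order Tikhonov)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)

# Ill-conditioned toy "sampling" operator: monomials up to degree 7
rng = np.random.default_rng(0)
G = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
m_true = rng.standard_normal(8)
d = G @ m_true + 1e-3 * rng.standard_normal(20)   # noisy observations

m_naive = np.linalg.lstsq(G, d, rcond=None)[0]    # noise-amplifying fit
m_reg = tikhonov(G, d, alpha=1e-2)                # damped, stable fit
print(np.linalg.norm(m_reg), np.linalg.norm(m_naive))
```

In the SVD picture, Tikhonov replaces each factor 1/s_i by s_i/(s_i^2 + alpha^2), so the regularized solution norm never exceeds the least-squares one; the choice of alpha encodes exactly the a priori assumption the abstract discusses.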
NASA Astrophysics Data System (ADS)
Deshotel, M.; Habib, E. H.
2016-12-01
There is an increasing desire by the water education community to use emerging research resources and technological advances in order to reform current educational practices. Recent years have witnessed some exemplary developments that tap into emerging hydrologic modeling and data sharing resources, innovative digital and visualization technologies, and field experiences. However, such attempts remain largely individual efforts and fall short of providing scalable and sustainable solutions. This can be attributed to a number of reasons, such as inadequate experience with modeling and data-based educational developments, lack of faculty time to invest in further developments, and lack of resources to further support the project. Another important but often-overlooked reason is the lack of adequate insight into the actual needs of end users of such developments. Such insight is highly critical to informing how to scale and sustain educational innovations. In this presentation, we share with the hydrologic community experiences gathered from an ongoing experiment in which the authors engaged in a hypothesis-driven, customer-discovery process to inform the scalability and sustainability of educational innovations in the field of hydrology and water resources education. The experiment is part of a program called Innovation Corps for Learning (I-Corps L). This program follows a business model approach in which a value proposition is initially formulated for the educational innovation. The authors then engaged in a hypothesis-validation process through an intense series of customer interviews with different segments of potential end users, including junior/senior students, student interns, and hydrology professors. The authors also sought insight from engineering firms by interviewing junior engineers and their supervisors to gather feedback on the preparedness of graduating engineers entering the workforce in the area of water resources.
Exploring the large landscape of potential users is critical in formulating a user-driven approach that can inform the innovation development. The presentation shares the results of this experiment and the insight gained and discusses how such information can inform the community on sustaining and scaling hydrology educational developments.
NASA Astrophysics Data System (ADS)
Zhu, Q.; Zhuang, Q.; Henze, D.; Bowman, K.; Chen, M.; Liu, Y.; He, Y.; Matsueda, H.; Machida, T.; Sawa, Y.; Oechel, W.
2014-09-01
Regional net carbon fluxes of terrestrial ecosystems could be estimated with either biogeochemistry models by assimilating surface carbon flux measurements or atmospheric CO2 inversions by assimilating observations of atmospheric CO2 concentrations. Here we combine ecosystem biogeochemistry modeling and atmospheric CO2 inverse modeling to investigate the magnitude and spatial distribution of terrestrial ecosystem CO2 sources and sinks. First, we constrain a terrestrial ecosystem model (TEM) at the site level by assimilating the observed net ecosystem production (NEP) for various plant functional types. We find that the uncertainties of model parameters are reduced by up to 90% and model predictability is greatly improved for all the plant functional types (coefficients of determination are enhanced up to 0.73). We then extrapolate the model to a global scale at a 0.5° × 0.5° resolution to estimate the large-scale terrestrial ecosystem CO2 fluxes, which serve as the prior for the atmospheric CO2 inversion. Second, we constrain the large-scale terrestrial CO2 fluxes by assimilating the GLOBALVIEW-CO2 and mid-tropospheric CO2 retrievals from the Atmospheric Infrared Sounder (AIRS) into an atmospheric transport model (GEOS-Chem). The transport inversion estimates that: (1) the annual terrestrial ecosystem carbon sink in 2003 is -2.47 Pg C yr-1, which agrees reasonably well with the most recent inter-comparison studies of CO2 inversions (-2.82 Pg C yr-1); (2) the North America temperate, Europe, and Eurasia temperate regions act as major terrestrial carbon sinks; and (3) the posterior transport model is able to reasonably reproduce the atmospheric CO2 concentrations, which are validated against Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) CO2 concentration data. This study indicates that biogeochemistry modeling or atmospheric transport and inverse modeling alone might not be able to quantify regional terrestrial carbon fluxes well.
However, combining the two modeling approaches and assimilating data of surface carbon flux as well as atmospheric CO2 mixing ratios might significantly improve the quantification of terrestrial carbon fluxes.
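In the linear Gaussian case, the inversion step described above reduces to the standard Bayesian update x_post = x_prior + K(y - H x_prior) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. The toy dimensions and random transport operator below are illustrative stand-ins, not GEOS-Chem quantities.

```python
import numpy as np

def bayesian_inversion(x_prior, B, H, y, R):
    """Linear Gaussian flux inversion: prior fluxes x_prior (covariance B),
    observations y (covariance R), transport operator H mapping fluxes to
    concentrations. Returns posterior mean and covariance."""
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # Kalman-type gain
    x_post = x_prior + K @ (y - H @ x_prior)
    A_post = (np.eye(len(x_prior)) - K @ H) @ B
    return x_post, A_post

# Toy setup: 3 regional fluxes observed through 5 mixing-ratio measurements
rng = np.random.default_rng(1)
H = rng.standard_normal((5, 3))
x_true = np.array([-2.0, 0.5, -1.0])          # hypothetical sinks/sources
y = H @ x_true + 0.01 * rng.standard_normal(5)
x_post, A_post = bayesian_inversion(
    np.zeros(3), np.eye(3), H, y, 0.01**2 * np.eye(5))
print(x_post)   # pulled from the zero prior toward x_true
```

The shrinkage of the posterior covariance A_post relative to the prior B quantifies the uncertainty reduction that the abstract reports for the assimilated parameters.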
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package are described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error, which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked-list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active-layer depth in northern Canada is presented to illustrate the use and application of these ideas.
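The core operation of any Dempster-Shafer classifier is Dempster's rule of combination, which fuses mass functions from independent evidence sources and renormalizes away conflicting mass. A minimal sketch (in Python for brevity, though MERCURY⊕ itself is written in C; the land-cover frame of discernment and mass values are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. Mass assigned to empty intersections is
    conflict, removed by renormalization."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two evidence sources over hypothetical land-cover classes:
m1 = {frozenset({"tundra"}): 0.6, frozenset({"tundra", "forest"}): 0.4}
m2 = {frozenset({"forest"}): 0.3, frozenset({"tundra", "forest"}): 0.7}
m12 = dempster_combine(m1, m2)
print(m12[frozenset({"tundra"})])   # 0.42/0.82 ≈ 0.512
```

Note how mass can sit on the compound set {tundra, forest}, representing ignorance; this is what lets the framework handle disparate variables that a conventional Bayesian prior cannot.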
Differences in cognitive and emotional processes between persecutory and grandiose delusions.
Garety, Philippa A; Gittins, Matthew; Jolley, Suzanne; Bebbington, Paul; Dunn, Graham; Kuipers, Elizabeth; Fowler, David; Freeman, Daniel
2013-05-01
Cognitive models propose that cognitive and emotional processes, in the context of anomalies of experience, lead to and maintain delusions. No large-scale studies have investigated whether persecutory and grandiose delusions reflect differing contributions of reasoning and affective processes. This is complicated by their frequent co-occurrence in schizophrenia. We hypothesized that persecutory and grandiose subtypes would differ significantly in their associations with psychological processes. Participants were the 301 patients from the Psychological Prevention of Relapse in Psychosis Trial (ISRCTN83557988). Persecutory delusions were present in 192 participants and grandiose delusions in 97, while 58 were rated as having delusions both of persecution and grandiosity. Measures of emotional and reasoning processes, at baseline only, were employed. A bivariate response model was used. Negative self-evaluations, depression, and anxiety predicted a significantly increased chance of persecutory delusions, whereas grandiose delusions were predicted by less negative self-evaluations and lower anxiety and depression, along with higher positive self- and positive other-evaluations. Reasoning biases were common in the whole group and in categorically defined subgroups with only persecutory delusions and only grandiose delusions; however, jumping to conclusions and belief flexibility differed significantly between the two groups, the grandiose group having a higher likelihood of showing a reasoning bias than the persecutory group. The significant differences in the processes associated with these two delusion subtypes have implications for etiology and for the development of targeted treatment strategies.
To manage inland fisheries is to manage at the social-ecological watershed scale.
Nguyen, Vivian M; Lynch, Abigail J; Young, Nathan; Cowx, Ian G; Beard, T Douglas; Taylor, William W; Cooke, Steven J
2016-10-01
Approaches to managing inland fisheries vary between systems and regions but are often based on large-scale marine fisheries principles and are thus limited and outdated. Rarely do they adopt holistic approaches that consider the complex interplay among humans, fish, and the environment. We argue that there is an urgent need for a shift in inland fisheries management towards holistic and transdisciplinary approaches that embrace the principles of social-ecological systems at the watershed scale. The interconnectedness of inland fisheries with their associated watersheds (biotic, abiotic, and human components) makes them extremely complex and challenging to manage and protect. For this reason, the watershed is a logical management unit. To assist management at this scale, we propose a framework that integrates disparate concepts and management paradigms to facilitate inland fisheries management and sustainability. We contend that inland fisheries need to be managed as a social-ecological watershed system (SEWS). The framework supports watershed-scale and transboundary governance to manage inland fisheries, and transdisciplinary projects and teams to ensure relevant and applicable monitoring and research. We discuss concepts of social-ecological feedback and interactions of multiple stressors and factors within and between the social-ecological systems. Moreover, we emphasize that management, monitoring, and research on inland fisheries at the watershed scale are needed to ensure long-term sustainable and resilient fisheries. Copyright © 2016. Published by Elsevier Ltd.
Space and time scales of shoreline change at Cape Cod National Seashore, MA, USA
Allen, J.R.; LaBash, C.L.; List, J.H.; Kraus, Nicholas C.; McDougal, William G.
1999-01-01
Different processes cause patterns of shoreline change that are exhibited at different magnitudes and nested into different spatial and temporal hierarchies. The 77-km outer beach at Cape Cod National Seashore offers one of the few federally owned stretches of U.S. beach on which to study shoreline change across the full range of sediment source and sink relationships, barely affected by human intervention. "Mean trends" of shoreline change are best observed at long time scales but contain much spatial variation; thus many sites differ in response. Long-term trends noted earlier are confirmed, but the added quantification and resolution greatly improve understanding of the appropriate spatial and time scales of the processes driving bluff retreat and barrier-island changes in both the northern and southern depocenters. Shorter time scales allow comparison of trends and uncertainty in shoreline change at local scales but depend on some measure of storm intensity and seasonal frequency. Single-event shoreline surveys for one storm, made at daily intervals after the erosional phase, suggest a recovery time for the system of six days, identify three sites with abnormally large change, and show that responses at these sites are spatially coherent for as yet unknown reasons. Areas near inlets are the most variable at all time scales. Hierarchies in both process and form are suggested.
Reasons for Substance Use: A Comparative Study of Alcohol Use in Tribals and Non-tribals
Sreeraj, V. S.; Prasad, Surjit; Khess, Christoday Raja Jayant; Uvais, N. A.
2012-01-01
Background: Consumption of alcohol has been attributed to different reasons by consumers. Attitudes toward, and knowledge about, the substance and addiction can be influenced by an individual's cultural background. The tribal population, in which alcohol intake is culturally accepted, may hold different beliefs and attributions for drinking. This study examines the reasons for alcohol intake, beliefs about addiction, and their effect on the severity of addiction in people with different ethnic backgrounds. Materials and Methods: The study was conducted at a psychiatric institute with a cross-sectional design. The study population included patients from Jharkhand state, twenty each from tribal and non-tribal communities. Patients fulfilling the ICD-10 diagnostic criteria for mental and behavioral disorders due to alcohol dependence syndrome, with active dependence, were included, excluding those with any comorbidity or complications. Subjects were assessed with a specially designed sociodemographic-clinical proforma, a modified version of the Reasons for Substance Use scale, the Addiction Belief scale, and the Alcohol Dependence scale. Statistical Analysis and Results: A significantly higher number of tribals cited reasons associated with social enhancement and coping with distressing emotions, rather than individual enhancement, as reasons for consuming alcohol. Addiction was more severe in those consuming alcohol to cope with distressing emotions. Belief in the free-will model was stronger across both cultures, without any correlation with the reason for intake. This cross-sectional, patient-based design cannot be easily generalized to the community. Conclusion: Societal acceptance and pressure, as well as high levels of emotional problems, appear to be the major etiological factors in the higher prevalence of substance dependence among tribals. Primary prevention should be planned to fit the needs of these ethnic groups. PMID:23439720
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman's agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics.
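The basic Kirman herding mechanism underlying the extended model can be sketched in a few lines: each of N agents holds one of two opinions and switches either idiosyncratically or by recruitment. This sketch omits the paper's variable event time scale, and the parameters eps, h, and N are illustrative choices.

```python
import random

def kirman_step(n_a, N, eps=0.01, h=0.3):
    """One event of Kirman's herding model: an agent in state A (count n_a)
    or state B switches either idiosyncratically (rate eps) or through
    recruitment proportional to the opposite group's fraction (rate h)."""
    p_a_to_b = (n_a / N) * (eps + h * (N - n_a) / N)   # an A-agent flips
    p_b_to_a = ((N - n_a) / N) * (eps + h * n_a / N)   # a B-agent flips
    r = random.random()
    if r < p_a_to_b:
        return n_a - 1
    if r < p_a_to_b + p_b_to_a:
        return n_a + 1
    return n_a

random.seed(0)
N, n_a = 100, 50
trajectory = [n_a]
for _ in range(10_000):
    n_a = kirman_step(n_a, N)
    trajectory.append(n_a)
print(min(trajectory), max(trajectory))   # occupancy stays within [0, N]
```

With small eps the population spends long stretches near consensus at either boundary before switching, the bimodal herding behavior whose macroscopic limit yields the power-law statistics discussed in the abstract.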
NASA Astrophysics Data System (ADS)
Sakaguchi, Koichi; Zeng, Xubin; Christoffersen, Bradley J.; Restrepo-Coupe, Natalia; Saleska, Scott R.; Brando, Paulo M.
2011-03-01
Recent development of general circulation models involves biogeochemical cycles: flows of carbon and other chemical species that circulate through the Earth system. Such models are valuable tools for future projections of climate, but still bear large uncertainties in the model simulations. One of the regions with especially high uncertainty is the Amazon forest where large-scale dieback associated with the changing climate is predicted by several models. In order to better understand the capability and weakness of global-scale land-biogeochemical models in simulating a tropical ecosystem under the present day as well as significantly drier climates, we analyzed the off-line simulations for an east central Amazon forest by the Community Land Model version 3.5 of the National Center for Atmospheric Research and its three independent biogeochemical submodels (CASA', CN, and DGVM). Intense field measurements carried out under Large Scale Biosphere-Atmosphere Experiment in Amazonia, including forest response to drought from a throughfall exclusion experiment, are utilized to evaluate the whole spectrum of biogeophysical and biogeochemical aspects of the models. Our analysis shows reasonable correspondence in momentum and energy turbulent fluxes, but it highlights three processes that are not in agreement with observations: (1) inconsistent seasonality in carbon fluxes, (2) biased biomass size and allocation, and (3) overestimation of vegetation stress to short-term drought but underestimation of biomass loss from long-term drought. Without resolving these issues the modeled feedbacks from the biosphere in future climate projections would be questionable. We suggest possible directions for model improvements and also emphasize the necessity of more studies using a variety of in situ data for both driving and evaluating land-biogeochemical models.
Controlling high-throughput manufacturing at the nano-scale
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2013-09-01
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy, and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications: materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high-quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication, and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes, guided by physics-based models and driven by appropriate instrumentation, sensing, and actuation. This paper discusses these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...
2006-10-05
This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top-of-atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data, as illustrated by the sensitivity tests. Lastly, the possibility of using the European Centre for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.
NASA Astrophysics Data System (ADS)
Yermolaev, Y. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Y.
2017-12-01
This work is a continuation of our previous article (Yermolaev et al. in J. Geophys. Res. 120, 7094, 2015), which describes the average temporal profiles of interplanetary plasma and field parameters in large-scale solar-wind (SW) streams: corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs, including both magnetic clouds (MCs) and ejecta), and sheaths, as well as interplanetary shocks (ISs). As in the previous article, we use the data of the OMNI database, our catalog of large-scale solar-wind phenomena during 1976 - 2000 (Yermolaev et al. in Cosmic Res., 47, 2, 81, 2009), and the method of double superposed epoch analysis (Yermolaev et al. in Ann. Geophys., 28, 2177, 2010a). We rescale the duration of all types of structures in such a way that the beginnings and endings of all of them coincide. We present new detailed results comparing paired phenomena: 1) the two types of compression regions (CIRs vs. sheaths) and 2) the two types of ICMEs (MCs vs. ejecta). The obtained data allow us to suggest that the formation of the two types of compression regions results from the same physical mechanism, regardless of the type of piston (high-speed stream (HSS) or ICME); the differences are connected to the geometry (i.e. the angle between the speed gradient in front of the piston and the satellite trajectory) and the jumps in speed at the edges of the compression regions. In our opinion, one possible reason behind the observed differences in the parameters of MCs and ejecta is that when ejecta are observed, the satellite passes farther from the nose of the ICME than when MCs are observed.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals, whereas sub-scale variations in EV and cloud optical thickness (COT) influence only the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure accurate retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008 and compare the new CDR results with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets, because the cloudbow oscillations in the 145-165° scattering angle region are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g. 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
Global simulations of protoplanetary disks with net magnetic flux. I. Non-ideal MHD case
NASA Astrophysics Data System (ADS)
Béthune, William; Lesur, Geoffroy; Ferreira, Jonathan
2017-04-01
Context. The planet-forming region of protoplanetary disks is cold, dense, and therefore weakly ionized. For this reason, magnetohydrodynamic (MHD) turbulence is thought to be mostly absent, and another mechanism has to be found to explain gas accretion. It has been proposed that magnetized winds, launched from the ionized disk surface, could drive accretion in the presence of a large-scale magnetic field. Aims: The efficiency and the impact of these surface winds on the disk structure is still highly uncertain. We present the first global simulations of a weakly ionized disk that exhibits large-scale magnetized winds. We also study the impact of self-organization, which was previously demonstrated only in non-stratified models. Methods: We perform numerical simulations of stratified disks with the PLUTO code. We compute the ionization fraction dynamically, and account for all three non-ideal MHD effects: ohmic and ambipolar diffusions, and the Hall drift. Simplified heating and cooling due to non-thermal radiation is also taken into account in the disk atmosphere. Results: We find that disks can be accreting or not, depending on the configuration of the large-scale magnetic field. Magnetothermal winds, driven both by magnetic acceleration and heating of the atmosphere, are obtained in the accreting case. In some cases, these winds are asymmetric, ejecting predominantly on one side of the disk. The wind mass loss rate depends primarily on the average ratio of magnetic to thermal pressure in the disk midplane. The non-accreting case is characterized by a meridional circulation, with accretion layers at the disk surface and decretion in the midplane. Finally, we observe self-organization, resulting in axisymmetric rings of density and associated pressure "bumps". The underlying mechanism and its impact on observable structures are discussed.
Groundwater Variability in a Sandstone Catchment and Linkages with Large-scale Climatic Circulation
NASA Astrophysics Data System (ADS)
Hannah, D. M.; Lavers, D. A.; Bradley, C.
2015-12-01
Groundwater is a crucial water resource that sustains river ecosystems and provides public water supply. Furthermore, during periods of prolonged high rainfall, groundwater-dominated catchments can be subject to protracted flooding. Climate change and associated projected increases in the frequency and intensity of hydrological extremes have implications for groundwater levels. This study builds on previous research undertaken on a Chalk catchment by investigating groundwater variability in a UK sandstone catchment: the Tern in Shropshire. In contrast to the Chalk, sandstone is characterised by a more lagged response to precipitation inputs; and, as such, it is important to determine the groundwater behaviour and its links with the large-scale climatic circulation to improve process understanding of recharge, groundwater level and river flow responses to hydroclimatological drivers. Precipitation, river discharge and groundwater levels for borehole sites in the Tern basin over 1974-2010 are analysed as the target variables, and we use monthly gridded reanalysis data from the Twentieth Century Reanalysis Project (20CR). First, groundwater variability is evaluated and associations with precipitation/discharge are explored using monthly concurrent and lagged correlation analyses. Second, gridded 20CR reanalysis data are used in composite and correlation analyses to identify the regions of strongest climate-groundwater association. Results show that reasonably strong climate-groundwater connections exist in the Tern basin, with a lag of several months. These lags are associated primarily with the time taken for recharge waters to percolate through to the groundwater table. The uncovered patterns improve knowledge of large-scale climate forcing of groundwater variability and may provide a basis to inform seasonal prediction of groundwater levels, which would be useful for strategic water resource planning.
Scaling of Advanced Theory-of-Mind Tasks.
Osterhaus, Christopher; Koerber, Susanne; Sodian, Beate
2016-11-01
Advanced theory-of-mind (AToM) development was investigated in three separate studies involving 82, 466, and 402 elementary school children (8-, 9-, and 10-year-olds). Rasch and factor analyses assessed whether common conceptual development underlies higher-order false-belief understanding, social understanding, emotion recognition, and perspective-taking abilities. The results refuted a unidimensional scale and revealed three distinct AToM factors: social reasoning, reasoning about ambiguity, and recognizing transgressions of social norms. Developmental progressions emerged for the two reasoning factors but not for recognizing transgressions of social norms. Both social factors were significantly related to inhibition, whereas language development only predicted performance on social reasoning. These findings suggest that AToM comprises multiple abilities, which are subject to distinct cognitive influences. Importantly, only two AToM factors involve conceptual development. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Perry, Jonathan M G; Cooke, Siobhán B; Runestad Connour, Jacqueline A; Burgess, M Loring; Ruff, Christopher B
2018-02-01
Body mass is an important component of any paleobiological reconstruction. Reliable skeletal dimensions for making estimates are desirable but extant primate reference samples with known body masses are rare. We estimated body mass in a sample of extinct platyrrhines and Fayum anthropoids based on four measurements of the articular surfaces of the humerus and femur. Estimates were based on a large extant reference sample of wild-collected individuals with associated body masses, including previously published and new data from extant platyrrhines, cercopithecoids, and hominoids. In general, scaling of joint dimensions is positively allometric relative to expectations of geometric isometry, but negatively allometric relative to expectations of maintaining equivalent joint surface areas. Body mass prediction equations based on articular breadths are reasonably precise, with %SEEs of 17-25%. The breadth of the distal femoral articulation yields the most reliable estimates of body mass because it scales similarly in all major anthropoid taxa. Other joints scale differently in different taxa; therefore, locomotor style and phylogenetic affinity must be considered when calculating body mass estimates from the proximal femur, proximal humerus, and distal humerus. The body mass prediction equations were applied to 36 Old World and New World fossil anthropoid specimens representing 11 taxa, plus two Haitian specimens of uncertain taxonomic affinity. Among the extinct platyrrhines studied, only Cebupithecia is similar to large, extant platyrrhines in having large humeral (especially distal) joints. Our body mass estimates differ from each other and from published estimates based on teeth in ways that reflect known differences in relative sizes of the joints and teeth. We prefer body mass estimators that are biomechanically linked to weight-bearing, and especially those that are relatively insensitive to differences in locomotor style and phylogenetic history. 
Whenever possible, extant reference samples should be chosen to match target fossils in joint proportionality. Copyright © 2017 Elsevier Ltd. All rights reserved.
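The allometric prediction equations described in the abstract above can be illustrated with a short sketch: fit a power law by least squares in log-log space, report the percentage standard error of the estimate (%SEE), and use the fitted line to predict mass from a new articular breadth. The reference values below are invented for illustration only; they are not the study's data, and the fitted coefficients are not the paper's equations.

```python
import numpy as np

# Hypothetical reference sample: distal femoral articular breadth (mm)
# and associated body mass (kg). Illustrative values, not the paper's data.
breadth = np.array([18.0, 22.5, 27.0, 31.5, 36.0, 41.0, 47.0, 54.0])
mass = np.array([2.1, 4.0, 6.9, 10.8, 16.0, 23.5, 35.0, 52.0])

# Allometric prediction equation: log10(mass) = a + b * log10(breadth).
x, y = np.log10(breadth), np.log10(mass)
b, a = np.polyfit(x, y, 1)

# Standard error of the estimate in log10 units, converted to %SEE.
resid = y - (a + b * x)
see = np.sqrt(np.sum(resid**2) / (len(y) - 2))
pct_see = 100.0 * (10.0**see - 1.0)

def predict_mass(breadth_mm):
    """Predict body mass (kg) from articular breadth (mm) via the fit."""
    return 10.0 ** (a + b * np.log10(breadth_mm))

print(f"slope b = {b:.2f} (b = 3 would match geometric isometry)")
print(f"%SEE = {pct_see:.1f}%")
print(f"predicted mass at 30 mm breadth: {predict_mass(30.0):.1f} kg")
```

A slope below 3 on a linear dimension indicates negative allometry relative to maintaining equivalent joint surface areas, which is the kind of scaling comparison the abstract discusses.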
Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia
NASA Astrophysics Data System (ADS)
Ter Maat, H. W.; Hutjes, R. W. A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.
2006-11-01
On continental to regional scales, feedbacks between land use and land cover change and climate have been widely documented over the past 10-15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large-scale (~10^5 ha) irrigated plantations in semi-arid environments may, under particular conditions, affect local circulations and induce additional rainfall. Capturing this rainfall 'surplus' could then reduce the need for external irrigation sources and eventually lead to self-sustained water cycling. This concept is studied in the coastal plains of South West Saudi Arabia, where the mountains of the Asir region exhibit the highest rainfall of the peninsula due to orographic lifting and condensation of moisture imported with the Indian Ocean monsoon and with disturbances from the Mediterranean Sea. We use a regional atmospheric modeling system (RAMS) forced by ECMWF analysis data to resolve the effect of complex surface conditions at high resolution (Δx = 4 km). After validation, these simulations are analysed with a focus on the role of local processes (sea breezes, orographic lifting and the formation of fog in the coastal mountains) in generating rainfall, and on how these will be affected by large-scale irrigated plantations in the coastal desert. The validation showed that the model simulates the regional and local weather reasonably well. The simulations exhibit a slightly larger diurnal temperature range than that captured by the observations, but seem to capture daily sea-breeze phenomena well. Monthly rainfall is well reproduced at coarse resolutions, but appears more localized at high resolutions. The hypothetical irrigated plantation (3.25 × 10^5 ha) has significant effects on atmospheric moisture, but due to weakened sea breezes this leads to only limited increases in rainfall.
In terms of recycling of irrigation water, the rainfall enhancement in this particular setting is rather insignificant.
NASA Astrophysics Data System (ADS)
Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim
2018-02-01
We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical-dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments.
The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
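The simulated annealing step mentioned above can be sketched in a few lines: random perturbations of the parameter vector are accepted with the Metropolis criterion under a geometrically cooling temperature, which lets the search escape local minima of a high-dimensional cost function. The cost function and all parameter values below are hypothetical stand-ins for the actual model-reanalysis misfit; the sketch only illustrates the optimization scheme, not the Aeolus tuning itself.

```python
import math
import random

random.seed(0)

def objective(p):
    # A rugged multimodal cost (Rastrigin-like), standing in for the
    # model-vs-ERA-Interim misfit tuned in the paper. Illustrative only.
    return sum(x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0 for x in p)

def simulated_annealing(cost, dim, steps=20000, t0=10.0):
    """Minimize `cost` via Metropolis-accepted random moves with cooling."""
    state = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best, best_cost = state[:], cost(state)
    current_cost, temp = best_cost, t0
    for _ in range(steps):
        cand = [x + random.gauss(0.0, 0.5) for x in state]
        c = cost(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if c < current_cost or random.random() < math.exp((current_cost - c) / temp):
            state, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= 0.9995  # geometric cooling schedule
    return best, best_cost

params, residual = simulated_annealing(objective, dim=4)
print(f"best cost after annealing: {residual:.3f}")
```

The cooling rate and step size trade off exploration against convergence; in a real tuning exercise they would themselves be chosen by experimentation.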
Validating a strategy for psychosocial phenotyping using a large corpus of clinical text.
Gundlapalli, Adi V; Redd, Andrew; Carter, Marjorie; Divita, Guy; Shen, Shuying; Palmer, Miland; Samore, Matthew H
2013-12-01
To develop algorithms to improve efficiency of patient phenotyping using natural language processing (NLP) on text data. Of a large number of note titles available in our database, we sought to determine those with highest yield and precision for psychosocial concepts. From a database of over 1 billion documents from US Department of Veterans Affairs medical facilities, a random sample of 1500 documents from each of 218 enterprise note titles was chosen. Psychosocial concepts were extracted using a UIMA-AS-based NLP pipeline (v3NLP), using a lexicon of relevant concepts with negation and template format annotators. Human reviewers evaluated a subset of documents for false positives and sensitivity. High-yield documents were identified by hit rate and precision. Reasons for false positivity were characterized. A total of 58 707 psychosocial concepts were identified from 316 355 documents for an overall hit rate of 0.2 concepts per document (median 0.1, range 1.6-0). Of 6031 concepts reviewed from a high-yield set of note titles, the overall precision for all concept categories was 80%, with variability among note titles and concept categories. Reasons for false positivity included templating, negation, context, and alternate meaning of words. The sensitivity of the NLP system was noted to be 49% (95% CI 43% to 55%). Phenotyping using NLP need not involve the entire document corpus. Our methods offer a generalizable strategy for scaling NLP pipelines to large free text corpora with complex linguistic annotations in attempts to identify patients of a certain phenotype.
Validating a strategy for psychosocial phenotyping using a large corpus of clinical text
Gundlapalli, Adi V; Redd, Andrew; Carter, Marjorie; Divita, Guy; Shen, Shuying; Palmer, Miland; Samore, Matthew H
2013-01-01
Objective To develop algorithms to improve efficiency of patient phenotyping using natural language processing (NLP) on text data. Of a large number of note titles available in our database, we sought to determine those with highest yield and precision for psychosocial concepts. Materials and methods From a database of over 1 billion documents from US Department of Veterans Affairs medical facilities, a random sample of 1500 documents from each of 218 enterprise note titles was chosen. Psychosocial concepts were extracted using a UIMA-AS-based NLP pipeline (v3NLP), using a lexicon of relevant concepts with negation and template format annotators. Human reviewers evaluated a subset of documents for false positives and sensitivity. High-yield documents were identified by hit rate and precision. Reasons for false positivity were characterized. Results A total of 58 707 psychosocial concepts were identified from 316 355 documents for an overall hit rate of 0.2 concepts per document (median 0.1, range 1.6–0). Of 6031 concepts reviewed from a high-yield set of note titles, the overall precision for all concept categories was 80%, with variability among note titles and concept categories. Reasons for false positivity included templating, negation, context, and alternate meaning of words. The sensitivity of the NLP system was noted to be 49% (95% CI 43% to 55%). Conclusions Phenotyping using NLP need not involve the entire document corpus. Our methods offer a generalizable strategy for scaling NLP pipelines to large free text corpora with complex linguistic annotations in attempts to identify patients of a certain phenotype. PMID:24169276
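As a back-of-envelope check of the statistics reported in the two abstracts above, the hit rate follows directly from the quoted counts, and the quoted 95% CI for sensitivity is consistent with a standard normal-approximation (Wald) interval. The review sample size `n` below is reverse-engineered so the interval matches the reported 43-55%; it is an assumption for illustration, not a figure given in the paper.

```python
import math

# Counts quoted in the abstract.
concepts, documents = 58_707, 316_355
hit_rate = concepts / documents
print(f"hit rate: {hit_rate:.2f} concepts per document")  # reported as 0.2

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion p observed in n trials."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# n = 270 is a hypothetical sample size chosen to reproduce the quoted CI.
lo, hi = wald_ci(0.49, 270)
print(f"sensitivity 95% CI: {lo:.0%} to {hi:.0%}")  # ~43% to 55%
```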
Planetary Structures And Simulations Of Large-scale Impacts On Mars
NASA Astrophysics Data System (ADS)
Swift, Damian; El-Dasher, B.
2009-09-01
The impact of large meteoroids is a possible cause for isolated orogeny on bodies devoid of tectonic activity. On Mars, there is a significant, but not perfect, correlation between large, isolated volcanoes and antipodal impact craters. On Mercury and the Moon, brecciated terrain and other unusual surface features can be found at the antipodes of large impact sites. On Earth, there is a moderate correlation involving long-lived mantle hotspots at opposite sides of the planet from impact structures, with meteoroid impact suggested as a possible cause. If induced by impacts, the mechanisms of orogeny and volcanism thus appear to vary between these bodies, presumably because of differences in internal structure. Continuum mechanics (hydrocode) simulations have been used to investigate the response of planetary bodies to impacts, requiring assumptions about the structure of the body: its composition and temperature profile, and the constitutive properties (equation of state, strength, viscosity) of the components. We are able to predict theoretically and test experimentally the constitutive properties of matter under planetary conditions, with reasonable accuracy. To provide a reference series of simulations, we have constructed self-consistent planetary structures using simplified compositions (Fe core and basalt-like mantle), which turn out to agree surprisingly well with the moments of inertia. We have performed simulations of large-scale impacts, studying the transmission of energy to the antipodes. For Mars, significant antipodal heating to depths of a few tens of kilometers was predicted from compression waves transmitted through the mantle. Such heating is a mechanism for volcanism on Mars, possibly in conjunction with crustal cracking induced by surface waves. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Results of the Minnesota Multiphasic Personality Inventory-2 among gestational surrogacy candidates.
Klock, Susan C; Covington, Sharon N
2015-09-01
To obtain normative data on the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) personality test for gestational surrogate (GS) candidates. A retrospective study was undertaken through chart review of all GS candidates assessed at Shady Grove Fertility Center, Rockville, MD, USA, between June 2007 and December 2009. Participants completed the MMPI-2 test during screening. MMPI-2 scores, demographic information, and screening outcome were retrieved. Among 153 included candidates, 132 (86.3%) were accepted to be a GS, 6 (3.9%) were ruled out because of medical reasons, and 15 (9.8%) were ruled out because of psychological reasons. The mean scores on each of the MMPI-2 scales were within the normal range. A score of more than 65 (the clinical cutoff) was recorded on the L scale for 46 (30.1%) candidates, on the K scale for 61 (39.9%), and on the S scale for 84 (54.9%). Women who were ruled out for psychological reasons had significantly higher mean scores on the validity scales F and L, and on clinical scale 8 than did women who were accepted (P<0.05 for all). Most GS candidates are well adjusted and free of psychopathology, but candidates tend to present themselves in an overly positive way. Copyright © 2015 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Modeling sediment transport as a spatio-temporal Markov process.
NASA Astrophysics Data System (ADS)
Heyman, Joris; Ancey, Christophe
2014-05-01
Despite a century of research on sediment transport by bedload occurring in rivers, its constitutive laws remain largely unknown; the proof being that our ability to predict mid- to long-term transported volumes within a reasonable confidence interval is almost null. The intrinsic fluctuating nature of bedload transport may be one of the most important reasons why classical approaches fail. A microscopic probabilistic framework has the advantage of taking these fluctuations into account at the particle scale, in order to understand their effect on macroscopic variables such as the sediment flux. In this framework, bedload transport is seen as the random motion of particles (sand, gravel, pebbles...) over a two-dimensional surface (the river bed). The number of particles in motion, as well as their velocities, are random variables. In this talk, we show how a simple birth-death Markov model governing particle motion on a regular lattice accurately reproduces the spatio-temporal correlations observed at the macroscopic level. Entrainment, deposition and transport of particles by the turbulent fluid (air or water) are supposed to be independent and memoryless processes that modify the number of particles in motion. By means of the Poisson representation, we obtain a Fokker-Planck equation that is exactly equivalent to the master equation and thus valid for all cell sizes. The analysis shows that the number of moving particles evolves locally far from thermodynamic equilibrium. Several analytical results are presented and compared to experimental data. The index of dispersion (or variance-over-mean ratio) is proved to grow from unity at small scales to larger values at larger scales, confirming the non-Poissonian behavior of bedload transport. We also study the one- and two-dimensional K-function, which gives the average number of moving particles located in a ball centered at a particle centroid, as a function of the ball's radius.
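A minimal birth-death Markov model of this kind can be simulated exactly with the Gillespie algorithm: particles are entrained at a base rate plus a collective-entrainment term proportional to the number already moving, and deposited at a rate proportional to that number. The collective term is what produces an index of dispersion above unity, as the abstract describes. The rates below are illustrative choices for a single lattice cell, not the values used in the talk.

```python
import random

random.seed(1)

# Entrainment at rate lam + mu*n (mu = collective entrainment),
# deposition at rate sigma*n, where n = number of moving particles.
lam, mu, sigma = 2.0, 0.6, 1.0

def gillespie(t_end, n0=0):
    """Exact stochastic simulation of the birth-death chain up to t_end."""
    t, n = 0.0, n0
    times, states = [0.0], [n0]
    while t < t_end:
        birth = lam + mu * n
        death = sigma * n
        total = birth + death
        t += random.expovariate(total)          # waiting time to next event
        n += 1 if random.random() < birth / total else -1
        times.append(t)
        states.append(n)
    return times, states

times, states = gillespie(t_end=5000.0)

# Time-weighted mean and variance of n over the whole run.
dts = [t2 - t1 for t1, t2 in zip(times, times[1:])]
span = sum(dts)
mean = sum(n * dt for n, dt in zip(states, dts)) / span
var = sum((n - mean) ** 2 * dt for n, dt in zip(states, dts)) / span
print(f"mean number moving: {mean:.2f}")
print(f"index of dispersion: {var / mean:.2f}")  # > 1: super-Poissonian
```

For these rates the stationary mean is lam/(sigma - mu) and the theoretical index of dispersion is sigma/(sigma - mu) = 2.5, so the simulated ratio should land well above the Poissonian value of 1.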
Tresadern, Gary; Agrafiotis, Dimitris K
2009-12-01
Stochastic proximity embedding (SPE) and self-organizing superimposition (SOS) are two recently introduced methods for conformational sampling that have shown great promise in several application domains. Our previous validation studies aimed at exploring the limits of these methods and have involved rather exhaustive conformational searches producing a large number of conformations. However, from a practical point of view, such searches have become the exception rather than the norm. The increasing popularity of virtual screening has created a need for 3D conformational search methods that produce meaningful answers in a relatively short period of time and work effectively on a large scale. In this work, we examine the performance of these algorithms and the effects of different parameter settings at varying levels of sampling. Our goal is to identify search protocols that can produce a diverse set of chemically sensible conformations and have a reasonable probability of sampling biologically active space within a small number of trials. Our results suggest that both SPE and SOS are extremely competitive in this regard and produce very satisfactory results with as few as 500 conformations per molecule. The results improve even further when the raw conformations are minimized with a molecular mechanics force field to remove minor imperfections and any residual strain. These findings provide additional evidence that these methods are suitable for many everyday modeling tasks, both high- and low-throughput.
Air fluorescence detection of large air showers below the horizon
NASA Technical Reports Server (NTRS)
Halverson, P.; Bowen, T.
1985-01-01
In the interest of exploring the cosmic ray spectrum at energies greater than 10^18 eV, where flux rates at the Earth's surface drop below 100 yr^-1 km^-2 sr^-1, cosmic ray physicists have been forced to construct ever larger detectors in order to collect useful amounts of data in reasonable lengths of time. At present, the ultimate example of this trend is the Fly's Eye system in Utah, which uses the atmosphere around an array of skyward-looking photomultiplier tubes. The air acts as a scintillator to give detecting areas as large as 5000 square kilometers sr (for the highest energy events). This experiment has revealed structure (and a possible cutoff) in the ultra-high energy region above 10^19 eV. The success of the Fly's Eye experiment provides impetus for continuing the development of larger detectors to make accessible even higher energies. However, due to the rapidly falling flux, a tenfold increase in observable energy would call for a hundredfold increase in the detecting area, and the cost of expanding the Fly's Eye detecting area will scale approximately linearly with area. It is for these reasons that the authors have proposed a new approach to using the atmosphere as a scintillator; one which will require fewer photomultipliers and less hardware (thus being less expensive), yet will provide position and shower size information.
Rogers, C E; Carini, J L; Pechkis, J A; Gould, P L
2010-01-18
We utilize various techniques to characterize the residual phase modulation of a waveguide-based Mach-Zehnder electro-optical intensity modulator. A heterodyne technique is used to directly measure the phase change due to a given change in intensity, thereby determining the chirp parameter of the device. This chirp parameter is also measured by examining the ratio of sidebands for sinusoidal amplitude modulation. Finally, the frequency chirp caused by an intensity pulse on the nanosecond time scale is measured via the heterodyne signal. We show that this chirp can be largely compensated with a separate phase modulator. The various measurements of the chirp parameter are in reasonable agreement.
Disk Dispersal: Theoretical Understanding and Observational Constraints
NASA Astrophysics Data System (ADS)
Gorti, U.; Liseau, R.; Sándor, Z.; Clarke, C.
2016-12-01
Protoplanetary disks dissipate rapidly after the central star forms, on time-scales comparable to those inferred for planet formation. In order to allow the formation of planets, disks must survive the dispersive effects of UV and X-ray photoevaporation for at least a few Myr. Viscous accretion depletes significant amounts of the mass in gas and solids, while photoevaporative flows driven by internal and external irradiation remove most of the gas. A reasonably large fraction of the mass in solids and some gas get incorporated into planets. Here, we review our current understanding of disk evolution and dispersal, and discuss how these might affect planet formation. We also discuss existing observational constraints on dispersal mechanisms and future directions.
NASA Astrophysics Data System (ADS)
Zaidel'Man, F. R.
2009-01-01
The adverse human-induced changes in the water regime of soils leading to their degradation are considered. Factors of the human activity related to the water industry, agriculture, and silviculture are shown to play the most active role in the soil degradation. Among them are the large-scale hydraulic works on rivers, drainage and irrigation of soils, ameliorative and agricultural impacts, road construction, and uncontrolled impacts of industry and silviculture on the environment. The reasons for each case of soil degradation related to changes in the soil water regime are considered, and preventive measures are proposed. The role of secondary soil degradation processes is shown.
Genomic Databases and Biobanks in Israel.
Siegal, Gil
2015-01-01
Large-scale biobanks represent an important scientific and medical as well as a commercial opportunity. However, realizing these and other prospects requires a conducive social, legal, and regulatory climate, as well as a capable scientific community and adequate infrastructure. Israel has been grappling with the appropriate approach to establishing such a repository, and debates over the governance, structure, finance, and mode of operation shed a bright light on the underlying social norms, civic engagement and scientific clout in steering a governmental response to pressing medical needs. The article presents the backdrop of the Israeli scene, and explores the reasons and forces at work behind the current formulation of the Israeli National Biobank, MIDGAM. © 2015 American Society of Law, Medicine & Ethics, Inc.
Solar Sail Loads, Dynamics, and Membrane Studies
NASA Technical Reports Server (NTRS)
Slade, K. N.; Belvin, W. K.; Behun, V.
2002-01-01
While a number of solar sail missions have been proposed recently, these missions have not been selected for flight validation. Although the reasons for non-selection are varied, principal among them is the lack of subsystem integration and ground testing. This paper presents some early results from a large-scale ground testing program for integrated solar sail systems. In this series of tests, a 10 meter solar sail is subjected to dynamic excitation in both ambient atmospheric and vacuum conditions. Laser vibrometry is used to determine resonant frequencies and deformation shapes. The results include some low-order sail modes which can only be seen in vacuum, pointing to the necessity of testing in that environment.
NASA Astrophysics Data System (ADS)
James, L. Allan; Phillips, Jonathan D.; Lecce, Scott A.
2017-10-01
This special issue celebrates the centennial of the publication of G.K. Gilbert's (1917) monograph, Hydraulic-Mining Débris in the Sierra Nevada, U.S. Geological Survey Professional Paper 105 (PP105). Reasons to celebrate PP105 are manifold. It was the last of four classic monographs that Gilbert wrote in a career that spanned five decades. The monograph, PP105, introduced several important concepts and provided an integrated view of watersheds that was uncommon in its day. It also provided an extreme, lucid example of anthropogenic changes and legacy sediment and how to approach such large-scale phenomena from an objective, quantitative basis.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real-world problems is discussed, with the aim of assessing the various techniques involved. The applications are limited by the inability of current computational resources to resolve the large range of scales involved in high Reynolds number flows; possible simplifications are discussed. Problems remain to be solved in the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.
Optimization of a Monte Carlo Model of the Transient Reactor Test Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kristin; DeHart, Mark; Goluoglu, Sedat
2017-03-01
The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that don't have representations which can be easily evaluated, while minimizing the amount of computational resources. With the advances during the last twenty years of large-scale computing centers, researchers have had the ability to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.
NASA Astrophysics Data System (ADS)
Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan
2015-05-01
An analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
Global Distribution of Density Irregularities in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Kil, Hyosub; Heelis, R. A.
1998-01-01
We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions of bubble-like structures which are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation delta N/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and is always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depletions with bubble structures which are superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.
Photochemical free radical production rates in the eastern Caribbean
NASA Astrophysics Data System (ADS)
Dister, Brian; Zafiriou, Oliver C.
1993-02-01
Potential photochemical production rates of total (NO-scavengeable) free radicals were surveyed underway (> 900 points) in the eastern Caribbean and Orinoco delta in spring and fall 1988. These data document seasonal trends and large-scale (˜ 10-1000 km) variability in the pools of sunlight-generated reactive transients, which probably mediate a major portion of marine photoredox transformations. Radical production potential was detectable in all waters and was reasonably quantifiable at rates above 0.25 nmol L-1 min-1 sun-1. Radical production rates varied from ˜ 0.1-0.5 nmol L-1 min-1 of full-sun illumination in "blue water" to > 60 nmol L-1 min-1 in some estuarine waters in the high-flow season. Qualitatively, spatiotemporal potential rate distributions strikingly resembled those of "chlorophyll" (a riverine-influence tracer of uncertain specificity) in 1979-1981 CZCS images of the region [Müller-Karger et al., 1988] at all scales. Basin-scale occurrence of greatly enhanced rates in fall compared to spring is attributed to terrestrial chromophore inputs, primarily from the Orinoco River; any contributions from Amazon water and nutrient-stimulus effects could not be resolved. A major part of the functionally photoreactive colored organic matter (COM) involved in radical formation clearly mixes without massive loss out into high-salinity waters, although humic acids may flocculate in estuaries. A similar conclusion applies over smaller scales for COM as measured optically [Blough et al., this issue]. Furthermore, optical absorption and radical production rates were positively correlated in the estuarine region in fall. These cruises demonstrated that photochemical techniques are now adequate to treat terrestrial photochemical chromophore inputs as an estuarine mixing problem on a large scale, though the ancillary database does not currently support such an analysis in this region.
Eastern Caribbean waters are not markedly more reactive at comparable salinities than waters of the Gulf of Maine and North Atlantic Bight, despite large inputs of colored waters from two large tropical rivers with substantial "black water" tributaries. Other sources of reactive COM, such as grazing, sedimentary diagenesis, and "marine humus" may increase temperate waters' photoreactivity; alternatively, northern waters may be chromophore-rich because they are light-poor and photobleaching is a major sink of photoreactive COM.
Louys, Julien; Corlett, Richard T; Price, Gilbert J; Hawkins, Stuart; Piper, Philip J
2014-01-01
Alarm over the prospects for survival of species in a rapidly changing world has encouraged discussion of translocation conservation strategies that move beyond the focus of ‘at-risk’ species. These approaches consider larger spatial and temporal scales than customary, with the aim of recreating functioning ecosystems through a combination of large-scale ecological restoration and species introductions. The term ‘rewilding’ has come to apply to this large-scale ecosystem restoration program. While reintroductions of species within their historical ranges have become standard conservation tools, introductions within known paleontological ranges—but outside historical ranges—are more controversial, as is the use of taxon substitutions for extinct species. Here, we consider possible conservation translocations for nine large-bodied taxa in tropical Asia-Pacific. We consider the entire spectrum of conservation translocation strategies as defined by the IUCN in addition to rewilding. The taxa considered are spread across diverse taxonomic and ecological spectra and all are listed as ‘endangered’ or ‘critically endangered’ by the IUCN in our region of study. They all have a written and fossil record that is sufficient to assess past changes in range, as well as ecological and environmental preferences, and the reasons for their decline, and they have all suffered massive range restrictions since the late Pleistocene. General principles, problems, and benefits of translocation strategies are reviewed as case studies. These allowed us to develop a conservation translocation matrix, with taxa scored for risk, benefit, and feasibility. Comparisons between taxa across this matrix indicated that orangutans, tapirs, Tasmanian devils, and perhaps tortoises are the most viable taxa for translocations. However, overall the case studies revealed a need for more data and research for all taxa, and their ecological and environmental needs. 
Rewilding the Asian-Pacific tropics remains a controversial conservation strategy, and would be difficult in what is largely a highly fragmented area geographically. PMID:25540698
A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations
Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui
2015-01-01
Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, sensing technologies now offer a way to resolve this problem. This study proposes an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. 
Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations on multiple spatio-temporal scales. PMID:25621604
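Step (2) above, multi-temporal noise estimation, ultimately reduces to averaging sound levels, and decibel values must be averaged energetically (in the pressure-squared domain) rather than arithmetically. Below is a minimal sketch of that standard equivalent-level (Leq) formula with hypothetical sample values; the paper's own aggregation over virtual partitions is more elaborate.

```python
import math

def equivalent_level(samples_db):
    """Energetic (logarithmic) average of sound levels in dB.

    L_eq = 10 * log10( (1/n) * sum(10^(L_i / 10)) )
    """
    n = len(samples_db)
    energy = sum(10 ** (level / 10.0) for level in samples_db)
    return 10.0 * math.log10(energy / n)

# Hypothetical participatory samples (dB) from one road-segment partition:
samples = [62.0, 65.0, 71.0, 58.0]
leq = equivalent_level(samples)
```

Note how the single 71 dB sample dominates: the energetic average (about 66.5 dB) sits well above the arithmetic mean (64 dB), which is why arithmetic averaging would understate exposure.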
NASA Astrophysics Data System (ADS)
Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.
2013-10-01
In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of particularly the sliding-window object detection algorithm which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g., chi-squared kernel approximations and sub-class splitting algorithms, can be more easily applied at a lower performance penalty because of the improved scalability.
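The core identity behind this reordering is that scoring a linear SVM at every window position is a cross-correlation of the feature map with the weight kernel, which the convolution theorem turns into a pointwise product of FFTs. The sketch below is a minimal single-channel illustration under assumed shapes, not the paper's pipeline (which adds overlap-add tiling, multi-channel features, and cached transformed kernels).

```python
import numpy as np

def dense_svm_scores_fft(feature_map, weights, bias=0.0):
    """Linear-SVM scores for every window position, computed via the FFT.

    Scoring all windows is a cross-correlation of the feature map with
    the SVM weight kernel; the convolution theorem turns this into a
    pointwise product in the frequency domain.
    """
    H, W = feature_map.shape
    h, w = weights.shape
    kernel = weights[::-1, ::-1]  # correlation = convolution with flipped kernel
    F = np.fft.rfft2(feature_map)
    K = np.fft.rfft2(kernel, s=feature_map.shape)  # zero-pad kernel to map size
    full = np.fft.irfft2(F * K, s=feature_map.shape)
    # Keep only positions where the window fits entirely inside the map.
    return full[h - 1:, w - 1:] + bias

def dense_svm_scores_naive(feature_map, weights, bias=0.0):
    """Reference implementation: explicit sliding window."""
    H, W = feature_map.shape
    h, w = weights.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            out[i, j] = np.sum(feature_map[i:i + h, j:j + w] * weights) + bias
    return out
```

The two functions agree to floating-point precision; the frequency-domain route pays off for larger kernels because its cost grows only mildly with kernel extent.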
Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials
NASA Astrophysics Data System (ADS)
Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.
2018-01-01
The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.
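The reported near-linear dependence of Ueff on the number of occupied localized electrons inside the sphere can be extracted with an ordinary least-squares fit. The numbers below are hypothetical, chosen only for illustration; they are not values from the paper.

```python
import numpy as np

# Hypothetical (illustrative, not from the paper) pairs: occupied localized
# d-electron count inside the muffin-tin sphere, and the constrained-DFT
# U_eff obtained at the corresponding sphere radius R_MT.
n_occ = np.array([4.1, 4.4, 4.7, 4.9, 5.1])   # electrons inside the sphere
u_eff = np.array([5.2, 6.1, 7.0, 7.7, 8.3])   # eV

# Ordinary least-squares line: U_eff ≈ a * n_occ + b
a, b = np.polyfit(n_occ, u_eff, 1)
residuals = u_eff - (a * n_occ + b)
```

Under the paper's scaling argument, a fit of this kind is what would let Ueff be translated between muffin-tin radii instead of being recomputed from constraint DFT for each choice.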
Tropical Oceanic Precipitation Processes Over Warm Pool: 2D and 3D Cloud Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Johnson, D.; Simpson, J.; Einaudi, Franco (Technical Monitor)
2001-01-01
Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate these processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve the evolution, structure, and life cycles of individual clouds and cloud systems reasonably well. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems developed during the TOGA COARE IFA - westerly wind burst event (late December, 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model which includes a 3-class ice-phase microphysics scheme.
DOT National Transportation Integrated Search
2012-07-01
With the use of supplementary cementing materials (SCMs) in concrete mixtures, salt scaling tests such as ASTM C672 have been found to be overly aggressive and do not correlate well with field scaling performance. The reasons for this are thought to be b...
78 FR 10210 - Utility Scale Wind Towers From China and Vietnam
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-13
...)] Utility Scale Wind Towers From China and Vietnam Determinations On the basis of the record \\1\\ developed... with material injury by reason of imports of utility scale wind towers from China and Vietnam, provided... of imports of utility scale wind towers from China and Vietnam. Commissioner Dean A. Pinkert...
Bonanad, S; De la Rubia, J; Gironella, M; Pérez Persona, E; González, B; Fernández Lago, C; Arnan, M; Zudaire, M; Hernández Rivas, J A; Soler, A; Marrero, C; Olivier, C; Altés, A; Valcárcel, D; Hernández, M T; Oiartzabal, I; Fernández Ordoño, R; Arnao, M; Esquerra, A; Sarrá, J; González-Barca, E; González, J; Calvo, X; Nomdedeu, M; García Guiñón, A; Ramírez Payer, A; Casado, A; López, S; Durán, M; Marcos, M; Cruz-Jentoft, A J
2015-09-01
The purpose of this study was to develop a new brief, comprehensive geriatric assessment scale for older patients diagnosed with different hematological malignancies, the Geriatric Assessment in Hematology (GAH scale), and to determine its psychometric properties. The 30-item GAH scale was designed through a multi-step process to cover 8 relevant dimensions. This is an observational study conducted in 363 patients aged ≥65 years, newly diagnosed with different hematological malignancies (myelodysplastic syndrome/acute myeloblastic leukemia, multiple myeloma, or chronic lymphocytic leukemia), and treatment-naïve. The scale psychometric validation process included the analyses of feasibility, floor and ceiling effects, validity, and reliability criteria. Mean time taken to complete the GAH scale was 11.9±4.7 min, which improved through a learning-curve effect. Almost 90% of patients completed all items, and no floor or ceiling effects were identified. Criterion validity was supported by reasonable correlations between the GAH scale dimensions and three contrast variables (global health visual analogue scale, ECOG and Karnofsky), except for comorbidities. Factor analysis (supported by the scree plot) revealed nine factors that explained almost 60% of the total variance. Moderate internal consistency reliability was found (Cronbach's α: 0.610), and test-retest reliability was excellent (ICC coefficients, 0.695-0.928). Our study suggests that the GAH scale is a valid, internally reliable and consistent tool to assess health status in older patients with different hematological malignancies. Future large studies should confirm whether the GAH scale may be a tool to improve clinical decision-making in older patients with hematological malignancies. Copyright © 2015 Elsevier Inc. All rights reserved.
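The internal-consistency figure quoted above is Cronbach's α, computed as α = k/(k−1) · (1 − Σ s²_item / s²_total) over the k items. A minimal sketch with hypothetical item scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical responses of 5 patients to 4 scale items:
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 5, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
alpha = cronbach_alpha(scores)
```

A perfectly redundant set of items gives α = 1; the GAH scale's reported 0.610 is usually read as moderate internal consistency.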
A south equatorial African precipitation dipole and the associated atmospheric circulation
NASA Astrophysics Data System (ADS)
Dezfuli, A. K.; Zaitchik, B.; Gnanadesikan, A.
2013-12-01
South Equatorial Africa (SEA) is a climatically diverse region that includes a dramatic topographic and vegetation contrast between the lowland, humid Congo basin to the west and the East African Plateau to the east. Due to a lack of conventional weather data and a tendency for researchers to treat East and western Africa as separate regions, dynamics of the atmospheric water cycle across SEA have received relatively little attention, particularly at subseasonal timescales. Both western and eastern sectors of SEA are affected by large-scale drivers of the water cycle associated with Atlantic variability (western sector), Indian Ocean variability (eastern sector) and Pacific variability (both sectors). However, a specific characteristic of SEA is strong heterogeneity in interannual rainfall variability that cannot be explained by large-scale climatic phenomena. For this reason, this study examines regional climate dynamics on a daily timescale, with a focus on the role that the abrupt topographic contrast between the lowland Congo and the East African highlands plays in driving rainfall behavior on short timescales. Analysis of daily precipitation data during November-March reveals a zonally-oriented dipole mode over SEA that explains the leading pattern of weather-scale precipitation variability in the region. The separating longitude of the two poles is coincident with the zonal variation of topography. An anomalous counter-clockwise atmospheric circulation associated with the dipole mode appears over the entire SEA. The circulation is triggered by its low-level westerly component, which is in turn generated by an interhemispheric pressure gradient. These enhanced westerlies hit the East African highlands and produce topographically-driven low-level convergence and convection that further intensifies the circulation. 
Recent studies have shown that under climate change the position and intensity of subtropical highs in both hemispheres and the intensity of precipitation over equatorial Africa are projected to change. Both of these trends have implications for the manner in which large-scale dynamics will interact with regional topography, affecting the intensity and frequency of the dipole mode characterized in this study and the occurrence of extreme wet and dry spells in the region.
Factors Influencing Pharmacy Students' Attendance Decisions in Large Lectures
Helms, Kristen L.; McDonough, Sharon K.; Breland, Michelle L.
2009-01-01
Objectives To identify reasons for pharmacy student attendance and absenteeism in large lectures and to determine whether certain student characteristics affect student absenteeism. Methods Pharmacy students' reasons to attend and not attend 3 large lecture courses were identified. Using a Web-based survey instrument, second-year pharmacy students were asked to rate to what degree various reasons affected their decision to attend or not attend classes for 3 courses. Bivariate analyses were used to assess the relationships between student characteristics and degree of absenteeism. Results Ninety-eight students (75%) completed the survey instrument. The degree of student absenteeism differed among the 3 courses. Most student demographic characteristics examined were not related to the degree of absenteeism. Different reasons to attend and not to attend class were identified for each of the 3 courses, suggesting that attendance decisions were complex. Conclusions The top 2 common reasons for pharmacy students to attend class were that respondents wanted to take their own notes and that the instructor highlighted what was important to know. Better understanding of factors influencing student absenteeism may help pharmacy educators design effective interventions to facilitate student attendance. PMID:19777098
NASA Astrophysics Data System (ADS)
Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.
2017-12-01
Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (±3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet, none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling exist so far. Here we present the first spatially and depth-explicit RPE model that allows estimates of the RPE on a large scale (PrimeSCale). We combine available spatial data (SOC, C/N, GPP, ALT and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and RPE to individual soil depth increments. The model takes into account reasonable limiting factors on additional SOC losses by the RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5-95% CI: 1.5-23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by the RPE are further reduced to 6.5 Pg C (5-95% CI: 1.0-16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high-RPE risk in Siberian lowland areas and Alaska north of the Brooks Range. 
The small overall impact of the RPE is largely explained by the interaction between belowground plant C allocation and SOC depth distribution. Our findings thus highlight the importance of fine-scale interactions between plant and soil properties for large-scale carbon fluxes, and we provide a first model that bridges this gap and permits the quantification of the RPE across a large area.
A Manual of Instruction for Log Scaling and the Measurement of Timber Products.
ERIC Educational Resources Information Center
Idaho State Board of Vocational Education, Boise. Div. of Trade and Industrial Education.
This manual was developed by a state advisory committee in Idaho to improve and standardize log scaling and provide a reference in training men for the job of log scaling in timber measurement. The content includes: (1) an introduction containing the scope of the manual, a definition and history of scaling, the reasons for scaling, and the…
Mediterranean Cyclones in a changing climate. First statistical results
NASA Astrophysics Data System (ADS)
Tous, M.; Genoves, A.; Campins, J.; Picornell, M. A.; Jansa, A.; Mizuta, R.
2009-09-01
The Mediterranean storms play an important role in weather and climate. Their influence in determining the local weather is known; heavy precipitation systems and strong wind cases are often related to the presence of a cyclone in the Mediterranean. From a large-scale point of view, the Mediterranean storm track is important for the vertical and horizontal transfers of heat and water vapour towards the eastern regions. For all of these reasons, any future change in the intensity, frequency, or tracks of these storms can be important for both the local weather and local climate, at least in the countries around the basin. The Mediterranean cyclones constitute a study subject of increasing interest. Some climatologies from long series of re-analyses, like ERA15, NCEP/NCAR and ERA40, or from operational and high-resolution analysis systems, like HIRLAM_INM and ECMWF, have made it possible to define the main characteristics of these storms. Generally speaking, the Mediterranean storms have the characteristics of extratropical storms, showing smaller sizes and shorter life cycles than those developed in other maritime areas of the world. Moreover, the influence of the land areas and high mountains around the basin and the large-scale heat releases have been revealed as key factors for understanding their genesis and rates of development. Although the existing automatic procedures probably include some large-scale assumptions that may not be best for correctly detecting and tracking Mediterranean storms, these procedures can provide a first and almost necessary step from a statistical/climatological point of view, especially taking into account both the current resolution of the existing global re-analysis series and global climate models and the state of the art regarding Mediterranean cyclones. 
A cyclone detection and tracking procedure, originally designed for the description of Mediterranean storms, has been applied to the low-resolution (1.5 degrees lat-lon) outputs of the JMA-GSM climate general circulation model. Preliminary results are presented here. Two different periods have been analysed. The first period, covering 1979-2002, has been compared with the previously computed ERA-40 climatology of cyclones. Results agree reasonably well with those obtained from ERA-40, providing confidence in the current climate simulation of JMA-GSM. Once the model has been validated from the perspective of cyclone climatology under current climate conditions, the same procedure is applied to a scenario period (2075-2099) to investigate possible changes in cyclonic activity linked to climate change.
ERIC Educational Resources Information Center
Pruett, John R., Jr.; Kandala, Sridhar; Petersen, Steven E.; Povinelli, Daniel J.
2015-01-01
Understanding the underpinnings of social responsiveness and theory of mind (ToM) will enhance our knowledge of autism spectrum disorder (ASD). We hypothesize that higher-order relational reasoning (higher-order RR: reasoning necessitating integration of relationships among multiple variables) is necessary but not sufficient for ToM, and that…
Apollo Rendezvous Docking Simulator
1964-11-02
Originally, the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. The pilot is shown maneuvering the LEM into position for docking with a full-scale Apollo Command Module. From A.W. Vogeley, Piloted Space-Flight Simulation at Langley Research Center, Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966. The Rendezvous Docking Simulator and the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect. Apollo Rendezvous Docking Simulator: Langley's Rendezvous Docking Simulator was developed by NASA scientists to study the complex task of docking the Lunar Excursion Module with the Command Module in lunar orbit.
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
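The sloppiness described above is easy to exhibit on a classic toy model, a sum of two exponential decays with similar rates. The sketch below is illustrative rather than the authors' formalism, and assumes unit-variance Gaussian noise so that the Fisher information matrix reduces to J^T J:

```python
import numpy as np

# Model: y(t, theta) = exp(-theta1 * t) + exp(-theta2 * t), sampled at a
# few times. For unit-variance Gaussian noise the Fisher information
# matrix is J^T J, with J the Jacobian of the predictions w.r.t. the
# parameters. Sloppy models show eigenvalues spread over many decades.
t = np.linspace(0.1, 3.0, 30)
theta = np.array([1.0, 1.2])  # two nearly degenerate decay rates

# Analytic Jacobian: d y / d theta_k = -t * exp(-theta_k * t)
J = np.column_stack([-t * np.exp(-theta[0] * t),
                     -t * np.exp(-theta[1] * t)])
fim = J.T @ J
eigvals = np.linalg.eigvalsh(fim)  # ascending order

# Ratio of stiffest to sloppiest eigenvalue (the eigenvalue hierarchy):
ratio = eigvals[-1] / eigvals[0]
```

Even with only two parameters, the stiff and sloppy eigenvalues here differ by a factor of a few hundred; realistic multiparameter models show the same geometry with many more decades of spread.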
Neuropsychological profile in adult schizophrenia measured with the CMINDS.
van Erp, Theo G M; Preda, Adrian; Turner, Jessica A; Callahan, Shawn; Calhoun, Vince D; Bustillo, Juan R; Lim, Kelvin O; Mueller, Bryon; Brown, Gregory G; Vaidya, Jatin G; McEwen, Sarah; Belger, Aysenil; Voyvodic, James; Mathalon, Daniel H; Nguyen, Dana; Ford, Judith M; Potkin, Steven G
2015-12-30
Schizophrenia neurocognitive domain profiles are predominantly based on paper-and-pencil batteries. This study presents the first schizophrenia domain profile based on the Computerized Multiphasic Interactive Neurocognitive System (CMINDS®). Neurocognitive domain z-scores were computed from computerized neuropsychological tests, similar to those in the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery (MCCB), administered to 175 patients with schizophrenia and 169 demographically similar healthy volunteers. The schizophrenia domain profile order by effect size was Speed of Processing (d=-1.14), Attention/Vigilance (d=-1.04), Working Memory (d=-1.03), Verbal Learning (d=-1.02), Visual Learning (d=-0.91), and Reasoning/Problem Solving (d=-0.67). There were no significant group-by-sex interactions, but overall women showed advantages over men on Attention/Vigilance, Verbal Learning, and Visual Learning, while men showed an advantage over women on Reasoning/Problem Solving. The CMINDS can readily be employed in the assessment of cognitive deficits in neuropsychiatric disorders, particularly in large-scale studies that may benefit most from electronic data capture. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
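The domain profile above is reported as Cohen's d effect sizes. A minimal sketch of how such a d is computed from two groups' z-scores, using synthetic data rather than the study's (the group means and sizes below are illustrative):

```python
import numpy as np

def cohens_d(patients, controls):
    """Cohen's d with pooled standard deviation; a negative d means a
    patient deficit relative to controls, as in the profile above."""
    n1, n2 = len(patients), len(controls)
    pooled_var = ((n1 - 1) * np.var(patients, ddof=1) +
                  (n2 - 1) * np.var(controls, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(patients) - np.mean(controls)) / np.sqrt(pooled_var)

# Illustrative z-scores only, not the study's data.
rng = np.random.default_rng(0)
patients = rng.normal(-1.0, 1.0, 175)   # e.g. a Speed of Processing deficit
controls = rng.normal(0.0, 1.0, 169)
print(round(cohens_d(patients, controls), 2))
```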
Multi-scale modeling of CO2 dispersion leaked from seafloor off the Japanese coast.
Kano, Yuki; Sato, Toru; Kita, Jun; Hirabayashi, Shinichiro; Tabeta, Shigeru
2010-02-01
A numerical simulation was conducted to predict the change of pCO2 in the ocean caused by CO2 leaking from an underground aquifer in which CO2 is purposefully stored. The target space of the present model was the ocean above the seafloor. The behavior of CO2 bubbles, their dissolution, and the advection-diffusion of dissolved CO2 were numerically simulated. Two leakage rates were studied: an extreme case, 94,600 t/y, which assumed that a large fault accidentally connects the CO2 reservoir and the seafloor; and a reasonable case, 3800 t/y, based on the seepage rate of an existing EOR site. In the extreme case, the calculated ΔpCO2 experienced by floating organisms was less than 300 ppm, while that for immobile organisms directly over the fault surface periodically exceeded 1000 ppm, if only momentarily. In the reasonable case, the calculated ΔpCO2 and pH were within the range of natural fluctuation. Copyright 2009 Elsevier Ltd. All rights reserved.
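The advection-diffusion of dissolved CO2 that such a model integrates can be illustrated with a minimal 1-D explicit scheme; the velocity, diffusivity, and grid parameters below are illustrative sketch values, not the paper's model settings:

```python
import numpy as np

# Minimal 1-D advection-diffusion step for a dissolved-CO2 concentration
# field on a periodic grid; u, D, dx, dt are illustrative only.
def step(c, u=0.05, D=1e-3, dx=1.0, dt=1.0):
    adv = -u * (c - np.roll(c, 1)) / dx          # first-order upwind (u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + dif)

c = np.zeros(200)
c[100] = 1.0                # point release above the leak location
for _ in range(500):
    c = step(c)
print(c.sum())              # total mass is conserved on the periodic grid
```

The flux-form upwind update conserves total dissolved mass while the plume advects downstream and spreads, which is the qualitative behavior behind the ΔpCO2 plumes reported above.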
A Research Informed Approach to Teaching Cosmology to Our Society's Future Leaders
NASA Astrophysics Data System (ADS)
Prather, Edward
2012-03-01
We recently completed a large-scale, systematic study of general education introductory astronomy students' conceptual and reasoning difficulties related to cosmology. As part of this study, we analyzed a total of 4359 surveys (pre- and post-instruction) containing students' responses to questions about the Big Bang, the evolution and expansion of the universe, using Hubble plots to reason about the age and expansion rate of the universe, and using galaxy rotation curves to infer the presence of dark matter. We also designed, piloted, and validated a new suite of five cosmology Lecture-Tutorials. We found that students who used the new Lecture-Tutorials achieved larger learning gains than their peers who did not. This material is based in part upon work supported by the National Science Foundation under Grant Nos. 0833364 and 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
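The Hubble-plot reasoning the surveys probe reduces to a simple estimate: for a constant expansion rate, the age of the universe is approximately 1/H0. A back-of-envelope sketch (the H0 value is illustrative, not from the study):

```python
# For constant expansion, slope of the Hubble plot gives H0 and age ~ 1/H0.
H0 = 70.0                          # km/s/Mpc, illustrative value
km_per_mpc = 3.086e19              # kilometres in one megaparsec
age_seconds = km_per_mpc / H0      # 1/H0, with units cancelled to seconds
age_gyr = age_seconds / 3.156e16   # seconds per gigayear
print(round(age_gyr, 1))           # roughly 14 Gyr
```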
Can limited area NWP and/or RCM models improve on large scales inside their domain?
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain; note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable development of large scales improved compared to those of the driver model; this would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to were run using the Eta model, driven by ECMWF 32-day ensemble members initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme.
Accuracy in forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa and expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales. Average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run at about the same resolution for the driver global model and the nested Eta during the first 10 days of the experiment, both verification measures generally demonstrate an advantage for the Eta, in particular during and after the passage of a deep upper-tropospheric trough crossing the Rockies on days 2-6 of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to an even greater degree by the number of "wins" of one model vs. another. Thus, at day 4.5, when the trough had just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better on rms scores, losing to Eta/eta by 18 vs. 3 but winning over Eta/sigma by 12 to 9. Examples of wind plots behind these results are shown, and additional reasons possibly helping or not helping the results summarized are discussed.
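The Equitable Threat (Gilbert) skill score underlying this verification can be sketched from a 2x2 contingency table of forecast vs. observed events (here, grid points with 250 hPa wind speed above 45 m/s). The unit-bias adjustment yielding ETSa is a further step not reproduced here, and the counts below are hypothetical:

```python
def equitable_threat_score(hits, false_alarms, misses, correct_negatives):
    """Equitable Threat (Gilbert) skill score from a 2x2 contingency table.
    Random hits are subtracted so that a no-skill forecast scores 0 and a
    perfect forecast scores 1."""
    n = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + false_alarms) * (hits + misses) / n
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

# Hypothetical grid-point counts for "wind speed > 45 m/s at 250 hPa".
print(round(equitable_threat_score(120, 40, 30, 810), 3))
```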
Bin recycling strategy for improving the histogram precision on GPU
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Rodríguez-Vázquez, Juan José; Vega-Rodríguez, Miguel A.
2016-07-01
Histograms are an easily comprehensible way to present data and analyses. In the current scientific context, with access to large volumes of data, the processing time for building histograms has dramatically increased. For this reason, parallel construction is necessary to alleviate the impact of the processing time on analysis activities. In this scenario, GPU computing is becoming widely used to reduce the processing time of histogram construction to affordable levels. Alongside the growth in processing time, implementations come under pressure on bin-count accuracy. Accuracy issues arising from the particularities of the implementations are not usually taken into consideration when building histograms with very large data sets. In this work, a bin recycling strategy to create an accuracy-aware implementation for building histograms on GPU is presented. To evaluate the approach, this strategy was applied to the computation of the three-point angular correlation function, a relevant function in cosmology for the study of the large-scale structure of the Universe. As a consequence of the study, a high-accuracy implementation for histogram construction on GPU is proposed.
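The standard GPU pattern that such work builds on is privatization: each thread block accumulates a private partial histogram, and the partials are summed at the end. Sketched here on the CPU with NumPy (the bin-recycling accuracy strategy itself is not reproduced; chunk size and data are illustrative):

```python
import numpy as np

def chunked_histogram(data, n_bins, lo, hi, chunk=1 << 16):
    """Histogram built from per-chunk partial histograms, mimicking the
    per-block private histograms merged at the end on a GPU."""
    total = np.zeros(n_bins, dtype=np.int64)    # integer counters: exact
    edges = np.linspace(lo, hi, n_bins + 1)
    for start in range(0, len(data), chunk):
        part, _ = np.histogram(data[start:start + chunk], bins=edges)
        total += part                           # merge partial histograms
    return total

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1_000_000)
h = chunked_histogram(x, 64, 0.0, 1.0)
print(h.sum())   # every sample counted exactly once
```

With integer counters the merge is exact regardless of chunking order; the accuracy concerns the paper addresses arise when counts or weights are accumulated in limited-precision arithmetic on the device.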