Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
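A full MLFMA implementation is far beyond a snippet, but the abstract's core computational idea, solving the discretized equation iteratively using only a fast matrix-vector product rather than an assembled dense matrix, can be sketched with SciPy's GMRES. The stand-in matvec and all sizes below are hypothetical, not from the paper:

```python
# Minimal sketch of the matrix-free iterative solution idea behind MLFMA-accelerated
# solvers: GMRES only needs a fast matrix-vector product, never the dense matrix.
# "fast_matvec" is a hypothetical stand-in; a real MLFMA replaces the O(N^2) dense
# product with an O(N log N) multilevel tree evaluation.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 2000                                            # toy number of RWG unknowns
Z = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in impedance matrix
v = rng.standard_normal(n)                          # excitation vector

def fast_matvec(x):
    # In a real code this would traverse the MLFMA tree (aggregation,
    # translation, disaggregation) instead of touching a dense matrix.
    return Z @ x

A = LinearOperator((n, n), matvec=fast_matvec)
solution, info = gmres(A, v)
print("converged" if info == 0 else f"gmres flag {info}")
```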
ERIC Educational Resources Information Center
Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa
2016-01-01
In this large-scale study, higher education teachers' descriptions of their own learning were examined through qualitative analysis applying principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
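As a hedged illustration of the robust-optimization variant mentioned above: a linear program whose constraint coefficients carry box uncertainty reduces to an ordinary LP when the variables are nonnegative, since the worst case of (a + d)^T x <= b over |d_i| <= r_i is (a + r)^T x <= b. All numbers below are hypothetical:

```python
# Hedged sketch: box-uncertainty robust LP reduced to an ordinary LP (x >= 0).
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])      # maximize 3*x1 + 5*x2, i.e., minimize c^T x
a = np.array([[1.0, 2.0]])      # nominal constraint coefficients
r = np.array([[0.1, 0.3]])      # coefficient uncertainty radii (illustrative)
b = np.array([10.0])

res_nominal = linprog(c, A_ub=a, b_ub=b, bounds=[(0, None)] * 2)
res_robust = linprog(c, A_ub=a + r, b_ub=b, bounds=[(0, None)] * 2)
print("nominal x:", res_nominal.x)      # optimum for the nominal coefficients
print("robust  x:", res_robust.x)       # conservative worst-case optimum
```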
SCALE(ing)-UP Teaching: A Case Study of Student Motivation in an Undergraduate Course
ERIC Educational Resources Information Center
Chittum, Jessica R.; McConnell, Kathryne Drezek; Sible, Jill
2017-01-01
Teaching large classes is increasingly common; thus, demand for effective large-class pedagogy is rising. One method, titled "SCALE-UP" (Student-Centered Active Learning Environment for Undergraduate Programs), is intended for large classes and involves collaborative, active learning in a technology-rich and student-centered environment.…
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight about the eddies responsible for the power generation has been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
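As a rough illustration of the premultiplied-spectrum diagnostic mentioned in the abstract (a one-dimensional toy on a synthetic signal; the study itself analyzes two-dimensional premultiplied spectra of MKE flux from LES):

```python
# Minimal sketch: premultiplied one-dimensional spectrum k*E(k), the kind of
# diagnostic used to locate energetic length scales. Synthetic signal and toy
# normalization; not the paper's actual 2-D MKE-flux analysis.
import numpy as np

n, L = 4096, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = (np.sin(4 * x) + 0.3 * np.sin(32 * x)
     + 0.1 * np.random.default_rng(1).standard_normal(n))

u_hat = np.fft.rfft(u)
k = np.fft.rfftfreq(n, d=L / (2 * np.pi * n))  # integer wavenumbers 0, 1, 2, ...
E = (np.abs(u_hat) ** 2) / n**2                # one-sided spectral density (toy)
premult = k * E
print("premultiplied peak at k =", k[np.argmax(premult[1:]) + 1])
```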
Space Industrialization: The Mirage of Abundance.
ERIC Educational Resources Information Center
Deudney, Daniel
1982-01-01
Large-scale space industrialization is not a viable solution to the population, energy, and resource problems of earth. The expense and technological difficulties involved in the development and maintenance of space manufacturing facilities, space colonies, and large-scale satellites for solar power are discussed. (AM)
Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro
2011-04-14
Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials on cardiovascular diseases that evaluated true endpoints and involved 300 or more participants, using PubMed, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November 2004, 25 February 2007 and 25 July 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs), 9.2% examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.
Firebrands and spotting ignition in large-scale fires
Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese
2010-01-01
Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires, are given as...
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Language Learning Motivation in China: Results of a Large-Scale Stratified Survey
ERIC Educational Resources Information Center
You, Chenjing; Dörnyei, Zoltán
2016-01-01
This article reports on the findings of a large-scale cross-sectional survey of the motivational disposition of English language learners in secondary schools and universities in China. The total sample involved over 10,000 students and was stratified according to geographical region and teaching contexts, selecting participants both from urban…
Scale problems in reporting landscape pattern at the regional scale
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the g...
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Large-scale silviculture experiments of western Oregon and Washington.
Nathan J. Poage; Paul D. Anderson
2007-01-01
We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...
ERIC Educational Resources Information Center
Martin, Andrew J.; Lazendic, Goran
2018-01-01
With the rise of large-scale academic assessment programs around the world, there is a need to better understand the factors predicting students' achievement in these assessment exercises. This investigation into national numeracy assessment drew on ecological and transactional conceptualizing involving student, student/home, and school factors.…
Gravitational waves from non-Abelian gauge fields at a tachyonic transition
NASA Astrophysics Data System (ADS)
Tranberg, Anders; Tähtinen, Sara; Weir, David J.
2018-04-01
We compute the gravitational wave spectrum from a tachyonic preheating transition of a Standard Model-like SU(2)-Higgs system. Tachyonic preheating involves exponentially growing IR modes, at scales as large as the horizon. Such a transition at the electroweak scale could be detectable by LISA, if these non-perturbatively large modes translate into non-linear dynamics sourcing gravitational waves. Through large-scale numerical simulations, we find that the spectrum of gravitational waves does not exhibit such IR features. Instead, we find two peaks corresponding to the Higgs and gauge field mass, respectively. We find that the gravitational wave production is reduced when adding non-Abelian gauge fields to a scalar-only theory, but increases when adding Abelian gauge fields. In particular, gauge fields suppress the gravitational wave spectrum in the IR. A tachyonic transition in the early Universe will therefore not be detectable by LISA, even if it involves non-Abelian gauge fields.
Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy
2012-11-01
Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a heavy workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations do. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production implies having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and their lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.
Commercializing Biological Control
ERIC Educational Resources Information Center
LeLeu, K. L.; Young, M. A.
1973-01-01
Describes the only commercial establishment involved in biological control in Australia. The wasp Aphytis melinus, which parasitizes the insect Red Scale, is bred in large numbers and released in the citrus groves where Red Scale is causing damage to the fruit. (JR)
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2016-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and small…
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and…
NASA Technical Reports Server (NTRS)
Furlong, G Chester; Mchugh, James G
1957-01-01
An analysis of the longitudinal characteristics of swept wings, based on available large-scale low-speed data and supplemented with small-scale data when feasible, is presented. The emphasis has been placed on differentiating the characteristics according to the basic flow phenomena involved. Insofar as possible, all large-scale data available as of August 15, 1951 have been summarized in tabular form for ready reference.
Tracey S. Frescino; Gretchen G. Moisen
2009-01-01
The Interior-West Forest Inventory and Analysis (FIA) Nevada Photo-Based Inventory Pilot (NPIP), launched in 2004, involved acquisition, processing, and interpretation of large-scale aerial photographs on a subset of FIA plots (both forest and nonforest) throughout the state of Nevada. Two objectives of the pilot were to use the interpreted photo data to enhance...
ERIC Educational Resources Information Center
Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena; Deehan, James
2016-01-01
In this paper, we present the results from a study of the impact on students involved in a large-scale inquiry-based astronomical high school education intervention in Australia. Students in this intervention were led through an educational design allowing them to undertake an investigative approach to understanding the lifecycle of stars more…
ERIC Educational Resources Information Center
Woodhead, Martin
Those involved in early childhood development must recognize that many of their most cherished beliefs about what is best for children are cultural constructions. This book focuses on quality in large-scale programs for disadvantaged young children in a variety of cultural settings. Chapter 1, "Changing Childhoods," discusses issues…
ERIC Educational Resources Information Center
Cox, Cristián; Meckes, Lorena
2016-01-01
Since the 1990s, Chile has participated in all major international large-scale assessment studies (ILSAs) of the IEA and OECD, as well as the regional ones conducted by UNESCO in Latin America, after it had been involved in the very first international Science Study in 1970-1971. This article examines the various ways in which these studies have…
Development and Validation of a Spanish Version of the Grit-S Scale
Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.
2018-01-01
This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single-factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705
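For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of Cronbach's alpha on simulated item responses (the study's actual data are not reproduced here):

```python
# Hedged sketch: Cronbach's alpha from an item-response matrix. Simulated data;
# item count matches the 8-item Grit-S, everything else is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 8
latent = rng.standard_normal((n_respondents, 1))          # shared trait
items = latent + 0.8 * rng.standard_normal((n_respondents, n_items))

def cronbach_alpha(X):
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()               # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)                 # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(items):.2f}")
```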
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. This article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data, including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
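One classic example of the sampling and data reduction methods the article highlights is reservoir sampling, which draws a fixed-size uniform random sample from a stream too large to hold in memory; a minimal sketch:

```python
# Algorithm R reservoir sampling: one pass, O(k) memory, each stream item ends
# up in the sample with equal probability k/n.
import random

def reservoir_sample(stream, k, seed=0):
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = rng.randint(0, i)         # keep item with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(10_000_000), k=5))
```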
Liu, Ke; Zhang, Jian; Bao, Jie
2015-11-01
A two-stage hydrolysis of corn stover was designed to resolve the conflict between sufficient mixing at high solids content and high power input encountered in large-scale bioreactors. The process starts with a quick liquefaction step that converts solid cellulose to liquid slurry under strong mixing in small reactors, followed by comprehensive hydrolysis to complete saccharification into fermentable sugars in large reactors without agitation apparatus. 60% of the mixing energy consumption was saved by removing the mixing apparatus in the large-scale vessels. The scale-up ratio was small for the first-step hydrolysis reactors because of the reduced reactor volume. For the large saccharification reactors in the second step, scale-up was easy because no mixing mechanism was involved. This two-stage hydrolysis is applicable to either simple hydrolysis or combined fermentation processes. The method provides a practical process option for industrial-scale biorefinery processing of lignocellulosic biomass.
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of the multivariate normal distribution so as to incorporate two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation, in general, was more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is verified using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that incorporating large-scale climatic indices can improve forecasting accuracy.
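The central computation described in the abstract, conditioning a multivariate normal on the current SPI and two climatic indices and then integrating the conditional normal over SPI class boundaries, can be sketched as follows. The covariance matrix, observed values and class bounds below are illustrative, not fitted to the Luanhe data:

```python
# Hedged sketch: transition probabilities from a conditional multivariate normal.
# Joint vector is (SPI_now, index1, index2, SPI_future); all numbers illustrative.
import numpy as np
from scipy import stats

mu = np.zeros(4)
Sigma = np.array([[1.0, 0.3, 0.2, 0.6],
                  [0.3, 1.0, 0.1, 0.4],
                  [0.2, 0.1, 1.0, 0.3],
                  [0.6, 0.4, 0.3, 1.0]])

obs, tgt = [0, 1, 2], [3]
x_obs = np.array([-1.2, 0.5, -0.3])                 # observed SPI and indices

S_oo = Sigma[np.ix_(obs, obs)]
S_to = Sigma[np.ix_(tgt, obs)]
m_c = mu[tgt] + S_to @ np.linalg.solve(S_oo, x_obs - mu[obs])   # cond. mean
v_c = Sigma[np.ix_(tgt, tgt)] - S_to @ np.linalg.solve(S_oo, S_to.T)

bounds = [-np.inf, -2.0, -1.5, -1.0, 1.0, np.inf]   # illustrative SPI classes
cdf = stats.norm.cdf(bounds, loc=m_c[0], scale=np.sqrt(v_c[0, 0]))
print("class transition probabilities:", np.diff(cdf))
```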
Statistical mechanics of soft-boson phase transitions
NASA Technical Reports Server (NTRS)
Gupta, Arun K.; Hill, Christopher T.; Holman, Richard; Kolb, Edward W.
1991-01-01
The existence of structure on large (100 Mpc) scales, and limits to anisotropies in the cosmic microwave background radiation (CMBR), have imperiled models of structure formation based solely upon the standard cold dark matter scenario. Novel scenarios, which may be compatible with large-scale structure and small CMBR anisotropies, invoke nonlinear fluctuations in the density appearing after recombination, accomplished via late-time phase transitions involving ultralow-mass scalar bosons. Herein, the statistical mechanics of such phase transitions are studied in several models involving naturally ultralow-mass pseudo-Nambu-Goldstone bosons (pNGB's), which are believed to be the most natural possibilities for such ultralow-mass particles. These models can exhibit several interesting effects at high temperature.
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses
NASA Astrophysics Data System (ADS)
Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.
2014-12-01
Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates extreme weather will increase in global change scenarios, extremes are often related to the large-scale atmospheric circulation while also occurring infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with greater than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and since the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we will compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
Book Review: Large-Scale Ecosystem Restoration: Five Case Studies from the United States
Broad-scale ecosystem restoration efforts involve a very complex set of ecological and societal components, and the success of any ecosystem restoration project rests on an integrated approach to implementation. Editors Mary Doyle and Cynthia Drew have successfully synthesized ma...
Measurement of Thunderstorm Cloud-Top Parameters Using High-Frequency Satellite Imagery
1978-01-01
short wave was present well to the south of this system approximately 2000 km west of Baja California. Two distinct flow patterns were present, one...view can be observed in near real time whereas radar observations, although excellent for local purposes, involve substantial errors when composited...on a large scale. The time delay in such large-scale compositing is critical when attempting to monitor convective cloud systems for a potential
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Branco, Sara; Bi, Ke; Liao, Hui-Ling; Gladieux, Pierre; Badouin, Hélène; Ellison, Christopher E.; Nguyen, Nhu H.; Vilgalys, Rytas; Peay, Kabir G.; Taylor, John W.; Bruns, Thomas D.
2016-01-01
Recent advancements in sequencing technology have allowed researchers to better address the patterns and mechanisms involved in microbial environmental adaptation at large spatial scales. Here we investigated the genomic basis of adaptation to climate at the continental scale in Suillus brevipes, an ectomycorrhizal fungus symbiotically associated with the roots of pine trees. We used genomic data from 55 individuals in seven locations across North America to perform genome scans to detect signatures of positive selection and assess whether temperature and precipitation were associated with genetic differentiation. We found that S. brevipes exhibited overall strong population differentiation, with potential admixture in Canadian populations. This species also displayed genomic signatures of positive selection as well as genomic sites significantly associated with distinct climatic regimes and abiotic environmental parameters. These genomic regions included genes involved in transmembrane transport of substances and helicase activity potentially involved in cold stress response. Our study sheds light on large-scale environmental adaptation in fungi by identifying putative adaptive genes and providing a framework to further investigate the genetic basis of fungal adaptation. PMID:27761941
Large-Scale Brain Systems in ADHD: Beyond the Prefrontal-Striatal Model
Castellanos, F. Xavier; Proal, Erika
2012-01-01
Attention-deficit/hyperactivity disorder (ADHD) has long been thought to reflect dysfunction of prefrontal-striatal circuitry, with involvement of other circuits largely ignored. Recent advances in systems neuroscience-based approaches to brain dysfunction enable the development of models of ADHD pathophysiology that encompass a number of different large-scale “resting state” networks. Here we review progress in delineating large-scale neural systems and illustrate their relevance to ADHD. We relate frontoparietal, dorsal attentional, motor, visual, and default networks to the ADHD functional and structural literature. Insights emerging from mapping intrinsic brain connectivity networks provide a potentially mechanistic framework for understanding aspects of ADHD, such as neuropsychological and behavioral inconsistency, and the possible role of primary visual cortex in attentional dysfunction in the disorder. PMID:22169776
NASA Astrophysics Data System (ADS)
Michioka, Takenobu; Sato, Ayumu; Sada, Koichi
2011-10-01
Large-scale turbulent motions enhancing horizontal gas spread in an atmospheric boundary layer are simulated in a wind-tunnel experiment. The large-scale turbulent motions can be generated using an active grid installed at the front of the test section in the wind tunnel, when appropriate parameters for the angular deflection and the rotation speed are chosen. The power spectra of vertical velocity fluctuations are unchanged with and without the active grid because they are strongly affected by the surface. The power spectra of both streamwise and lateral velocity fluctuations with the active grid increase in the low frequency region, and are closer to the empirical relations inferred from field observations. The large-scale turbulent motions do not affect the Reynolds shear stress, but change the balance of the processes involved. The relative contributions of ejections to sweeps are suppressed by large-scale turbulent motions, indicating that the motions behave as sweep events. The lateral gas spread is enhanced by the lateral large-scale turbulent motions generated by the active grid. The large-scale motions, however, do not affect the vertical velocity fluctuations near the surface, resulting in their having a minimal effect on the vertical gas spread. The peak concentration normalized using the root-mean-squared value of concentration fluctuation is remarkably constant over most regions of the plume irrespective of the operation of the active grid.
Organizational Commitment and Nurses' Characteristics as Predictors of Job Involvement.
Alammar, Kamila; Alamrani, Mashael; Alqahtani, Sara; Ahmad, Muayyad
2016-01-01
To predict nurses' job involvement on the basis of their organizational commitment and personal characteristics at a large tertiary hospital in Saudi Arabia. Data were collected in 2015 from a convenience sample of 558 nurses working at a large tertiary hospital in Riyadh, Saudi Arabia. A cross-sectional correlational design was used in this study. Data were collected using a structured questionnaire. All commitment scales had significant relationships. Multiple linear regression analysis revealed that the model predicted a sizeable proportion of variance in nurses' job involvement (p < 0.001). High organizational commitment enhances job involvement, which may lead to more organizational stability and effectiveness.
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.
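For reference, a minimal sketch of the velocity structure functions whose LES hierarchy the paper derives, S_p(r) = <|u(x+r) - u(x)|^p>, estimated on a synthetic one-dimensional signal standing in for resolved LES data:

```python
# Toy estimate of structure functions S_p(r) on a synthetic signal; a rough
# surrogate for turbulent velocity data, not output from the paper's LES.
import numpy as np

rng = np.random.default_rng(2)
n = 8192
u = np.cumsum(rng.standard_normal(n))    # random-walk stand-in for velocity
u -= u.mean()

def structure_function(u, r, p):
    du = np.roll(u, -r) - u              # increments at separation r (periodic wrap)
    return np.mean(np.abs(du) ** p)

for r in (1, 4, 16, 64):
    print(r, structure_function(u, r, 2), structure_function(u, r, 4))
```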
Parental Involvement in Norwegian Schools
ERIC Educational Resources Information Center
Paulsen, Jan Merok
2012-01-01
This article examines findings on key challenges of school-parent relations in Norway. The review is based on recent large-scale studies on several issues, including formalized school-parent cooperation, parental involvement in the pedagogical discourse, and teacher perspectives on the parents' role in the school community. Findings suggest a…
Could the electroweak scale be linked to the large scale structure of the Universe?
NASA Technical Reports Server (NTRS)
Chakravorty, Alak; Massarotti, Alessandro
1991-01-01
We study a model where the domain walls are generated through a cosmological phase transition involving a scalar field. We assume the existence of a coupling between the scalar field and dark matter and show that the interaction between domain walls and dark matter leads to an energy-dependent reflection mechanism. For a simple Yukawa coupling, we find that the vacuum expectation value of the scalar field must be approximately 30 GeV - 1 TeV in order for the model to be successful in the formation of large-scale 'pancake' structures.
Stable isotope probing to study functional components of complex microbial ecosystems.
Mazard, Sophie; Schäfer, Hendrik
2014-01-01
This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure-culture-independent technique circumvents the limitations of traditional microbial isolation techniques and of data mining from large-scale whole-community metagenomic studies, teasing out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, including fatty-acid analyses, community structure and ecology studies, and targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and the time-consuming bioinformatics analyses of large datasets.
Future of applied watershed science at regional scales
Lee Benda; Daniel Miller; Steve Lanigan; Gordon Reeves
2009-01-01
Resource managers must deal increasingly with land use and conservation plans applied at large spatial scales (watersheds, landscapes, states, regions) involving multiple interacting federal agencies and stakeholders. Access to a geographically focused and application-oriented database would allow users in different locations and with different concerns to quickly...
Placeless Organizations: Collaborating for Transformation
ERIC Educational Resources Information Center
Nardi, Bonnie A.
2007-01-01
This article defines and discusses placeless organizations as sites and generators of learning on a large scale. The emphasis is on how placeless organizations structure themselves to carry out social transformation--necessarily involving intensive learning--on a national or global scale. The argument is made that place is not a necessary…
Validity and Reliability of a Scale Assessing Attitudes toward Mainstreaming.
ERIC Educational Resources Information Center
Green, Kathy; And Others
1983-01-01
In a study involving 50 undergraduate and graduate education students the Attitudes Toward Mainstreaming Scale was found to have a large first factor with adequate reliability. There was a low correlation between knowledge and ATMS scores, although knowledge was more strongly related to classroom acceptance of exceptional students. (CL)
de Boysson, Hubert; Boulouis, Grégoire; Aouba, Achille; Bienvenu, Boris; Guillevin, Loïc; Zuber, Mathieu; Touzé, Emmanuel; Naggara, Olivier; Pagnoux, Christian
2017-03-01
We aimed to identify whether presentations and outcomes in adult patients with isolated small-vessel primary angiitis of the CNS (PACNS) would differ from other patients with large/medium-vessel involvement. In the French PACNS cohort, we compared the characteristics, treatments and outcomes of patients with isolated small-vessel disease (normal CT, MR and/or conventional angiograms, brain biopsy positive for vasculitis) with other patients who had large/medium-vessel involvement (vessel abnormalities on CT, MR or conventional angiograms). A good functional outcome was defined as a modified Rankin scale ⩽2 at last follow-up, regardless of the occurrence of relapse. Among the 102 patients in the cohort, 26 (25%) had isolated small-vessel PACNS, whereas the 76 others demonstrated large/medium-vessel involvement. Patients with isolated small-vessel PACNS had more seizures (P < 0.0001), cognitive (P = 0.02) or consciousness impairment (P = 0.03) and more dyskinesias (P = 0.002) but less focal deficits (P = 0.0002) than other PACNS patients. They also had more abnormal cerebrospinal fluid analysis (P = 0.008) and gadolinium enhancements on MRI (P = 0.001) but less frequent acute ischaemic lesions (P < 0.0001) than patients with large/medium-vessel involvement. Treatments and modified Rankin scale at last follow-up did not differ between groups. Thirty-two (31%) patients relapsed; 14 (54%) with isolated small-vessel PACNS vs 18 (24%) with large/medium-vessel involvement (P = 0.004). Eight patients died, with no difference between the groups (P = 0.97). In our cohort, adult patients with isolated small-vessel PACNS presented some distinct disease features and relapsed more often than other PACNS patients who had large/medium-vessel involvement. Functional outcomes and mortality did not differ.
Perspectives on integrated modeling of transport processes in semiconductor crystal growth
NASA Technical Reports Server (NTRS)
Brown, Robert A.
1992-01-01
The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.
Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul
2008-01-01
With the increase in demand for high-quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability among the multiple units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate of the integrated hospital information system is about 1.8 TByte, and the total quantity of data produced so far is about 60 TByte. Large-scale information exchange and sharing will be particularly useful for telemedicine applications.
Validity of Scores for a Developmental Writing Scale Based on Automated Scoring
ERIC Educational Resources Information Center
Attali, Yigal; Powers, Donald
2009-01-01
A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort that involved a national sample of more than 12,000 students from 4th, 6th, 8th, 10th, and 12th grade,…
2012-05-01
pressures on supply that led to the global food crisis of 2007 and 2008, allowing prices to fall from their peak in August 2008, the foundational...involved in the acquisition of farmland. This trend is also unlikely to slow, with food prices continuing to climb, surpassing the highs of 2007 and...and general secrecy in most large-scale land acquisition contracts, exact data regarding the number of deals and amount of land transferred are
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how managing and avoiding safety issues in some nanosystems will occur, a process long established in nature.
USDA-ARS?s Scientific Manuscript database
Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large-scale genome-wide report on the compu...
Large Emergency-Response Exercises: Qualitative Characteristics--A Survey
ERIC Educational Resources Information Center
Lee, Yang-Im; Trim, Peter; Upton, Julia; Upton, David
2009-01-01
Exercises, drills, or simulations are widely used, by governments, agencies and commercial organizations, to simulate serious incidents and train staff how to respond to them. International cooperation has led to increasingly large-scale exercises, often involving hundreds or even thousands of participants in many locations. The difference between…
ERIC Educational Resources Information Center
Reynolds, Arthur J.; Hayakawa, Momoko; Ou, Suh-Ruu; Mondi, Christina F.; Englund, Michelle M.; Candee, Allyson J.; Smerillo, Nicole E.
2017-01-01
We describe the development, implementation, and evaluation of a comprehensive preschool to third grade prevention program for the goals of sustaining services at a large scale. The Midwest Child-Parent Center (CPC) Expansion is a multilevel collaborative school reform model designed to improve school achievement and parental involvement from ages…
Formulating a subgrid-scale breakup model for microbubble generation from interfacial collisions
NASA Astrophysics Data System (ADS)
Chan, Wai Hong Ronald; Mirjalili, Shahab; Urzay, Javier; Mani, Ali; Moin, Parviz
2017-11-01
Multiphase flows often involve impact events that engender important effects like the generation of a myriad of tiny bubbles that are subsequently transported in large liquid bodies. These impact events are created by large-scale phenomena like breaking waves on ocean surfaces, and often involve the relative approach of liquid surfaces. This relative motion generates continuously shrinking length scales as the entrapped gas layer thins and eventually breaks up into microbubbles. The treatment of this disparity in length scales is computationally challenging. In this presentation, a framework is presented that addresses a subgrid-scale (SGS) model aimed at capturing the process of microbubble generation. This work sets up the components in an overarching volume-of-fluid (VoF) toolset and investigates the analytical foundations of an SGS model for describing the breakup of a thin air film trapped between two approaching water bodies in a physical regime corresponding to Mesler entrainment. Constituents of the SGS model, such as the identification of impact events and the accurate computation of the local characteristic curvature in a VoF-based architecture, and the treatment of the air layer breakup, are discussed and illustrated in simplified scenarios. Supported by Office of Naval Research (ONR)/A*STAR (Singapore).
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Integration and segregation of large-scale brain networks during short-term task automatization
Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes
2016-01-01
The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
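A hedged sketch of the posterior-analysis idea behind CB-FS (cluster sampled frames into states, then use a supervised model's feature importances to identify which degrees of freedom separate the states); the features and data below are random stand-ins for, e.g., dihedrals or contact distances, and the actual method couples this with Markov state models:

```python
# Hedged sketch: clustering-based feature selection on toy conformational data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 20
X = rng.standard_normal((n_frames, n_features))
X[1000:, 3] += 3.0                        # feature 3 secretly separates two states

# Unsupervised step: assign frames to conformational states.
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised step: rank features by how well they separate the states.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, states)
ranked = np.argsort(clf.feature_importances_)[::-1]
print("most state-separating features:", ranked[:3])
```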
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large-scale modelling of LWFA scenarios.
Chemical Warfare and Medical Response During World War I
Fitzgerald, Gerard J.
2008-01-01
The first large-scale use of a traditional weapon of mass destruction (chemical, biological, or nuclear) involved the successful deployment of chemical weapons during World War I (1914–1918). Historians now refer to the Great War as the chemist’s war because of the scientific and engineering mobilization efforts by the major belligerents. The development, production, and deployment of war gases such as chlorine, phosgene, and mustard created a new and complex public health threat that endangered not only soldiers and civilians on the battlefield but also chemical workers on the home front involved in the large-scale manufacturing processes. The story of chemical weapons research and development during that war provides useful insights for current public health practitioners faced with a possible chemical weapons attack against civilian or military populations. PMID:18356568
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
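A minimal sketch of the surrogate-modeling idea described above: train a small neural network on input-output pairs from an expensive forward model, then evaluate the cheap surrogate in its place. The "expensive_model" below is a hypothetical stand-in, not actual viscoelastic physics:

```python
# Hedged sketch: a neural-network surrogate for a costly forward model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_model(params):
    # Stand-in for an expensive viscoelastic calculation over model parameters.
    return np.sin(params[:, 0]) * np.exp(-params[:, 1]) + 0.5 * params[:, 2] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5000, 3))
y_train = expensive_model(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# Compare the true model and the surrogate on unseen inputs.
X_test = rng.uniform(-1, 1, size=(5, 3))
print(np.c_[expensive_model(X_test), surrogate.predict(X_test)])
```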
Shear-driven dynamo waves at high magnetic Reynolds number.
Tobias, S M; Cattaneo, F
2013-05-23
Astrophysical magnetic fields often display remarkable organization, despite being generated by dynamo action driven by turbulent flows at high conductivity. An example is the eleven-year solar cycle, which shows spatial coherence over the entire solar surface. The difficulty in understanding the emergence of this large-scale organization is that whereas at low conductivity (measured by the magnetic Reynolds number, Rm) dynamo fields are well organized, at high Rm their structure is dominated by rapidly varying small-scale fluctuations. This arises because the smallest scales have the highest rate of strain, and can amplify magnetic field most efficiently. Therefore most of the effort to find flows whose large-scale dynamo properties persist at high Rm has been frustrated. Here we report high-resolution simulations of a dynamo that can generate organized fields at high Rm; indeed, the generation mechanism, which involves the interaction between helical flows and shear, only becomes effective at large Rm. The shear does not enhance generation at large scales, as is commonly thought; instead it reduces generation at small scales. The solution consists of propagating dynamo waves, whose existence was postulated more than 60 years ago and which have since been used to model the solar cycle.
String-like collective motion in the α- and β-relaxation of a coarse-grained polymer melt
NASA Astrophysics Data System (ADS)
Pazmiño Betancourt, Beatriz A.; Starr, Francis W.; Douglas, Jack F.
2018-03-01
Relaxation in glass-forming liquids occurs as a multi-stage hierarchical process involving cooperative molecular motion. First, there is a "fast" relaxation process dominated by the inertial motion of the molecules whose amplitude grows upon heating, followed by a longer time α-relaxation process involving both large-scale diffusive molecular motion and momentum diffusion. Our molecular dynamics simulations of a coarse-grained glass-forming polymer melt indicate that the fast, collective motion becomes progressively suppressed upon cooling, necessitating large-scale collective motion by molecular diffusion for the material to relax approaching the glass-transition. In each relaxation regime, the decay of the collective intermediate scattering function occurs through collective particle exchange motions having a similar geometrical form, and quantitative relationships are derived relating the fast "stringlet" collective motion to the larger scale string-like collective motion at longer times, which governs the temperature-dependent activation energies associated with both thermally activated molecular diffusion and momentum diffusion.
Impact of oceanic-scale interactions on the seasonal modulation of ocean dynamics by the atmosphere.
Sasaki, Hideharu; Klein, Patrice; Qiu, Bo; Sasai, Yoshikazu
2014-12-15
Ocean eddies (with a size of 100-300 km), ubiquitous in satellite observations, are known to represent about 80% of the total ocean kinetic energy. Recent studies have pointed out the unexpected role of smaller oceanic structures (with 1-50 km scales) in generating and sustaining these eddies. The interpretation proposed so far invokes the internal instability resulting from the large-scale interaction between upper and interior oceanic layers. Here we show, using a new high-resolution simulation of the realistic North Pacific Ocean, that ocean eddies are instead sustained by a different process that involves small-scale mixed-layer instabilities set up by large-scale atmospheric forcing in winter. This leads to a seasonal evolution of the eddy kinetic energy in a very large part of this ocean, with an amplitude varying by a factor almost equal to 2. Perspectives in terms of the impacts on climate dynamics and future satellite observational systems are briefly discussed.
Large-scale oscillatory calcium waves in the immature cortex.
Garaschuk, O; Linn, J; Eilers, J; Konnerth, A
2000-05-01
Two-photon imaging of large neuronal networks in cortical slices of newborn rats revealed synchronized oscillations in intracellular Ca2+ concentration. These spontaneous Ca2+ waves usually started in the posterior cortex and propagated slowly (2.1 mm per second) toward its anterior end. Ca2+ waves were associated with field-potential changes and required activation of AMPA and NMDA receptors. Although GABAA receptors were not involved in wave initiation, the developmental transition of GABAergic transmission from depolarizing to hyperpolarizing (around postnatal day 7) stopped the oscillatory activity. Thus we identified a type of large-scale Ca2+ wave that may regulate long-distance wiring in the immature cortex.
NASA Astrophysics Data System (ADS)
Best, J.
2004-05-01
The origin and scaling of large-scale coherent flow structures have been of central interest in furthering understanding of the nature of turbulent boundary layers, and recent work has shown the presence of large-scale turbulent flow structures that may extend through the whole flow depth. Such structures may dominate the entrainment of bedload sediment and advection of fine sediment in suspension. However, we still know remarkably little of the interactions between the dynamics of coherent flow structures and sediment transport, and its implications for ecosystem dynamics. This paper will discuss the first results of two-phase particle imaging velocimetry (PIV) that has been used to visualize large-scale turbulent flow structures moving over a flat bed in a water channel, and the motion of sand particles within these flows. The talk will outline the methodology, involving the fluorescent tagging of sediment and its discrimination from the fluid phase, and show results that illustrate the key role of these large-scale structures in the transport of sediment. Additionally, the presence of these structures will be discussed in relation to the origin of vorticity within flat-bed boundary layers and recent models that envisage these large-scale motions as being linked to whole-flow field structures. Discussion will focus on whether these recent models simply reflect the organization of turbulent boundary layer structure and vortex packets, some of which are amply visualised at the laminar-turbulent transition.
So What's New? A Survey of the Educational Policies of Orchestras and Opera Companies
ERIC Educational Resources Information Center
Winterson, Julia
2010-01-01
The creative music workshop involving professional players was intended to give direct support to school teachers and to enhance music in the classroom. However, today's large-scale, high-profile projects mounted by orchestras and opera companies appear to be developing into a full-scale industry on their own, their role in partnership with…
NASA Astrophysics Data System (ADS)
Tellmann, S.; Häusler, B.; Hinson, D. P.; Tyler, G. L.; Andert, T. P.; Bird, M. K.; Imamura, T.; Pätzold, M.; Remus, S.
2014-04-01
Atmospheric waves on almost all spatial scales have been observed in various regions of the Venus atmosphere. They play a crucial role in the redistribution of energy, momentum, and atmospheric constituents, and are thought to be involved in the development and maintenance of the atmospheric superrotation.
ERIC Educational Resources Information Center
Gallagher, Mary Jean; Malloy, John; Ryerson, Rachel
2016-01-01
This paper offers an insiders' perspective on the large-scale, system-wide educational change undertaken in Ontario, Canada from 2003 to the present. The authors, Ministry and school system leaders intimately involved in this change process, explore how Ontario has come to be internationally recognized as an equitable, high-achieving, and…
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1986-01-01
Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the standpoint of the nonlinear aspects of hydrodynamic stability. Whether fine-grained turbulence is present or absent, the problem lacks a small parameter. The problem is formulated on the basis of conservation principles, which express the dynamics of the problem and are directed towards extracting the most physical information; it is emphasized, however, that approximations must also be involved.
Research Activities at Fermilab for Big Data Movement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W
2013-01-01
Adaptation of 100GE networking infrastructure is the next step towards management of Big Data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of big data. In this paper, we describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.
ERIC Educational Resources Information Center
Scott, Graham W.; Boyd, Margaret
2016-01-01
This paper demonstrates the positive impact of learning through ecological fieldwork upon children's ability to write, and to write about science. Specifically we have carried out a relatively large-scale study (involving 379 children aged 9-11 years from 8 primary schools in North East England) comparing intervention classes (involved in…
ERIC Educational Resources Information Center
Busseri, Michael A.; Willoughby, Teena; Chalmers, Heather; Bogaert, Anthony F.
2008-01-01
On the basis of a large-scale survey of high-school youth, the authors compared adolescents reporting exclusively heterosexual, mostly heterosexual, bisexual, and predominately same-sex attraction based on high-risk involvement across a range of risk behaviors. Bisexual and same-sex attracted groups were characterized by heightened high-risk…
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little numerical error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put another way, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
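The conservation idea behind the constraint can be stated compactly in LaTeX (a generic flux-form sketch in our own notation; the paper's actual discrete constraint, which also involves the sound-wave computation, is not reproduced here):

% Moist entropy density \rho s as the prognostic variable, in flux form:
\frac{\partial (\rho s)}{\partial t} + \nabla \cdot (\rho s\, \mathbf{u}) = S
% Summed over a flux-form grid the transport terms telescope, so total entropy
% changes only through boundary fluxes and the source term S, leaving no
% accumulative numerical error in the entropy budget.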
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
Lightweight computational steering of very large scale molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beazley, D.M.; Lomdahl, P.S.
1996-09-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
A multidisciplinary approach to the development of low-cost high-performance lightwave networks
NASA Technical Reports Server (NTRS)
Maitan, Jacek; Harwit, Alex
1991-01-01
Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
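For illustration, the basic LSH mechanics can be sketched in a few lines of Python (a random-hyperplane scheme for cosine similarity; this is a generic sketch, not the paper's Hadoop variants or optimizations, and all sizes below are hypothetical):

import numpy as np

rng = np.random.default_rng(42)
dim, n_bits = 50, 16
planes = rng.normal(size=(n_bits, dim))  # random hyperplanes define the hash

def lsh_signature(v):
    # Each bit records which side of a hyperplane the vector falls on;
    # similar vectors tend to agree on most bits.
    return tuple((planes @ v > 0).astype(int))

# Index a toy database into hash buckets.
database = rng.normal(size=(1000, dim))
buckets = {}
for i, v in enumerate(database):
    buckets.setdefault(lsh_signature(v), []).append(i)

# Candidate near neighbors of a query are the items sharing its bucket,
# so only a small fraction of the database is ever compared directly.
query = database[0] + 0.05 * rng.normal(size=dim)
print(buckets.get(lsh_signature(query), []))  # likely contains item 0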
A large-scale perspective on stress-induced alterations in resting-state networks
NASA Astrophysics Data System (ADS)
Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron
2016-02-01
Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed and how it relates to subjective experience are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change, involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with change in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight on stress-induced neural modulations and their relation to subjective experience.
Role of optometry school in single day large scale school vision testing
Anuradha, N; Ramani, Krishnakumar
2015-01-01
Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from the eye care professionals. A new strategy involving a school of optometry in single day large scale school vision testing is discussed. Aim: The aim was to describe a new approach of performing vision testing of school children on a large scale in a single day. Materials and Methods: A single day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students and headed by optometrists) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors. Of these, 28 (1.26%) children belonged to the primary, 163 (9.80%) to the middle, 129 (4.67%) to the secondary and 100 (1.73%) to the higher secondary levels of education, respectively. 265 (2.12%) children were referred for further evaluation. Conclusion: Single day large scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271
ERIC Educational Resources Information Center
Lester, Benjamin T.; Ma, Li; Lee, Okhee; Lambert, Julie
2006-01-01
As part of a large-scale instructional intervention research, this study examined elementary students' science knowledge and awareness of social activism with regard to an increased greenhouse effect and global warming. The study involved fifth-grade students from five elementary schools of varying demographic makeup in a large urban school…
ERIC Educational Resources Information Center
Levin, Ben
2013-01-01
This brief discusses the problem of scaling innovations in education in the United States so that they can serve very large numbers of students. It begins with a general discussion of the issues involved, develops a set of five criteria for assessing challenges of scaling, and then uses three programs widely discussed in the U.S. as examples of…
Environment and host as large-scale controls of ectomycorrhizal fungi.
van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I
2018-06-06
Explaining the large-scale diversity of soil organisms that drive biogeochemical processes, and their responses to environmental change, is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is, to our knowledge, unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event and provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, computation time and number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully-integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), while the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
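For reference, the NSE values quoted above are Nash-Sutcliffe efficiencies, which compare simulated and observed hydrographs; a minimal Python sketch with hypothetical values:

import numpy as np

def nse(observed, simulated):
    # NSE = 1 - (sum of squared errors) / (variance of observations about their mean).
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([10.0, 30.0, 55.0, 40.0, 20.0])  # hypothetical observed flows
sim = np.array([12.0, 28.0, 50.0, 43.0, 18.0])  # hypothetical simulated flows
print(round(nse(obs, sim), 3))  # 1.0 would be a perfect match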
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
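The core idea of a Chebyshev expansion of the density matrix can be sketched with a dense toy example in Python (our own illustration of the technique, not the CheSS API; the Hamiltonian below is a random symmetric matrix and the Fermi function parameters are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(40, 40))
H = (H + H.T) / 2                      # toy symmetric "Hamiltonian"
ev = np.linalg.eigvalsh(H)
emin, emax = ev[0], ev[-1]
Hs = (2 * H - (emax + emin) * np.eye(40)) / (emax - emin)  # spectrum mapped into [-1, 1]

beta, mu = 20.0, 0.0                   # inverse temperature, chemical potential (scaled units)
f = lambda e: 1.0 / (1.0 + np.exp(beta * (e - mu)))        # Fermi function

# Chebyshev coefficients of f via Chebyshev-Gauss quadrature.
n = 200
theta = np.pi * (np.arange(n) + 0.5) / n
c = np.array([2.0 / n * np.sum(f(np.cos(theta)) * np.cos(k * theta)) for k in range(n)])
c[0] /= 2.0

# Density matrix D = f(H) built from the recurrence T_{k+1} = 2 Hs T_k - T_{k-1};
# for sparse H this needs only matrix products, never a diagonalization.
T_prev, T_curr = np.eye(40), Hs
D = c[0] * T_prev + c[1] * T_curr
for k in range(2, n):
    T_prev, T_curr = T_curr, 2 * Hs @ T_curr - T_prev
    D += c[k] * T_curr

w, V = np.linalg.eigh(Hs)
exact = V @ (f(w)[:, None] * V.T)      # reference via diagonalization
print(np.max(np.abs(D - exact)))       # small expansion error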
Squire, J.; Bhattacharjee, A.
2016-03-14
A novel large-scale dynamo mechanism, the magnetic shear-current effect, is discussed and explored. Here, the effect relies on the interaction of magnetic fluctuations with a mean shear flow, meaning the saturated state of the small-scale dynamo can drive a large-scale dynamo – in some sense the inverse of dynamo quenching. The dynamo is non-helical, with the mean-field α coefficient zero, and is caused by the interaction between an off-diagonal component of the turbulent resistivity and the stretching of the large-scale field by shear flow. Following up on previous numerical and analytic work, this paper presents further details of the numerical evidence for the effect, as well as a heuristic description of how magnetic fluctuations can interact with shear flow to produce the required electromotive force. The pressure response of the fluid is fundamental to this mechanism, which helps explain why the magnetic effect is stronger than its kinematic cousin, and the basic idea is related to the well-known lack of turbulent resistivity quenching by magnetic fluctuations. As well as being interesting for its applications to general high Reynolds number astrophysical turbulence, where strong small-scale magnetic fluctuations are expected to be prevalent, the magnetic shear-current effect is a likely candidate for large-scale dynamo in the unstratified regions of ionized accretion disks. Evidence for this is discussed, as well as future research directions and the challenges involved with understanding details of the effect in astrophysically relevant regimes.
Microwave evidence for large-scale changes associated with a filament eruption
NASA Technical Reports Server (NTRS)
Kundu, M. R.; Schmahl, E. J.; Fu, Q.-J.
1989-01-01
VLA observations at 6 and 20 cm wavelengths taken on August 3, 1985 are presented, showing an eruptive filament event in which microwave emission originated in two widely separated regions during the disintegration of the filament. The amount of heat required for the enhancement is estimated. Near-simultaneous changes in intensity and polarization were observed in the western components of the northern and southern regions. It is suggested that large-scale magnetic interconnections permitted the two regions to respond similarly to an external energy or mass source involved in the disruption of the filament.
How large is large enough for insects? Forest fragmentation effects at three spatial scales
NASA Astrophysics Data System (ADS)
Ribas, C. R.; Sobrinho, T. G.; Schoereder, J. H.; Sperber, C. F.; Lopes-Andrade, C.; Soares, S. M.
2005-02-01
Several mechanisms may lead to species loss in fragmented habitats, such as edge and shape effects, loss of habitat and heterogeneity. Ants and crickets were sampled in 18 forest remnants in south-eastern Brazil, to test whether a group of small remnants maintains the same insect species richness as similar sized large remnants, at three spatial scales. We tested hypotheses about alpha and gamma diversity to explain the results. Groups of remnants conserve as many species of ants as a single one. Crickets, however, showed a scale-dependent pattern: at small scales there was no significant or important difference between groups of remnants and a single one, while at the larger scale the group of remnants maintained more species. Alpha diversity (local species richness) was similar in a group of remnants and in a single one, at the three spatial scales, both for ants and crickets. Gamma diversity, however, varied both with taxa (ants and crickets) and spatial scale, which may be linked to insect mobility, remnant isolation, and habitat heterogeneity. Biological characteristics of the organisms involved have to be considered when studying fragmentation effects, as well as spatial scale at which it operates. Mobility of the organisms influences fragmentation effects, and consequently conservation strategies.
Policy and administrative issues for large-scale clinical interventions following disasters.
Scheeringa, Michael S; Cobham, Vanessa E; McDermott, Brett
2014-02-01
Large, programmatic mental health intervention programs for children and adolescents following disasters have become increasingly common; however, little has been written about the key goals and challenges involved. Using available data and the authors' experiences, this article reviews the factors involved in planning and implementing large-scale treatment programs following disasters. These issues include funding, administration, choice of clinical targets, workforce selection, choice of treatment modalities, training, outcome monitoring, and consumer uptake. Ten factors are suggested for choosing among treatment modalities: 1) reach (providing access to the greatest number), 2) retention of patients, 3) privacy, 4) parental involvement, 5) familiarity of the modality to clinicians, 6) intensity (intervention type matches symptom acuity and impairment of patient), 7) burden to the clinician (in terms of time, travel, and inconvenience), 8) cost, 9) technology needs, and 10) effect size. Traditionally, after every new disaster, local leaders who have never done so before have had to be recruited to design, administer, and implement programs. As expertise in all of these areas represents a gap for most local professionals in disaster-affected areas, we propose that a central, nongovernmental agency with national or international scope be created that can consult flexibly with local leaders following disasters on both overarching and specific issues. We propose recommendations and point out areas in greatest need of innovation.
Danis, Ildiko; Scheuring, Noemi; Papp, Eszter; Czinner, Antal
2012-06-01
A new instrument for assessing depressive mood, the first version of the Depression Scale Questionnaire (DS1K), was published in 2008 by Halmai et al. This scale was used in our large-sample study, in the framework of the For Healthy Offspring project, involving parents of young children. The original questionnaire was developed in small samples, so our aim was to assist further development of the instrument by psychometric analysis of the data in our large sample (n=1164). The DS1K scale was chosen to measure the parents' mood and mental state in the For Healthy Offspring project. The questionnaire was completed by 1063 mothers and 328 fathers, yielding a heterogeneous sample with respect to age and socio-demographic status. Analyses included main descriptive statistics, establishing the scale's internal consistency, and some comparisons. Results were checked in both our original and multiply imputed datasets. According to our results, the reliability of the scale was much worse than in the original study (Cronbach alpha: 0.61 versus 0.88). Detailed item analysis made clear that two items contributed to the observed decreased coherence. We assumed a problem related to misreading in the case of one of these items. This assumption was checked by cross-analysis stratified by the assumed reading level. According to our results, the reliability of the scale increased in both the lower and higher education level groups if we did not include one or both of these problematic items. However, as the number of items decreased, the relative sensitivity of the scale was also reduced, with fewer persons categorized in the risk group compared to the original scale. As an alternative solution, we suggest that the authors redefine the problematic items and retest the reliability of the measurement in a sample with diverse socio-demographic characteristics.
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
Lester O. Dillard; Kevin R. Russell; W. Mark Ford
2008-01-01
The federally threatened Cheat Mountain salamander (Plethodon nettingi; hereafter CMS) is known to occur in approximately 70 small, scattered populations in the Allegheny Mountains of eastern West Virginia, USA. Current conservation and management efforts on federal, state, and private lands involving CMS largely rely on small scale, largely...
ERIC Educational Resources Information Center
Heneman, Herbert G., III; Kimball, Steven; Milanowski, Anthony
2006-01-01
The present study contributes to knowledge of the construct validity of the short form of the Teacher Sense of Efficacy Scale (and by extension, given their similar content and psychometric properties, to the long form). The authors' research involves: (1) examining the psychometric properties of the TSES on a large sample of elementary, middle,…
Gram-scale synthesis of single-crystalline graphene quantum dots with superior optical properties.
Wang, Liang; Wang, Yanli; Xu, Tao; Liao, Haobo; Yao, Chenjie; Liu, Yuan; Li, Zhen; Chen, Zhiwen; Pan, Dengyu; Sun, Litao; Wu, Minghong
2014-10-28
Graphene quantum dots (GQDs) have various alluring properties and potential applications, but their large-scale applications are limited by current synthetic methods that commonly produce GQDs in small amounts. Moreover, GQDs usually exhibit polycrystalline or highly defective structures and thus poor optical properties. Here we report the gram-scale synthesis of single-crystalline GQDs by a facile molecular fusion route under mild and green hydrothermal conditions. The synthesis involves the nitration of pyrene followed by hydrothermal treatment in alkaline aqueous solutions, where alkaline species play a crucial role in tuning their size, functionalization and optical properties. The single-crystalline GQDs are bestowed with excellent optical properties such as bright excitonic fluorescence, strong excitonic absorption bands extending to the visible region, large molar extinction coefficients and long-term photostability. These high-quality GQDs can find a large array of novel applications in bioimaging, biosensing, light emitting diodes, solar cells, hydrogen production, fuel cells and supercapacitors.
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
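The shared-partner insight lends itself to a standard significance test; a hedged Python sketch using a hypergeometric tail (the paper's exact probability expression may differ, and all counts below are hypothetical):

from scipy.stats import hypergeom

N = 4000          # proteins in the network (hypothetical)
n1, n2 = 25, 30   # interaction-partner counts of proteins A and B
shared = 8        # common partners observed

# Probability of seeing at least `shared` overlaps by chance when B's n2
# partners are drawn at random from N proteins, n1 of which are A's partners.
p_value = hypergeom.sf(shared - 1, N, n1, n2)
print(p_value)    # a tiny p-value suggests a close functional association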
Evolution of Large-Scale Magnetic Fields and State Transitions in Black Hole X-Ray Binaries
NASA Astrophysics Data System (ADS)
Wang, Ding-Xiong; Huang, Chang-Yin; Wang, Jiu-Zhou
2010-04-01
The state transitions of black hole (BH) X-ray binaries are discussed based on the evolution of large-scale magnetic fields, in which a combination of three energy mechanisms is involved: (1) the Blandford-Znajek (BZ) process related to the open field lines connecting a rotating BH with remote astrophysical loads, (2) the magnetic coupling (MC) process related to the closed field lines connecting the BH with its surrounding accretion disk, and (3) the Blandford-Payne (BP) process related to the open field lines connecting the disk with remote astrophysical loads. It turns out that each spectral state of the BH binaries corresponds to a particular configuration of the magnetic field in the BH magnetosphere, and the main characteristics of the low/hard (LH), hard intermediate (HIM) and steep power-law (SPL) states are roughly fitted based on the evolution of large-scale magnetic fields associated with disk accretion.
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
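The commutation difficulty described above can be made precise in a couple of lines of LaTeX (our own notation, not the report's): writing the variable-width filter as an integral over a scaled kernel, differentiation picks up an extra term proportional to the gradient of the filter width.

% Variable-width filter: \overline{u}(x) = \int G(\xi)\, u\big(x - \delta(x)\,\xi\big)\, d\xi
% Differentiating under the integral sign gives
\frac{d\overline{u}}{dx} = \overline{\frac{du}{dx}} + \delta'(x)\, \frac{\partial \overline{u}}{\partial \delta}
% so filtering commutes with differentiation only where \delta'(x) = 0, i.e. where
% the filter width is uniform; elsewhere the extra term enters the large eddy equations.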
Wu, Junjun; Du, Guocheng; Zhou, Jingwen; Chen, Jian
2014-10-20
Flavonoids possess pharmaceutical potential due to their health-promoting activities. The complex structures of these products make extraction from plants difficult, and chemical synthesis is limited because of the use of many toxic solvents. Microbial production offers an alternate way to produce these compounds on an industrial scale in a more economical and environment-friendly manner. However, at present microbial production has been achieved only on a laboratory scale and improvements and scale-up of these processes remain challenging. Naringenin and pinocembrin, which are flavonoid scaffolds and precursors for most of the flavonoids, are the model molecules that are key to solving the current issues restricting industrial production of these chemicals. The emergence of systems metabolic engineering, which combines systems biology with synthetic biology and evolutionary engineering at the systems level, offers new perspectives on strain and process optimization. In this review, current challenges in large-scale fermentation processes involving flavonoid scaffolds and the strategies and tools of systems metabolic engineering used to overcome these challenges are summarized. This will offer insights into overcoming the limitations and challenges of large-scale microbial production of these important pharmaceutical compounds.
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
2012-07-01
technologies with significant capital costs, secondary waste streams, the involvement of hazardous materials, and the potential for additional worker...or environmental exposure. A more ideal technology would involve lower capital costs, would not generate secondary waste streams, would be...of bioaugmentation technology in general include low risk to human health and the environment during implementation, low secondary waste generation
ERIC Educational Resources Information Center
Grimshaw, Shirley; Wilson, Ian
2009-01-01
The aim of the project was to develop a set of online tools, systems and processes that would facilitate research at the University of Nottingham. The tools would be delivered via a portal, a one-stop place providing a Virtual Research Environment for all those involved in the research process. A predominantly bottom-up approach was used with…
Supercontinental warming of the mantle at the origin of gigantic flood basalts
NASA Astrophysics Data System (ADS)
Coltice, N.; Phillips, B. R.; Bertrand, H.; Ricard, Y.; Rey, P.
2006-12-01
Continents episodically cluster together into a supercontinent, eventually breaking up with intense magmatic activity supposedly caused by mantle plumes. The break-up of Pangea, the last supercontinent, was accompanied by the emplacement of the largest known continental flood basalt, the Central Atlantic Magmatic Province, causing massive extinctions at the Triassic/Jurassic boundary. However, there is little support for a plume origin for this catastrophic event. On the basis of 2D and 3D spherical convection modelling in an internally heated mantle, we show that continental aggregation leads to large-scale melting without requiring the involvement of plumes. When only internal heat sources in the mantle are considered, the formation of a supercontinent causes the enlargement of the wavelength of the flow and a subcontinental warming as large as 100 °C. This temperature increase may lead to large-scale melting without the involvement of plumes. Our results suggest the existence of two distinct types of continental flood basalts, caused by plume or by supercontinental warming. We review some potential candidates for our proposed model.
Global warming of the mantle at the origin of flood basalts over supercontinents
NASA Astrophysics Data System (ADS)
Coltice, N.; Phillips, B. R.; Bertrand, H.; Ricard, Y.; Rey, P.
2007-05-01
Continents episodically cluster together into a supercontinent, eventually breaking up with intense magmatic activity supposedly caused by mantle plumes (Morgan, 1983; Richards et al., 1989; Condie, 2004). The breakup of Pangea, the last supercontinent, was accompanied by the emplacement of the largest known continental flood basalt, the Central Atlantic Magmatic Province, which caused massive extinctions at the Triassic-Jurassic boundary (Marzoli et al., 1999). However, there is little support for a plume origin for this catastrophic event (McHone, 2000). On the basis of convection modeling in an internally heated mantle, this paper shows that continental aggregation promotes large-scale melting without requiring the involvement of plumes. When only internal heat sources in the mantle are considered, the formation of a supercontinent causes the enlargement of flow wavelength and a subcontinental increase in temperature as large as 100 °C. This temperature increase may lead to large-scale melting without the involvement of plumes. Our results suggest the existence of two distinct types of continental flood basalts, caused by plume or by mantle global warming.
NASA Astrophysics Data System (ADS)
Rasthofer, U.; Wall, W. A.; Gravemeier, V.
2018-04-01
A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach in conjunction with a sharp representation of the discontinuities at the interface between two bulk fluids promises high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width in this particular application.
An innovative large scale integration of silicon nanowire-based field effect transistors
NASA Astrophysics Data System (ADS)
Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.
2018-05-01
Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors while offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted of randomly oriented silicon nanowires, are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.
Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector
NASA Astrophysics Data System (ADS)
Kumar, P.; Mishra, T.; Banerjee, R.
2017-12-01
India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels) along with large scale renewable energy targets (100GW solar, 60GW wind, and 10GW biomass energy by 2022) in the INDCs submitted under the Paris Agreement. But large scale integration of renewable energy is a complex process which faces a number of challenges, such as capital intensiveness, matching intermittent loads with limited storage capacity, and reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze its implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal and gas fired units discretely with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios. A base case scenario (no RE addition), an INDC scenario (with 100GW solar, 60GW wind, 10GW biomass) and a low RE scenario (50GW solar, 30GW wind) have been created to analyze the implications of large scale integration of variable renewable energy. The results provide insights into the trade-offs and investment decisions involved in achieving mitigation targets. The study also examines operational reliability and flexibility requirements of the system for integrating renewables.
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Hubbard, Alexander
2014-08-01
Blackman and Brandenburg argued that magnetic helicity conservation in dynamo theory can in principle be captured by diagrams of mean field dynamos when the magnetic fields are represented by ribbons or tubes, but not by lines. Here, we present such a schematic ribbon diagram for the α2 dynamo that tracks magnetic helicity and provides distinct scales of large-scale magnetic helicity, small-scale magnetic helicity, and kinetic helicity involved in the process. This also motivates our construction of a new `2.5 scale' minimalist generalization of the helicity-evolving equations for the α2 dynamo that separately allows for these three distinct length-scales while keeping only two dynamical equations. We solve these equations and, as in previous studies, find that the large-scale field first grows at a rate independent of the magnetic Reynolds number RM before quenching to an RM-dependent regime. But we also show that the larger the ratio of the wavenumber where the small-scale current helicity resides to that of the forcing scale, the earlier the non-linear dynamo quenching occurs, and the weaker the large-scale field is at the turnoff from linear growth. The harmony between the theory and the schematic diagram exemplifies a general lesson that magnetic fields in magnetohydrodynamics are better visualized as two-dimensional ribbons (or pairs of lines) rather than single lines.
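The helicity bookkeeping such models must respect follows from standard MHD relations, sketched here in LaTeX (generic statements, not the paper's specific 2.5-scale equations):

% Magnetic helicity H = \int \mathbf{A} \cdot \mathbf{B}\, dV evolves in a closed volume as
\frac{dH}{dt} = -2\eta \int \mathbf{J} \cdot \mathbf{B}\, dV
% so at large magnetic Reynolds number the total is nearly conserved, and growth of
% large-scale helicity must be offset by oppositely signed small-scale helicity:
\dot{H}_{\mathrm{large}} \approx -\dot{H}_{\mathrm{small}}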
Law Enforcement Efforts to Control Domestically Grown Marijuana.
1984-05-25
marijuana grown indoors, the involvement of large criminal organizations, and the patterns of domestic marijuana distribution. In response to a GAO...information is particularly important if the amount of marijuana grown indoors and the number of large-scale cultivation and distribution organizations... marijuana indoors is becoming increasingly popular. A 1982 narcotics assessment by the Western States Information Network (WSIN) of marijuana
NASA Astrophysics Data System (ADS)
Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Bale, S. D.; Carter, T. A.; Crocker, N.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.; Belova, E.; Ellis, R.; Fox, W. R., II; Heitzenroeder, P.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Que, W.; Ren, Y.; Titus, P.; Yamada, M.; Yoo, J.
2014-12-01
A new intermediate-scale plasma experiment, called the Facility for Laboratory Reconnection Experiments or FLARE, is under construction at Princeton as a joint project by five universities and two national labs to study magnetic reconnection in regimes directly relevant to space, solar and astrophysical plasmas. The currently existing small-scale experiments have been focusing on the single X-line reconnection process in plasmas either with small effective sizes or at low Lundquist numbers, both of which are typically very large in natural plasmas. These new regimes involve multiple X-lines as guided by a reconnection "phase diagram", in which different coupling mechanisms from the global system scale to the local dissipation scale are classified into different reconnection phases [H. Ji & W. Daughton, Phys. Plasmas 18, 111207 (2011)]. The design of the FLARE device is based on the existing Magnetic Reconnection Experiment (MRX) at Princeton (http://mrx.pppl.gov) and is to provide experimental access to the new phases involving multiple X-lines at large effective sizes and high Lundquist numbers, directly relevant to space and solar plasmas. The motivating major physics questions, the construction status, and the planned collaborative research especially with space and solar research communities will be discussed.
NASA Astrophysics Data System (ADS)
Ji, Hantao; Bhattacharjee, A.; Prager, S.; Daughton, W.; Bale, Stuart D.; Carter, T.; Crocker, N.; Drake, J.; Egedal, J.; Sarff, J.; Fox, W.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Yamada, M.; Yoo, J.
2015-04-01
A new intermediate-scale plasma experiment, called the Facility for Laboratory Reconnection Experiments or FLARE (flare.pppl.gov), is under construction at Princeton as a joint project by five universities and two national labs to study magnetic reconnection in regimes directly relevant to heliophysical and astrophysical plasmas. The currently existing small-scale experiments have been focusing on the single X-line reconnection process in plasmas either with small effective sizes or at low Lundquist numbers, both of which are typically very large in natural plasmas. These new regimes involve multiple X-lines as guided by a reconnection "phase diagram", in which different coupling mechanisms from the global system scale to the local dissipation scale are classified into different reconnection phases [H. Ji & W. Daughton, Phys. Plasmas 18, 111207 (2011)]. The design of the FLARE device is based on the existing Magnetic Reconnection Experiment (MRX) (mrx.pppl.gov) and is to provide experimental access to the new phases involving multiple X-lines at large effective sizes and high Lundquist numbers, directly relevant to magnetospheric, solar wind, and solar coronal plasmas. After a brief summary of recent laboratory results on the topic of magnetic reconnection, the motivating major physics questions, the construction status, and the planned collaborative research especially with heliophysics communities will be discussed.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
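A minimal sketch of the computation described follows; the abstract's definition is truncated, so the use of the Horwitz-predicted relative standard deviation in the denominator should be treated as an assumption here.

```python
import math

# Hedged sketch: HORRAT = (found among-laboratories RSD) / (predicted RSD).
# The predictor below is the Horwitz function, PRSD_R = 2**(1 - 0.5*log10(C)),
# with C the analyte concentration as a mass fraction -- an assumption, since
# the abstract is cut off before naming it.

def horwitz_prsd(mass_fraction: float) -> float:
    """Predicted among-laboratories RSD (%) from the Horwitz function."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

def horrat(found_rsd_percent: float, mass_fraction: float) -> float:
    return found_rsd_percent / horwitz_prsd(mass_fraction)

# Example: an analyte at 1 ppm (mass fraction 1e-6) with a found RSD of 22%
# gives HORRAT of about 1.4; values near 1 indicate typical method performance.
print(f"HORRAT = {horrat(22.0, 1e-6):.2f}")
```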
Temporal transferability of soil moisture calibration equations
USDA-ARS?s Scientific Manuscript database
Several large-scale field campaigns have been conducted over the last 20 years that require accurate estimates of soil moisture conditions. These measurements are manually conducted using soil moisture probes which require calibration. The calibration process involves the collection of hundreds of...
ERIC Educational Resources Information Center
Platten, Marvin R.; Williams, Larry R.
1981-01-01
This study largely replicates the findings of a previous study reported by the authors. Further research involving the physical dimension as a possible facet of general self-concept is suggested. (Author/BW)
Mass dependence of Higgs boson production at large transverse momentum through a bottom-quark loop
NASA Astrophysics Data System (ADS)
Braaten, Eric; Zhang, Hong; Zhang, Jia-Wei
2018-05-01
In the production of the Higgs through a bottom-quark loop, the transverse momentum distribution of the Higgs at large $P_T$ is complicated by its dependence on two other important scales: the bottom quark mass $m_b$ and the Higgs mass $m_H$. A strategy for simplifying the calculation of the cross section at large $P_T$ is to calculate only the leading terms in its expansion in $m_b^2/P_T^2$. In this paper, we consider the bottom-quark-loop contribution to the parton process $q\bar{q} \to H + g$ at leading order in $\alpha_s$. We show that the leading power of $1/P_T^2$ can be expressed in the form of a factorization formula that separates the large scale $P_T$ from the scale of the masses. All the dependence on $m_b$ and $m_H$ can be factorized into a distribution amplitude for $b\bar{b}$ in the Higgs, a distribution amplitude for $b\bar{b}$ in a real gluon, and an end-point contribution. The factorization formula can be used to organize the calculation of the leading terms in the expansion in $m_b^2/P_T^2$ so that every calculation involves at most two scales.
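Schematically, the factorization described above can be written as follows; the symbols are our shorthand for the structure stated in the abstract, not equations copied from the paper.

```latex
% Schematic leading-power (LP) factorization; notation assumed for illustration.
\begin{equation}
  d\sigma\big(q\bar q \to H+g\big)\Big|_{\mathrm{LP}}
  = \sum_i d\hat\sigma_i(P_T,\mu)\,\otimes\,\phi_i(m_b, m_H;\mu)
  \;+\; \mathcal{O}\!\left(\frac{m_b^2}{P_T^2}\right)
\end{equation}
% where the phi_i comprise the b b-bar distribution amplitude in the Higgs,
% the b b-bar distribution amplitude in a real gluon, and an end-point
% contribution, so that each factor involves at most two scales.
```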
An Illustrative Guide to the Minerva Framework
NASA Astrophysics Data System (ADS)
Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration
2017-10-01
Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
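The version-checked node behaviour described above might look like the following sketch; the class and method names are invented for illustration and are not the Minerva API.

```python
# Hypothetical sketch of a data-source node: when called, it re-evaluates only
# if any dependency has changed since the last call, and returns version-stamped data.

class DataSourceNode:
    def __init__(self, name, dependencies=(), loader=None):
        self.name = name
        self.dependencies = list(dependencies)
        self.loader = loader              # callable producing this node's data
        self._cache = None
        self._version = 0
        self._seen_dep_versions = None

    def invalidate(self):
        """Mark this node as changed, e.g. after new engineering parameters."""
        self._version += 1
        self._cache = None

    def get(self):
        """Return version-specific data, recomputing if any dependency moved."""
        inputs = {d.name: d.get() for d in self.dependencies}
        dep_versions = {d.name: d._version for d in self.dependencies}
        if self._cache is None or dep_versions != self._seen_dep_versions:
            self._cache = self.loader(inputs) if self.loader else inputs
            self._seen_dep_versions = dep_versions
            self._version += 1
        return self._cache

# Toy example: a filterscope node depending on calibration and geometry nodes.
calibration = DataSourceNode("calibration", loader=lambda _: {"gain": 1.02})
geometry = DataSourceNode("geometry", loader=lambda _: {"line_of_sight": "AEQ21"})
filterscope = DataSourceNode(
    "filterscope", dependencies=(calibration, geometry),
    loader=lambda inp: {"signal_scale": inp["calibration"]["gain"]},
)
print(filterscope.get())
calibration.invalidate()        # a dependency changes ...
print(filterscope.get())        # ... so the node transparently recomputes
```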
Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration
Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.
2014-01-01
Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having 1720 cm² surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10¹⁰ TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103
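The arithmetic behind the headline numbers is straightforward; in this sketch the helper names are ours, and the example volumes are hypothetical apart from the ~560 mL-per-flask figure quoted in the abstract.

```python
# Back-of-envelope helpers for fold-concentration and recovery, as reported above.

def fold_concentration(initial_volume_ml: float, final_volume_ml: float) -> float:
    """Volumetric fold-concentration achieved by the two TFF steps."""
    return initial_volume_ml / final_volume_ml

def recovery_fraction(initial_tu: float, final_tu: float) -> float:
    """Fraction of transducing units (TU) surviving the concentration."""
    return final_tu / initial_tu

volume_in = 3 * 560.0                       # three multi-layer flasks (hypothetical run)
print(f"{fold_concentration(volume_in, 0.8):.0f}-fold")    # -> 2100-fold
print(f"{recovery_fraction(1.0e9, 0.98e9):.0%} recovery")  # -> 98% recovery
```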
Zaehringer, Julie G; Wambugu, Grace; Kiteme, Boniface; Eckert, Sandra
2018-05-01
Africa has been heavily targeted by large-scale agricultural investments (LAIs) throughout the last decade, with scarcely known impacts on local social-ecological systems. In Kenya, a large number of LAIs were made in the region northwest of Mount Kenya. These large-scale farms produce vegetables and flowers mainly for European markets. However, land use in the region remains dominated by small-scale crop and livestock farms with less than 1 ha of land each, which produce both for their own subsistence and for the local markets. We interviewed 100 small-scale farmers living near five different LAIs to elicit their perceptions of the impacts that these LAIs have on their land use and the overall environment. Furthermore, we analyzed remotely sensed land cover and land use data to assess land use change in the vicinity of the five LAIs. While land use change did not follow a clear trend, a number of small-scale farmers did adapt their crop management to environmental changes such as reduced river water flow and increased pests, which they attributed to the presence of LAIs. Despite the high number of open conflicts between small-scale land users and LAIs around the issue of river water abstraction, the main environmental impact, felt by almost half of the interviewed land users, was air pollution with agrochemicals sprayed on the LAIs' land. Even though only a low percentage of local land users and their household members were directly involved with LAIs, a large majority of respondents favored the presence of LAIs nearby, as they are believed to contribute to the region's overall economic development. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of bioenergy with carbon capture and storage (BECCS) outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: 1) capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation, 2) thermochemical co-conversion of biomass and fossil fuels, particularly coal, and 3) dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, the remaining barriers are primarily technical, involving large-scale biomass logistics, gasification and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway where energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.
Baby, André Rolim; Santoro, Diego Monegatto; Velasco, Maria Valéria Robles; Dos Reis Serra, Cristina Helena
2008-09-01
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage comprises the integration of the previous phases of development. This phase is extremely important since many process limitations which do not appear on the small scale become significant on the transposition to a large one. Since the scientific literature presents only a few reports about the characterization of emulsified systems during scale-up, this research work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], μm) and rheological profile. Transposition occurred from a 500 g batch to a 50,000 g one. Semi-industrial manufacturing involved distinct conditions of agitation and homogenization intensity. Comparing the non-ionic and anionic systems, it was observed that anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. Besides that, for the concentrations tested, augmentation of the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with the elevation of the emulsifier concentration, suggesting inadequate stability. The scale-up provoked more significant alterations in the rheological profile and droplet size of the anionic systems than of the non-ionic ones.
Guidetti, P; Dulcić, J
2007-03-01
Previous studies conducted on a local scale emphasised the potential of trophic cascades in Mediterranean rocky reefs (involving predatory fish, sea urchins and macroalgae) in affecting the transition between benthic communities dominated by erect macroalgae and barrens (i.e., bare rock with partial cover of encrusting algae). Distribution patterns of fish predators of sea urchins (Diplodus sargus sargus, Diplodus vulgaris, Coris julis and Thalassoma pavo), sea urchins (Paracentrotus lividus and Arbacia lixula) and barrens, and fish predation rates upon sea urchins, were assessed in shallow (3-6 m depth) sublittoral rocky reefs in the northern, central and southern sectors of the eastern Adriatic Sea, i.e., on a large spatial scale of hundreds of kilometres. No dramatic differences were observed in predatory fish density across latitude, except for a lower density of small D. sargus sargus in the northern Adriatic and an increasing density of T. pavo from north to south. P. lividus did not show any significant difference across latitude, whereas A. lixula was more abundant in the southern than in the central Adriatic. Barrens were more extended in the southern than in the central and northern sectors, and their extent was related to sea urchin density. Fish predation upon adult sea urchins did not change on a large scale, whereas it was slightly higher in the southern sector for juveniles when predation rates on both urchin species were pooled. Results show that: (1) assemblages of predatory fish and sea urchins, and barren extent, change across latitude in the eastern Adriatic Sea; (2) the weak relations between predatory fish density and predation rates on urchins reveal that factors other than top-down control can be important over large scales (with the caveat that the study was conducted in fished areas); and (3) patterns of interaction among strongly interacting taxa, and the number of species involved, could change on large spatial scales.
Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa'avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa
2017-06-01
Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia; hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low and middle income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult to access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training and other basic services. We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation as the considerations and steps involved have wider implications for health systems in other countries. The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services.
Detection of submicron scale cracks and other surface anomalies using positron emission tomography
Cowan, Thomas E.; Howell, Richard H.; Colmenares, Carlos A.
2004-02-17
Detection of submicron scale cracks and other mechanical and chemical surface anomalies using PET. This surface technique has sufficient sensitivity to detect single voids or pits of sub-millimeter size, and single cracks or fissures of millimeter-scale length, micrometer-scale depth, and nanometer-scale width. This technique can also be applied to detect surface regions of differing chemical reactivity. It may be utilized in a scanning or survey mode to simultaneously detect such mechanical or chemical features over large interior or exterior surface areas of parts as large as about 50 cm in diameter. The technique involves exposing a surface to a short-lived radioactive gas for a time period, removing the excess gas to leave a partial monolayer, determining the location and shape of the cracks, voids, porous regions, etc., and calculating the width, depth, and length thereof.
Integrating Cloud-Computing-Specific Model into Aircraft Design
NASA Astrophysics Data System (ADS)
Zhimin, Tian; Qi, Lin; Guangwen, Yang
Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services being introduced will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has acquired good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.
Initial conditions and modeling for simulations of shock driven turbulent material mixing
Grinstein, Fernando F.
2016-11-17
Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.
Santangelo, Valerio
2018-01-01
Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of the lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among brain networks implicated during divided attention across spatial locations and sensory modalities, pointing out the importance of investigating effective connectivity of large-scale brain networks supporting complex behavior. PMID:29535614
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the earlier BJ493ZLQ3 model, whose exhaust emissions level is upgraded to the national GB5 standard through an improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its properties are analyzed in this paper. According to the structures, technical parameters and indices of the lubrication system, a lubrication system model of the BJ493ZLQ4A diesel engine was constructed using the Flowmaster flow simulation software. The properties of the diesel engine lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate values for the large-scale oil filter scheme satisfy the design requirements, while the small-scale scheme yields a main oil channel pressure that is too low. Therefore, application of small-scale oil filters is hazardous, and the large-scale scheme is recommended.
Tanaka, F; Wada, H; Fukui, Y; Fukushima, M
2011-08-01
Previous small-sized studies showed lower thymidylate synthase (TS) expression in adenocarcinoma of the lung, which may explain higher antitumor activity of TS-inhibiting agents such as pemetrexed. To quantitatively measure TS gene expression in a large-scale Japanese population (n = 2621) with primary lung cancer, laser-captured microdissected sections were cut from primary tumors, surrounding normal lung tissues and involved nodes. TS gene expression level in primary tumor was significantly higher than that in normal lung tissue (mean TS/β-actin, 3.4 and 1.0, respectively; P < 0.01), and TS gene expression level was further higher in involved node (mean TS/β-actin, 7.7; P < 0.01). Analyses of TS gene expression levels in primary tumor according to histologic cell type revealed that small-cell carcinoma showed highest TS expression (mean TS/β-actin, 13.8) and that squamous cell carcinoma showed higher TS expression as compared with adenocarcinoma (mean TS/β-actin, 4.3 and 2.3, respectively; P < 0.01); TS gene expression was significantly increased along with a decrease in the grade of tumor cell differentiation. There was no significant difference in TS gene expression according to any other patient characteristics including tumor progression. Lower TS expression in adenocarcinoma of the lung was confirmed in a large-scale study.
Talking About The Smokes: a large-scale, community-based participatory research project.
Couzos, Sophia; Nicholson, Anna K; Hunt, Jennifer M; Davey, Maureen E; May, Josephine K; Bennet, Pele T; Westphal, Darren W; Thomas, David P
2015-06-01
To describe the Talking About The Smokes (TATS) project according to the World Health Organization guiding principles for conducting community-based participatory research (PR) involving indigenous peoples, to assist others planning large-scale PR projects. The TATS project was initiated in Australia in 2010 as part of the International Tobacco Control Policy Evaluation Project, and surveyed a representative sample of 2522 Aboriginal and Torres Strait Islander adults to assess the impact of tobacco control policies. The PR process of the TATS project, which aimed to build partnerships to create equitable conditions for knowledge production, was mapped and summarised onto a framework adapted from the WHO principles. The main measures were the processes describing consultation and approval, partnerships and research agreements, communication, funding, ethics and consent, and data and benefits of the research. The TATS project involved baseline and follow-up surveys conducted in 34 Aboriginal community-controlled health services and one Torres Strait community. Consistent with the WHO PR principles, the TATS project built on community priorities and strengths through strategic partnerships from project inception, and demonstrated the value of research agreements and trusting relationships to foster shared decision making, capacity building and a commitment to Indigenous data ownership. Community-based PR methodology, by definition, needs adaptation to local settings and priorities. The TATS project demonstrates that large-scale research can be participatory, with strong Indigenous community engagement and benefits.
RNAi at work: Targeting invertebrate pests and beneficial organisms' diseases
USDA-ARS?s Scientific Manuscript database
Invertebrates present two types of large scale RNAi application opportunities: pest control and beneficial insect health. The former involves the introduction of sustainable applications to keep pest populations low, and the latter represents the challenge of keeping beneficial organisms healthy. RN...
Trinification, the hierarchy problem, and inverse seesaw neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cauet, Christophe; Paes, Heinrich; Wiesenfeldt, Soeren
2011-05-01
In minimal trinification models light neutrino masses can be generated via a radiative seesaw mechanism, where the masses of the right-handed neutrinos originate from loops involving Higgs and fermion fields at the unification scale. This mechanism is absent in models aiming at solving or ameliorating the hierarchy problem, such as low-energy supersymmetry, since the large seesaw scale disappears. In this case, neutrino masses need to be generated via a TeV-scale mechanism. In this paper, we investigate an inverse seesaw mechanism and discuss some phenomenological consequences.
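For orientation, the generic inverse seesaw structure referred to above has the following standard form (our notation, not reproduced from the paper):

```latex
% Standard inverse seesaw mass relation, shown for orientation.
\begin{equation}
  m_\nu \simeq m_D \,\big(M_R^{T}\big)^{-1}\, \mu \, M_R^{-1}\, m_D^{T}
\end{equation}
% with Dirac mass m_D, heavy scale M_R near a TeV, and a small
% lepton-number-violating parameter mu: eV-scale neutrino masses follow from
% mu ~ keV, with no need for a large unification-scale seesaw.
```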
Spectral enstrophy budget in a shear-less flow with turbulent/non-turbulent interface
NASA Astrophysics Data System (ADS)
Cimarelli, Andrea; Cocconi, Giacomo; Frohnapfel, Bettina; De Angelis, Elisabetta
2015-12-01
A numerical analysis of the interaction between decaying shear-free turbulence and quiescent fluid is performed by means of global statistical budgets of enstrophy, both at the single-point and two-point levels. The single-point enstrophy budget allows us to recognize three physically relevant layers: a bulk turbulent region, an inhomogeneous turbulent layer, and an interfacial layer. Within these layers, enstrophy is produced, transferred, and finally destroyed while leading to a propagation of the turbulent front. These processes do not only depend on the position in the flow field but are also strongly scale dependent. In order to tackle this multi-dimensional behaviour of enstrophy in the space of scales and in physical space, we analyse the spectral enstrophy budget equation. The picture consists of an inviscid spatial cascade of enstrophy from large to small scales parallel to the interface moving towards the interface. At the interface, this phenomenon breaks, leaving place to an anisotropic cascade where large scale structures exhibit only a cascade process normal to the interface, thus reducing their thickness while retaining their lengths parallel to the interface. The observed behaviour could be relevant for both the theoretical and the modelling approaches to flows with interacting turbulent/non-turbulent regions. The scale properties of the turbulent propagation mechanisms highlight that the inviscid turbulent transport is a large-scale phenomenon. On the contrary, the viscous diffusion, commonly associated with small-scale mechanisms, exhibits a much richer physics involving small lengths normal to the interface but, at the same time, large scales parallel to the interface.
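A scale-by-scale view of enstrophy like the one used above can be illustrated with a generic diagnostic (this is our sketch, not the authors' budget code): given a two-dimensional periodic slice of vorticity, e.g. a plane parallel to the interface, shell-summing the squared Fourier coefficients gives the distribution of enstrophy across scales at that wall-normal location.

```python
import numpy as np

def enstrophy_spectrum(omega: np.ndarray):
    """Shell-summed enstrophy spectrum of a square, periodic 2D vorticity slice."""
    n = omega.shape[0]
    w_hat = np.fft.fft2(omega) / omega.size       # normalized Fourier coefficients
    k1 = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky = np.meshgrid(k1, k1, indexing="ij")
    k_mag = np.hypot(kx, ky)
    ks = np.arange(1, n // 2)
    spec = np.empty(len(ks))
    for i, k in enumerate(ks):                    # sum 0.5*|w_hat|^2 over shells
        shell = (k_mag >= k - 0.5) & (k_mag < k + 0.5)
        spec[i] = 0.5 * np.sum(np.abs(w_hat[shell]) ** 2)
    return ks, spec

rng = np.random.default_rng(0)
k, Z = enstrophy_spectrum(rng.standard_normal((128, 128)))
print(k[:3], Z[:3])
```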
Policy and Administrative Issues for Large-Scale Clinical Interventions Following Disasters
Cobham, Vanessa E.; McDermott, Brett
2014-01-01
Objective: Large, programmatic mental health intervention programs for children and adolescents following disasters have become increasingly common; however, little has been written about the key goals and challenges involved. Methods: Using available data and the authors' experiences, this article reviews the factors involved in planning and implementing large-scale treatment programs following disasters. Results: These issues include funding, administration, choice of clinical targets, workforce selection, choice of treatment modalities, training, outcome monitoring, and consumer uptake. Ten factors are suggested for choosing among treatment modalities: 1) reach (providing access to the greatest number), 2) retention of patients, 3) privacy, 4) parental involvement, 5) familiarity of the modality to clinicians, 6) intensity (intervention type matches symptom acuity and impairment of patient), 7) burden to the clinician (in terms of time, travel, and inconvenience), 8) cost, 9) technology needs, and 10) effect size. Traditionally, after every new disaster, local leaders who have never done so before have had to be recruited to design, administer, and implement programs. Conclusion: As expertise in all of these areas represents a gap for most local professionals in disaster-affected areas, we propose that a central, nongovernmental agency with national or international scope be created that can consult flexibly with local leaders following disasters on both overarching and specific issues. We propose recommendations and point out areas in greatest need of innovation. PMID:24521227
Large-scale bioenergy production: how to resolve sustainability trade-offs?
NASA Astrophysics Data System (ADS)
Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag
2018-02-01
Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the SDG agenda. Based on this, we argue that the development of policies for regulating externalities of large-scale bioenergy production should rely on broad sustainability assessments to discover potential trade-offs with the SDG agenda before implementation.
Compliant Robotic Structures. Part 2
1986-07-01
Nonaxially Homogeneous Stresses and Strains; Parametric Studies; References; III. Large Deflections of Continuous Elastic Structures...Appendix C: Computer Program for the Element String. SUMMARY: This is the second year report, part of a three-year study on compliant...ratios as high as 10/1 for laboratory-scale models and up to 3/1 for full-scale prototype arms. The first two years of this study have involved the
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to...the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e...the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
Introducing Large-Scale Innovation in Schools
NASA Astrophysics Data System (ADS)
Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.
2016-08-01
Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.
Kushniruk, A; Kaipio, J; Nieminen, M; Hyppönen, H; Lääveri, T; Nohr, C; Kanstrup, A M; Berg Christiansen, M; Kuo, M-H; Borycki, E
2014-08-15
The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
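For readers unfamiliar with the Summation-by-Parts structure this work builds on, the sketch below constructs the classical second-order SBP first-derivative operator and verifies its defining property; the optimised operators of the paper are not reproduced here.

```python
import numpy as np

# D = inv(H) @ Q with H = H^T > 0 (a quadrature) and Q + Q^T = B = diag(-1,0,...,0,1),
# so that D mimics integration by parts discretely.

def sbp_2nd_order(n: int, h: float):
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    return np.linalg.inv(H) @ Q, H, Q

n, h = 11, 0.1
D, H, Q = sbp_2nd_order(n, h)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)            # the SBP property
x = h * np.arange(n)
assert np.allclose(D @ x, np.ones(n))     # exact first derivative of a linear function
print("SBP property and accuracy checks passed")
```

The dispersion optimisation the abstract describes then amounts, roughly, to choosing the free coefficients of higher-order H and Q so that the operator's modified wavenumber stays close to the exact one over as wide a wavenumber range as possible.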
Genomics of adaptation to host-plants in herbivorous insects.
Simon, Jean-Christophe; d'Alençon, Emmanuelle; Guy, Endrick; Jacquin-Joly, Emmanuelle; Jaquiéry, Julie; Nouhaud, Pierre; Peccoud, Jean; Sugio, Akiko; Streiff, Réjane
2015-11-01
Herbivorous insects represent the most species-rich lineages of metazoans. The high rate of diversification in herbivorous insects is thought to result from their specialization to distinct host-plants, which creates conditions favorable for the build-up of reproductive isolation and speciation. These conditions rely on constraints against the optimal use of a wide range of plant species, as each must constitute a viable food resource, oviposition site and mating site for an insect. Utilization of plants involves many essential traits of herbivorous insects, as they locate and select their hosts, overcome their defenses and acquire nutrients while avoiding intoxication. Although advances in understanding insect-plant molecular interactions have been limited by the complexity of insect traits involved in host use and the lack of genomic resources and functional tools, recent studies at the molecular level, combined with large-scale genomics studies at population and species levels, are revealing the genetic underpinning of plant specialization and adaptive divergence in non-model insect herbivores. Here, we review the recent advances in the genomics of plant adaptation in hemipterans and lepidopterans, two major insect orders, each of which includes a large number of crop pests. We focus on how genomics and post-genomics have improved our understanding of the mechanisms involved in insect-plant interactions by reviewing recent molecular discoveries in sensing, feeding, digesting and detoxifying strategies. We also present the outcomes of large-scale genomics approaches aimed at identifying loci potentially involved in plant adaptation in these insects. © The Author 2015. Published by Oxford University Press. All rights reserved.
Maguire, Elizabeth M; Bokhour, Barbara G; Wagner, Todd H; Asch, Steven M; Gifford, Allen L; Gallagher, Thomas H; Durfee, Janet M; Martinello, Richard A; Elwy, A Rani
2016-11-11
Many healthcare organizations have developed disclosure policies for large-scale adverse events, including the Veterans Health Administration (VA). This study evaluated VA's national large-scale disclosure policy and identified gaps and successes in its implementation. Semi-structured qualitative interviews were conducted with leaders, hospital employees, and patients at nine sites to elicit their perceptions of recent large-scale adverse event notifications and the national disclosure policy. Data were coded using the constructs of the Consolidated Framework for Implementation Research (CFIR). We conducted 97 interviews. Insights included how to handle the communication of large-scale disclosures through multiple levels of a large healthcare organization and manage ongoing communications about the event with employees. Of the 5 CFIR constructs and 26 sub-constructs assessed, seven were prominent in interviews. Leaders and employees specifically mentioned key problem areas involving 1) networks and communications during disclosure, 2) organizational culture, 3) engagement of external change agents during disclosure, and 4) a need for reflecting on and evaluating the policy implementation and the disclosure itself. Patients shared 5) preferences for personal outreach by phone in place of the current use of certified letters. All interviewees discussed 6) issues with execution and 7) costs of the disclosure. The CFIR analysis reveals key problem areas that need to be addressed during disclosure, including timely communication patterns throughout the organization, establishing a supportive culture prior to implementation, using patient-approved, effective communication strategies during disclosures, providing follow-up support for employees and patients, and sharing lessons learned.
Precision agriculture in large-scale mechanized farming
USDA-ARS?s Scientific Manuscript database
Precision agriculture involves a great deal of technologies and requires additional investments of money and time, but it can be practiced at different levels depending on the specific field and crop conditions and the resources and technology services available to the farmer. If practiced properly,...
Environmentalism, Globalization and National Economies, 1980-2000
ERIC Educational Resources Information Center
Schofer, Evan; Granados, Francisco J.
2006-01-01
It is commonly assumed that environmentalism harms national economies because environmental regulations constrain economic activity and create incentives for firms to move production and investment to other countries. We point out that global environmentalism involves large-scale institutional changes that: (1) encourage new kinds of economic…
ERIC Educational Resources Information Center
Hain-Hill, Alicia; Rogers, Carl R.
1988-01-01
Presents brainstorming dialogue with Carl Rogers which was held in January of 1987, shortly before Rogers's death. Explores basic challenges involved in a large-scale, cross-cultural application of person-centered group work in South Africa. (Author)
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
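The Double-then-Quad idea behind DQQ can be illustrated on a toy problem. The sketch below is ours, and it refines a linear system rather than running an LP/NLP solver: a cheap double-precision solve provides the warm start, and a few extended-precision refinement sweeps recover accuracy on badly scaled data.

```python
import numpy as np
from mpmath import mp, mpf

mp.dps = 34  # roughly quadruple precision, standing in for the "Quad" phase

def dqq_refine(A, b, sweeps=3):
    x = np.linalg.solve(A, b)                        # "Double" phase: warm start
    A_mp = [[mpf(float(v)) for v in row] for row in A]
    b_mp = [mpf(float(v)) for v in b]
    x_mp = [mpf(float(v)) for v in x]
    for _ in range(sweeps):                          # "Quad" phase: refinement
        r = [b_mp[i] - sum(A_mp[i][j] * x_mp[j] for j in range(len(x_mp)))
             for i in range(len(b_mp))]              # residual in high precision
        dx = np.linalg.solve(A, np.array([float(v) for v in r]))
        x_mp = [x_mp[i] + mpf(float(dx[i])) for i in range(len(x_mp))]
    return x_mp

# Entries spread over many orders of magnitude, as in ME models.
A = np.array([[1e8, 2.0], [1.0, 1e-8]])
b = np.array([1.0, 1.0])
print([float(v) for v in dqq_refine(A, b)])
```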
Christophersen, Ingrid E.; Rienstra, Michiel; Roselli, Carolina; Yin, Xiaoyan; Geelhoed, Bastiaan; Barnard, John; Lin, Honghuang; Arking, Dan E.; Smith, Albert V.; Albert, Christine M.; Chaffin, Mark; Tucker, Nathan R.; Li, Molong; Klarin, Derek; Bihlmeyer, Nathan A; Low, Siew-Kee; Weeke, Peter E.; Müller-Nurasyid, Martina; Smith, J. Gustav; Brody, Jennifer A.; Niemeijer, Maartje N.; Dörr, Marcus; Trompet, Stella; Huffman, Jennifer; Gustafsson, Stefan; Schurman, Claudia; Kleber, Marcus E.; Lyytikäinen, Leo-Pekka; Seppälä, Ilkka; Malik, Rainer; Horimoto, Andrea R. V. R.; Perez, Marco; Sinisalo, Juha; Aeschbacher, Stefanie; Thériault, Sébastien; Yao, Jie; Radmanesh, Farid; Weiss, Stefan; Teumer, Alexander; Choi, Seung Hoan; Weng, Lu-Chen; Clauss, Sebastian; Deo, Rajat; Rader, Daniel J.; Shah, Svati; Sun, Albert; Hopewell, Jemma C.; Debette, Stephanie; Chauhan, Ganesh; Yang, Qiong; Worrall, Bradford B.; Paré, Guillaume; Kamatani, Yoichiro; Hagemeijer, Yanick P.; Verweij, Niek; Siland, Joylene E.; Kubo, Michiaki; Smith, Jonathan D.; Van Wagoner, David R.; Bis, Joshua C.; Perz, Siegfried; Psaty, Bruce M.; Ridker, Paul M.; Magnani, Jared W.; Harris, Tamara B.; Launer, Lenore J.; Shoemaker, M. Benjamin; Padmanabhan, Sandosh; Haessler, Jeffrey; Bartz, Traci M.; Waldenberger, Melanie; Lichtner, Peter; Arendt, Marina; Krieger, Jose E.; Kähönen, Mika; Risch, Lorenz; Mansur, Alfredo J.; Peters, Annette; Smith, Blair H.; Lind, Lars; Scott, Stuart A.; Lu, Yingchang; Bottinger, Erwin B.; Hernesniemi, Jussi; Lindgren, Cecilia M.; Wong, Jorge; Huang, Jie; Eskola, Markku; Morris, Andrew P.; Ford, Ian; Reiner, Alex P.; Delgado, Graciela; Chen, Lin Y.; Chen, Yii-Der Ida; Sandhu, Roopinder K.; Li, Man; Boerwinkle, Eric; Eisele, Lewin; Lannfelt, Lars; Rost, Natalia; Anderson, Christopher D.; Taylor, Kent D.; Campbell, Archie; Magnusson, Patrik K.; Porteous, David; Hocking, Lynne J.; Vlachopoulou, Efthymia; Pedersen, Nancy L.; Nikus, Kjell; Orho-Melander, Marju; Hamsten, Anders; Heeringa, Jan; Denny, Joshua C.; Kriebel, Jennifer; Darbar, Dawood; Newton-Cheh, Christopher; Shaffer, Christian; Macfarlane, Peter W.; Heilmann, Stefanie; Almgren, Peter; Huang, Paul L.; Sotoodehnia, Nona; Soliman, Elsayed Z.; Uitterlinden, Andre G.; Hofman, Albert; Franco, Oscar H.; Völker, Uwe; Jöckel, Karl-Heinz; Sinner, Moritz F.; Lin, Henry J.; Guo, Xiuqing; Dichgans, Martin; Ingelsson, Erik; Kooperberg, Charles; Melander, Olle; Loos, Ruth J. F.; Laurikka, Jari; Conen, David; Rosand, Jonathan; van der Harst, Pim; Lokki, Marja-Liisa; Kathiresan, Sekar; Pereira, Alexandre; Jukema, J. Wouter; Hayward, Caroline; Rotter, Jerome I.; März, Winfried; Lehtimäki, Terho; Stricker, Bruno H.; Chung, Mina K.; Felix, Stephan B.; Gudnason, Vilmundur; Alonso, Alvaro; Roden, Dan M.; Kääb, Stefan; Chasman, Daniel I.; Heckbert, Susan R.; Benjamin, Emelia J.; Tanaka, Toshihiro; Lunetta, Kathryn L.; Lubitz, Steven A.; Ellinor, Patrick T.
2017-01-01
Atrial fibrillation affects more than 33 million people worldwide and increases the risk of stroke, heart failure, and death.1,2 Fourteen genetic loci have been associated with atrial fibrillation in European and Asian ancestry groups.3–7 To further define the genetic basis of atrial fibrillation, we performed large-scale, multi-racial meta-analyses of common and rare variant association studies. The genome-wide association studies (GWAS) included 18,398 individuals with atrial fibrillation and 91,536 referents; the exome-wide association studies (ExWAS) and rare variant association studies (RVAS) involved 22,806 cases and 132,612 referents. We identified 12 novel genetic loci that exceeded genome-wide significance, implicating genes involved in cardiac electrical and structural remodeling. Our results nearly double the number of known genetic loci for atrial fibrillation, provide insights into the molecular basis of atrial fibrillation, and may facilitate new potential targets for drug discovery.8 PMID:28416818
Transitioning to a new nursing home: one organization's experience.
O'Brien, Kelli; Welsh, Darlene; Lundrigan, Elaine; Doyle, Anne
2013-01-01
Restructuring of long-term care in Western Health, a regional health authority within Newfoundland and Labrador, created a unique opportunity to study the widespread impacts of the transition. Staff and long-term-care residents were relocated from a variety of settings to a newly constructed facility. A plan was developed to assess the impact of relocation on staff, residents, and families. Indicators included fall rates, medication errors, complaints, media database, sick leave, overtime, injuries, and staff and family satisfaction. This article reports on the findings and lessons learned from an organizational perspective on such a large-scale transition. Some of the key findings included the necessity of pre-move and post-move strategies to minimize negative impacts, ongoing communication and involvement in decision making during transitions, tracking of key indicators, recognition from management regarding increased workload and stress experienced by staff, engagement of residents and families throughout the transition, and assessing the timing of large-scale relocations. These findings would be of interest to health care managers and leadership teams in organizations planning large-scale changes.
The co-evolution of social institutions, demography, and large-scale human cooperation.
Powers, Simon T; Lehmann, Laurent
2013-11-01
Human cooperation is typically coordinated by institutions, which determine the outcome structure of the social interactions individuals engage in. Explaining the Neolithic transition from small- to large-scale societies involves understanding how these institutions co-evolve with demography. We study this using a demographically explicit model of institution formation in a patch-structured population. Each patch supports both social and asocial niches. Social individuals create an institution, at a cost to themselves, by negotiating how much of the costly public good provided by cooperators is invested into sanctioning defectors. The remainder of their public good is invested in technology that increases carrying capacity, such as irrigation systems. We show that social individuals can invade a population of asocials, and form institutions that support high levels of cooperation. We then demonstrate conditions where the co-evolution of cooperation, institutions, and demographic carrying capacity creates a transition from small- to large-scale social groups. © 2013 John Wiley & Sons Ltd/CNRS.
A large-scale, long-term study of scale drift: The micro view and the macro view
NASA Astrophysics Data System (ADS)
He, W.; Li, S.; Kingsbury, G. G.
2016-11-01
The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates that have occurred since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
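A micro-level drift check of the kind described might look like the following sketch; the tolerance and the mean-shift adjustment are illustrative assumptions, not the study's criteria.

```python
import numpy as np

def flag_drifted_items(b_original, b_new, tol_logits=0.3):
    """Flag items whose Rasch difficulty moved more than tol_logits,
    after removing the overall (macro-level) scale shift."""
    b_original, b_new = np.asarray(b_original), np.asarray(b_new)
    shift = np.mean(b_new - b_original)            # macro view: overall drift
    displacement = (b_new - shift) - b_original    # micro view: per-item drift
    return shift, np.where(np.abs(displacement) > tol_logits)[0]

shift, drifted = flag_drifted_items([-1.2, 0.0, 0.8, 1.5], [-1.1, 0.6, 0.9, 1.6])
print(f"scale shift = {shift:+.2f} logits; drifted items: {drifted}")
```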
Lim, Chun Yi; Law, Mary; Khetani, Mary; Rosenbaum, Peter; Pollock, Nancy
2018-08-01
To estimate the psychometric properties of a culturally adapted version of the Young Children's Participation and Environment Measure (YC-PEM) for use among Singaporean families. This is a prospective cohort study. Caregivers of 151 Singaporean children with (n = 83) and without (n = 68) developmental disabilities, between 0 and 7 years, completed the YC-PEM (Singapore) questionnaire with 3 participation scales (frequency, involvement, and change desired) and 1 environment scale for three settings: home, childcare/preschool, and community. Setting-specific estimates of internal consistency, test-retest reliability, and construct validity were obtained. Internal consistency estimates varied from .59 to .92 for the participation scales and .73 to .79 for the environment scale. Test-retest reliability estimates from the YC-PEM conducted on two occasions, 2-3 weeks apart, varied from .39 to .89 for the participation scales and from .65 to .80 for the environment scale. Moderate to large differences were found in participation and perceived environmental support between children with and without a disability. YC-PEM (Singapore) scales have adequate psychometric properties except for low internal consistency for the childcare/preschool participation frequency scale and low test-retest reliability for the home participation frequency scale. The YC-PEM (Singapore) may be used for population-level studies involving young children with and without developmental disabilities.
Physics implications of the diphoton excess from the perspective of renormalization group flow
Gu, Jiayin; Liu, Zhen
2016-04-06
A very plausible explanation for the recently observed diphoton excess at the 13 TeV LHC is a (pseudo)scalar with mass around 750 GeV, which couples to a gluon pair and to a photon pair through loops involving vector-like quarks (VLQs). To accommodate the observed rate, the required Yukawa couplings tend to be large. A large Yukawa coupling would rapidly run up with the scale and quickly reach the perturbativity bound, indicating that new physics, possibly with a strong dynamics origin, is nearby. The case becomes stronger especially if the ATLAS observation of a large width persists. In this paper we study the implications for the scale of new physics from the 750 GeV diphoton excess using the method of renormalization group running, with careful treatment of the different contributions and of the perturbativity criterion. Our results suggest that the scale of new physics is generically not much larger than the TeV scale, in particular if the width of the hinted (pseudo)scalar is large. Introducing multiple copies of VLQs, lowering the VLQ masses and enlarging VLQ electric charges help reduce the required Yukawa couplings and can push the cutoff scale to higher values. Nevertheless, if the width of the 750 GeV resonance turns out to be larger than about 1 GeV, it is very hard to increase the cutoff scale beyond a few TeV. This is a strong hint that new particles in addition to the 750 GeV resonance and the vector-like quarks should be around the TeV scale.
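The running-coupling argument can be made concrete with a schematic one-loop renormalization group equation for a Yukawa coupling y; the positive coefficient c and the neglect of gauge contributions are simplifying assumptions for illustration, not values from the paper:

$$16\pi^2 \frac{dy}{d\ln\mu} = c\,y^3 \quad\Longrightarrow\quad y^2(\mu) = \frac{y_0^2}{1 - \dfrac{c\,y_0^2}{8\pi^2}\ln(\mu/\mu_0)},$$

so the coupling blows up at $\mu_* = \mu_0\, e^{8\pi^2/(c\,y_0^2)}$: the larger the input Yukawa $y_0$ needed to fit the observed rate, the exponentially closer the cutoff scale $\mu_*$ sits to the starting scale.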
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
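The scaling argument is generic to sparse linear algebra: with localized basis functions, matrix multiplication touches only stored nonzeros, so cost grows with the number of nonzeros rather than quadratically with basis size. A toy illustration in standard SciPy (not the ONETEP hierarchical scheme itself; sizes and sparsity are illustrative):

```python
import numpy as np
from scipy import sparse

n = 20_000        # number of basis functions (illustrative)
density = 5e-4    # fraction of nonzero overlaps (localized orbitals)

# Random sparse stand-ins for matrices such as the Hamiltonian and
# density kernel, stored in compressed sparse row (CSR) format.
H = sparse.random(n, n, density=density, format="csr", random_state=0)
K = sparse.random(n, n, density=density, format="csr", random_state=1)

# The product touches only stored nonzeros: cost ~ O(nnz), not O(n^2).
HK = H @ K
print(f"nnz(H) = {H.nnz}, nnz(H@K) = {HK.nnz}, dense would be {n*n:,} entries")
```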
Catastrophic flooding origin of shelf valley systems in the English Channel.
Gupta, Sanjeev; Collier, Jenny S; Palmer-Felgate, Andy; Potter, Graeme
2007-07-19
Megaflood events involving sudden discharges of exceptionally large volumes of water are rare, but can significantly affect landscape evolution, continental-scale drainage patterns and climate change. It has been proposed that a significant flood event eroded a network of large ancient valleys on the floor of the English Channel, the narrow seaway between England and France. This hypothesis has remained untested through lack of direct evidence, and alternative non-catastrophist ideas have been entertained for valley formation. Here we analyse a new regional bathymetric map of part of the English Channel derived from high-resolution sonar data, which shows the morphology of the valley in unprecedented detail. We observe a large bedrock-floored valley that contains a distinct assemblage of landforms, including streamlined islands and longitudinal erosional grooves, which are indicative of large-scale subaerial erosion by high-magnitude water discharges. Our observations support the megaflood model, in which breaching of a rock dam at the Dover Strait instigated catastrophic drainage of a large pro-glacial lake in the southern North Sea basin. We suggest that megaflooding provides an explanation for the permanent isolation of Britain from mainland Europe during interglacial high-sea-level stands, and consequently for patterns of early human colonisation of Britain together with the large-scale reorganization of palaeodrainage in northwest Europe.
The National Near-Road Mobile Source Air Toxics Study: Las Vegas
EPA, in collaboration with FHWA, has been involved in a large-scale monitoring research study in an effort to characterize highway vehicle emissions in a near-road environment. The pollutants of interest include particulate matter with aerodynamic diameter less than 2.5 microns ...
Student Engagement in Inclusive Classrooms
ERIC Educational Resources Information Center
Rangvid, Beatrice Schindler
2018-01-01
Using large-scale survey data, I document substantial differences in behavioural engagement (defined as involvement in academic and social activities, cooperative participation in learning, and motivation and effort) and emotional engagement levels (defined as a sense of belonging and well-being at school) between students with and without special…
Leveraging Web-Based Environments for Mass Atrocity Prevention
ERIC Educational Resources Information Center
Harding, Tucker B.; Whitlock, Mark A.
2013-01-01
A growing literature exploring large-scale, identity-based political violence, including mass killing and genocide, debates the plausibility of, and prospects for, early warning and prevention. An extension of the debate involves the prospects for creating educational experiences that result in more sophisticated analytical products that enhance…
What are the low-Q and large-x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, $Q \sim$ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable $x_{bj}$ is large, $x_{bj} \gtrsim 0.5$, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-$Q$ and large-$x_{bj}$ physics.
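One standard way such target-mass effects enter is through the Nachtmann scaling variable, which replaces $x_{bj}$ once the hadron mass $M$ is not negligible against $Q$ (textbook definition, quoted here for orientation):

$$\xi = \frac{2x_{bj}}{1 + \sqrt{1 + 4x_{bj}^2 M^2/Q^2}},$$

which reduces to $x_{bj}$ as $M^2/Q^2 \to 0$ but deviates strongly in exactly the region at issue: for $x_{bj} = 0.5$, $Q = 1$ GeV and $M \approx 0.94$ GeV, one finds $\xi \approx 0.42$.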
Dhakar, Lokesh; Gudla, Sudeep; Shan, Xuechuan; Wang, Zhiping; Tay, Francis Eng Hock; Heng, Chun-Huat; Lee, Chengkuo
2016-01-01
Triboelectric nanogenerators (TENGs) have emerged as a potential solution for mechanical energy harvesting over conventional mechanisms such as piezoelectric and electromagnetic, due to easy fabrication, high efficiency and wider choice of materials. Traditional fabrication techniques used to realize TENGs involve plasma etching, soft lithography and nanoparticle deposition for higher performance. But lack of truly scalable fabrication processes still remains a critical challenge and bottleneck in the path of bringing TENGs to commercial production. In this paper, we demonstrate fabrication of large scale triboelectric nanogenerator (LS-TENG) using roll-to-roll ultraviolet embossing to pattern polyethylene terephthalate sheets. These LS-TENGs can be used to harvest energy from human motion and vehicle motion from embedded devices in floors and roads, respectively. LS-TENG generated a power density of 62.5 mW m⁻². Using roll-to-roll processing technique, we also demonstrate a large scale triboelectric pressure sensor array with pressure detection sensitivity of 1.33 V kPa⁻¹. The large scale pressure sensor array has applications in self-powered motion tracking, posture monitoring and electronic skin applications. This work demonstrates scalable fabrication of TENGs and self-powered pressure sensor arrays, which will lead to extremely low cost and bring them closer to commercial production. PMID:26905285
Sawata, Hiroshi; Tsutani, Kiichiro
2011-06-29
Clinical investigations are important for obtaining evidence to improve medical treatment. Large-scale clinical trials with thousands of participants are particularly important for this purpose in cardiovascular diseases. Conducting large-scale clinical trials entails high research costs. This study sought to investigate global trends in large-scale clinical trials in cardiovascular diseases. We searched for trials on clinicaltrials.gov (URL: http://www.clinicaltrials.gov/) using the key words 'cardio' and 'event' in all fields on 10 April, 2010. We then selected trials with 300 or more participants examining cardiovascular diseases. The search revealed 344 trials that met our criteria. Of the 344 trials, 71% were randomized controlled trials, 15% involved more than 10,000 participants, and 59% were funded by industry. Among RCTs whose results were disclosed, 55% of industry-funded trials and 25% of non-industry-funded trials reported statistically significant superiority over control (p = 0.012, 2-sided Fisher's exact test). Our findings highlight concerns regarding potential bias related to funding sources and underline that researchers should be aware of the importance of trial information disclosure and of conflicts of interest. Continued attention to management and training regarding information disclosure and conflicts of interest for researchers could lead to better clinical evidence and further improvements in the development of medical treatment worldwide.
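The reported group comparison (55% vs. 25% of trials reporting superiority, p = 0.012 by 2-sided Fisher's exact test) can be reproduced in form with a standard contingency-table test; the counts below are hypothetical placeholders, since the abstract gives only percentages:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (the abstract reports only percentages):
#                         superior   not superior
industry_funded     = [33, 27]   # ~55% reporting superiority (assumed counts)
non_industry_funded = [10, 30]   # ~25% reporting superiority (assumed counts)

odds_ratio, p_value = fisher_exact(
    [industry_funded, non_industry_funded], alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```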
Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.
Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra
2016-12-01
This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.
Smectic viral capsids and the aneurysm instability
NASA Astrophysics Data System (ADS)
Dharmavaram, S.; Rudnick, J.; Lawrence, C. M.; Bruinsma, R. F.
2018-05-01
The capsids of certain Archaea-infecting viruses undergo large shape changes, while maintaining their integrity against rupture by osmotic pressure. We propose that these capsids are in a smectic liquid crystalline state, with the capsid proteins assembling along spirals. We show that smectic capsids are intrinsically stabilized against the formation of localized bulges with non-zero Gauss curvature while still allowing for large-scale cooperative shape transformation that involves global changes in the Gauss curvature.
Scaling Effects on Materials Tribology: From Macro to Micro Scale.
Stoyanov, Pantcho; Chromik, Richard R
2017-05-18
The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose to understand how tribological processes are different at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale.
PROBLEM OF FORMING IN A MAN-OPERATOR A HABIT OF TRACKING A MOVING TARGET,
Cybernetics stimulated the large-scale use of the method of functional analogy which makes it possible to compare technical and human activity systems...interesting and highly efficient human activity because of the psychological control factor involved in its operation. The human tracking system is
Plant-soil feedbacks and mycorrhizal type influence temperate forest population dynamics
USDA-ARS's Scientific Manuscript database
Feedback with soil biota is a major driver of diversity within terrestrial plant communities. However, little is known about the factors regulating plant-soil feedback, which can vary from positive to negative among plant species. In a large-scale observational and experimental study involving 55 sp...
Personalised Information Services Using a Hybrid Recommendation Method Based on Usage Frequency
ERIC Educational Resources Information Center
Kim, Yong; Chung, Min Gyo
2008-01-01
Purpose: This paper seeks to describe a personal recommendation service (PRS) involving an innovative hybrid recommendation method suitable for deployment in a large-scale multimedia user environment. Design/methodology/approach: The proposed hybrid method partitions content and user into segments and executes association rule mining,…
Secondary Students' Stable and Unstable Optics Conceptions Using Contextualized Questions
ERIC Educational Resources Information Center
Chu, Hye-Eun; Treagust, David F.
2014-01-01
This study focuses on elucidating and explaining reasons for the stability of and interrelationships between students' conceptions about "Light Propagation" and "Visibility of Objects" using contextualized questions across 3 years of secondary schooling from Years 7 to 9. In a large-scale quantitative study involving 1,233…
Using a Parent Survey to Advance Knowledge about the Nature and Consequences of Fragile X Syndrome
ERIC Educational Resources Information Center
Bailey, Donald B., Jr.; Raspa, Melissa; Olmsted, Murrey G.
2010-01-01
Understanding the nature and consequences of intellectual and developmental disabilities is challenging, especially when the condition is rare, affected individuals are geographically dispersed, and/or resource constraints limit large-scale studies involving direct assessment. Surveys provide an alternative methodology for gathering information…
Plant succession and approaches to community restoration
Bruce A. Roundy
2005-01-01
The processes of vegetation change over time, or plant succession, are also the processes involved in plant community restoration. Restoration efforts attempt to use designed disturbance, seedbed preparation and sowing methods, and selection of adapted and compatible native plant materials to enhance ecological function. The large scale of wildfires and weed invasion...
The Comprehensive Project for Deprived Communities in Israel.
ERIC Educational Resources Information Center
Goldstein, Joseph
A large-scale educational program, involving 30 settlements and neighborhoods that had been defined as suffering from deprivation, this project included a variety of reinforcement and enrichment programs. Information for a case study of the program was collected through interviews. Findings indicated that the guiding principles of the program…
Willingness to Communicate in English: A Model in the Chinese EFL Classroom Context
ERIC Educational Resources Information Center
Peng, Jian-E; Woodrow, Lindy
2010-01-01
This study involves a large-scale investigation of willingness to communicate (WTC) in Chinese English-as-a-foreign-language (EFL) classrooms. A hypothesized model integrating WTC in English, communication confidence, motivation, learner beliefs, and classroom environment was tested using structural equation modeling. Validation of the…
NASA Technical Reports Server (NTRS)
Tomsik, Thomas M.; Meyer, Michael L.
2010-01-01
This paper describes in detail a test program that was initiated at the Glenn Research Center (GRC) involving the cryogenic densification of liquid oxygen (LO2). A large-scale LO2 propellant densification system, rated for 200 gpm and sized for the X-33 LO2 propellant tank, was designed, fabricated and tested at the GRC. Multiple objectives of the test program included validation of LO2 production unit hardware and characterization of densifier performance at design and transient conditions. First, performance data are presented for an initial series of LO2 densifier screening and check-out tests using densified liquid nitrogen. The second series of tests shows performance data collected during LO2 densifier test operations with liquid oxygen as the densified product fluid. An overview of LO2 X-33 tanking operations and load tests with the 20,000 gallon Structural Test Article (STA) is provided. Tank loading tests and the thermal stratification that occurs inside a flight-weight launch vehicle propellant tank were investigated. These operations involved a closed-loop recirculation process of LO2 flow through the densifier and then back into the STA. Finally, in excess of 200,000 gallons of densified LO2 at 120 °R was produced with the propellant densification unit during the demonstration program, an achievement that had never before been accomplished in large-scale cryogenic testing.
Assessing Impacts of Climate Change on Food Security Worldwide
NASA Technical Reports Server (NTRS)
Rosenzweig, Cynthia E.; Antle, John; Elliott, Joshua
2015-01-01
The combination of a warming Earth and an increasing population will likely strain the world's food systems in the coming decades. Experts involved with the Agricultural Model Intercomparison and Improvement Project (AgMIP) focus on quantifying the changes through time. AgMIP, a program begun in 2010, involves about 800 climate scientists, economists, nutritionists, information technology specialists, and crop and livestock experts. In mid-September 2015, the Aspen Global Change Institute convened an AgMIP workshop to draft plans and protocols for assessing global- and regional-scale modeling of crops, livestock, economics, and nutrition across major agricultural regions worldwide. The goal of this Coordinated Global and Regional Integrated Assessments (CGRA) project is to characterize climate effects on large- and small-scale farming systems.
Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried
2015-10-21
A simple transferable adaptive model is developed that allows, for the first time, molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to interpretations involving the formation of pure clusters or the modified random network model. Our adaptive model, which efficiently reproduces known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich, Si-poor nanoparticles in an Mg-poor, Si-rich matrix from a 0.1MgO-0.9SiO2 melt.
Simulations of hypervelocity impacts for asteroid deflection studies
NASA Astrophysics Data System (ADS)
Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.
2016-12-01
The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two- and three-dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.
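The quantity beta has a compact definition. For an impactor of mass m striking a target of mass M at speed U, the imparted velocity change is, in the simplest head-on scalar form (the standard kinetic-impactor relation, with impact geometry ignored):

$$\Delta v = \beta\,\frac{m}{M}\,U, \qquad \beta = 1 + \frac{p_{\mathrm{ejecta}}}{mU},$$

so $\beta = 1$ corresponds to the impactor momentum alone and $\beta > 1$ to the extra recoil carried by crater ejecta, which is precisely what the hydrocode simulations are used to estimate.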
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
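The defining trick of a JFNK scheme is that the Newton linear system J(u)d = -F(u) is solved with a Krylov method that needs only Jacobian-vector products, and those can be approximated by finite differences of the residual, so the Jacobian is never assembled. A minimal generic sketch (SciPy, with a toy residual standing in for FLASH's hydro equations):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual F(u); stands in for a discretized stiff system."""
    return u**3 - np.array([1.0, 8.0, 27.0])

def jfnk_step(u, eps=1e-7):
    """One Newton step, solving J(u) d = -F(u) using only J*v products."""
    r = residual(u)

    def jv(v):
        # Finite-difference Jacobian-vector product:
        # J v ~ (F(u + eps*v) - F(u)) / eps, so J is never formed.
        return (residual(u + eps * v) - r) / eps

    J = LinearOperator((u.size, u.size), matvec=jv)
    d, _ = gmres(J, -r)   # Krylov solve, Jacobian-free
    return u + d

u = np.array([0.5, 1.5, 2.5])
for _ in range(20):
    u = jfnk_step(u)
print(u)                  # approaches [1, 2, 3]
```

The preconditioner mentioned in the abstract would enter as an extra operator passed to the Krylov solve; it is omitted here for brevity.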
Self-organizing Large-scale Structures in Earth's Foreshock Waves
NASA Astrophysics Data System (ADS)
Ganse, U.; Pfau-Kempf, Y.; Turc, L.; Hoilijoki, S.; von Alfthan, S.; Vainio, R. O.; Palmroth, M.
2017-12-01
Earth's foreshock is populated by plasma waves in the ULF regime, assumed to be caused by wave instabilities of shock-reflected particle beams. While in-situ observation of these waves has provided plentiful data of their amplitudes, frequencies, obliquities and relation to local plasma conditions, global-scale structures are hard to grasp from observation data alone. The hybrid-Vlasov simulation system Vlasiator, designed for kinetic modeling of the Earth's magnetosphere, has been employed to study foreshock formation under radial and near-radial IMF conditions on global scales. Structures arising in the foreshock can be comprehensively studied and directly compared to observation results. Our modeling results show that foreshock waves present emergent large-scale structures, in which regions of waves with similar phase exist. At the interfaces of these regions ("spines") we observe high wave obliquity, higher beam densities and lower beam velocities than inside them. We characterize these apparently self-organizing structures through the interplay between wave- and beam properties and present the microphysical mechanisms involved in their creation.
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics
Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.
2014-01-01
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314
The EX-SHADWELL-Full Scale Fire Research and Test Ship
1988-01-20
If shipboard testing is necessary after the large scale land tests at China Lake, the EX-SHADWELL has a helo pad and well deck available which makes... b. Data acquisition system started. c. Fire started. d. Data is recorded until all fire activity has ceased. 3.0 THE TEST AREA 3.1 Test... timing clocks will be started at the instant the fuel is lighted. That instant will be time zero. The time the cables become involved will be recorded.
Wiese, Alexandra B; Berrocal, Veronica J; Furst, Daniel E; Seibold, James R; Merkel, Peter A; Mayes, Maureen D; Khanna, Dinesh
2014-11-01
Skin and musculoskeletal involvement are frequently present early in diffuse cutaneous systemic sclerosis (dcSSc). The current study examined the correlates for skin and musculoskeletal measures in a 1-year longitudinal observational study. Patients with dcSSc were recruited at 4 US centers and enrolled in a 1-year study. Prespecified and standardized measures included physician and patient assessments of skin involvement, modified Rodnan skin score (MRSS), durometer score, Health Assessment Questionnaire disability index, serum creatine phosphokinase, tender joint counts, and presence/absence of tendon friction rubs, small joint contractures, and large joint contractures. Additionally, physician and patient global health assessments and health-related quality of life assessments were recorded. Correlations were computed among the baseline global assessments, skin variables, and musculoskeletal variables. Using the followup physician and patient anchors, effect sizes were calculated. A total of 200 patients were studied: 75% were women, mean ± SD age was 50.0 ± 11.9 years, and mean ± SD disease duration from first non-Raynaud's phenomenon symptom was 1.6 ± 1.4 years. Physician global health assessment had large correlations with MRSS (r = 0.60) and physician-reported skin involvement visual analog scale in the last month (r = 0.74), whereas patient global assessment had large correlations with MRSS, the Short Form 36 health survey physical component scale, skin interference, and skin involvement in the last month (r = 0.37-0.72). Four of 9 skin variables had moderate to large effect sizes (0.51-1.09). Physician and patient global assessments have larger correlations with skin measures compared to musculoskeletal measures. From a clinical trial perspective, skin variables were more responsive to change than musculoskeletal variables over a 1-year period, although both provide complementary information. Copyright © 2014 by the American College of Rheumatology.
Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa’avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa
2017-01-01
Background Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia; hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low and middle income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult-to-access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training and other basic services. Methods We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation as the considerations and steps involved have wider implications in health systems in other countries. Results The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. Conclusions This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services. PMID:28567280
Aarons, Gregory A; Fettes, Danielle L; Hurlburt, Michael S; Palinkas, Lawrence A; Gunderson, Lara; Willging, Cathleen E; Chaffin, Mark J
2014-01-01
Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team approach. Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare. Semistructured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment framework. Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration; competing priorities across levels of leadership; power struggles; and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. System-wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes.
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA. © The Author 2014. Published by Oxford University Press.
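The diffusion-kernel idea behind such scale-aware measures is compact: the heat kernel K = exp(-βL) of the graph Laplacian L sweeps from direct-neighbour structure at small diffusion time β to module-level structure at large β. A toy sketch of that construction (NetworkX/SciPy on a small example graph; not the authors' STM code):

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.karate_club_graph()                 # small stand-in for a PPI network
L = nx.laplacian_matrix(G).toarray().astype(float)

for beta in (0.1, 1.0, 10.0):              # small scale -> large scale
    K = expm(-beta * L)                    # heat/diffusion kernel
    # A scale-aware view of node 0: how much diffusion starting there
    # has spread to the rest of the network after "time" beta.
    spread = 1.0 - K[0, 0]
    print(f"beta = {beta:5.1f}   spread from node 0: {spread:.3f}")
```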
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creminelli, Paolo; Gleyzes, Jérôme; Vernizzi, Filippo
2014-06-01
The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³-10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.
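Schematically, the observable targeted here is a pole in the squeezed limit of the equal-time three-point function of two tracer populations A and B: when the EP holds, the $1/q$-enhanced contribution cancels at equal times, while a violation of relative strength $\epsilon$ leaves behind a term of the form (coefficients and time dependence suppressed; this sketches the structure, not the paper's full expression)

$$\lim_{q\to 0}\frac{B_{AB}(\mathbf{q},\mathbf{k},-\mathbf{k}-\mathbf{q})}{P(q)\,P(k)} \;\supset\; \epsilon\,\frac{\mathbf{k}\cdot\mathbf{q}}{q^2},$$

which is why comparing two populations of objects in the squeezed limit isolates EP violation.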
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
Extending large-scale forest inventories to assess urban forests.
Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter
2012-03-01
Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or nearby urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.
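The flavour of such first-phase estimators can be conveyed in the simplest setting. If n sample points are placed uniformly at random over a region of area A and $n_u$ of them fall in urban forest, then (schematic form; the paper's estimators also cover systematic grids and unequal inclusion probabilities)

$$\hat{C} = A\,\hat{p}, \qquad \hat{p} = \frac{n_u}{n}, \qquad \widehat{\mathrm{Var}}(\hat{C}) = A^2\,\frac{\hat{p}(1-\hat{p})}{n-1},$$

an unbiased coverage estimator with a binomial-type variance estimator, of the kind whose performance the simulation study checks.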
Agricultural depopulation in Croatia.
Stambuk, M
1991-01-01
Trends in urban depopulation since 1945 in Yugoslavia and specifically in Croatia are analyzed. Two phases are identified: the first involved the eradication of the peasant farm under the Communist system, which resulted in a large-scale exodus from agricultural to urban-based activities. The second phase, which has lasted until the present, has two features: one is the tendency of those staying on family farms to have other employment off the farm; the other involves the likelihood of seeking employment abroad. (SUMMARY IN FRE AND GER)
Technological disasters, crisis management and leadership stress.
Weisaeth, Lars; Knudsen, Øistein; Tønnessen, Arnfinn
2002-07-01
This paper discusses how psychological stress disturbs decision making during technological crisis and disaster, and how to prevent this from happening. This is exemplified by scientific studies of a Norwegian large scale accident involving hazardous material, and of handling the far-off effects of the nuclear disaster at Chernobyl. The former constitutes an operative level of crisis management, whereas the latter involves crisis management at the strategic and political level. We conclude that stress had a negative effect on decision making in both cases.
Topham, C M; Dalziel, K
1986-01-01
[2-18O]Ribulose 5-phosphate was prepared and shown to be converted enzymically by 6-phosphogluconate dehydrogenase from sheep liver into 6-phosphogluconate with complete retention of the heavy isotope. This finding unequivocally excludes the possibility of a Schiff-base mechanism for the enzyme. The involvement of metal ions has already been excluded, and other possible mechanisms are discussed. The enzyme was purified by an improved large-scale procedure, which is briefly described. PMID:3718491
Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M
2015-07-01
The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Anomalies in the GRBs' distribution
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor
2015-08-01
Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups showed deviations from full randomness at different levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on angular measurements from the BATSE experiment, without any spatial distance indicator. Currently we have more than 361 GRBs with precisely measured positions, optical afterglows and redshifts, mainly due to the observations of the Swift mission. This sample is now large enough that its homogeneity and isotropy on large scales can be checked. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. This is mainly due to significant differences in topographic factors across the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller watersheds and applying the calibrated parameters to the entire watershed. It is anticipated that this case study will provide experience of multi-site calibration in a large-scale basin and a good foundation for simulating other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
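Multi-site calibration of this kind is typically scored with an objective function evaluated at each gauge, commonly the Nash-Sutcliffe efficiency (NSE); a minimal helper (illustrative data and gauge names, not tied to the study's calibration tool):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed/simulated flows at each gauge.
sites = {
    "gauge_1": ([3.2, 4.1, 5.0, 4.4], [3.0, 4.3, 4.8, 4.6]),
    "gauge_2": ([1.1, 0.9, 1.4, 1.2], [1.0, 1.1, 1.3, 1.2]),
}
scores = {name: nse(o, s) for name, (o, s) in sites.items()}
print(scores, "mean NSE:", np.mean(list(scores.values())))
```

Scoring each gauge separately, rather than only the basin outlet, is what lets the calibration respect spatial heterogeneity.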
Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.
Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde
2014-07-01
Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.
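For concreteness, volumetric asymmetry in such studies is usually expressed as a normalized left-right index, e.g. (a common convention, assumed here rather than quoted from the paper):

$$AI = \frac{L - R}{(L + R)/2},$$

so that $AI > 0$ indicates leftward asymmetry and dividing by the mean volume removes overall brain-size effects. The lower repeatability of asymmetries relative to the bilateral volumes themselves is expected from this form: the difference of two noisy volumes amplifies relative measurement error.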
Introduction to bioinformatics.
Can, Tolga
2014-01-01
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: collect statistics from biological data; build a computational model; solve a computational modeling problem; test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
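To make the sequence-analysis category concrete, here is a minimal dynamic-programming computation of a Needleman-Wunsch global alignment score, one of the classical problems such introductions survey (the scoring parameters and sequences are illustrative, not from the chapter):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix of `a` against nothing
        dp[i][0] = i * gap
    for j in range(1, cols):          # aligning a prefix of `b` against nothing
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```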
Verschuur, Margot J; Spinhoven, Philip; Rosendaal, Frits R
2008-01-01
This study tested the hypothesis that large-scale provision of individual medical examination will reduce persistent anxiety about health and subjective health complaints after involvement in an aviation disaster with alleged exposure to hazardous chemicals. Three measurements were performed: during the medical examination, 6 weeks later during consultation with the physician and 12 weeks after the first examination. Rescue workers (n=1736) and residents (n=339) involved in the disaster participated. Standardized questionnaires on health complaints and concerns were administered. Both groups reported increased health anxiety and somatic sensitivity after 12 weeks. Residents reported more posttraumatic stress symptoms, whereas rescue workers seemed to have gained a better quality of life and were somewhat reassured. Participants who attended the consultation with the physician showed increased reassurance scores after 6 weeks, but their worries had increased again on follow-up. However, nonattendees reported more health anxiety on follow-up. More participants judged participation to have had a positive impact, instead of a negative impact, on their health. Our study does not indicate that a large-scale medical examination offered after involvement in a disaster has long-lasting reassuring effects and suggests that such examination may have counterproductive effects by sensitizing participants to health complaints.
Large deviations in the presence of cooperativity and slow dynamics
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-06-01
We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
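For reference, the dynamical large-deviation formalism invoked here is conventionally set up as follows: for a trajectory observable $A_\tau$ accumulated over time $\tau$, with time-intensive value $a = A_\tau/\tau$,

$$\lambda(k) = \lim_{\tau\to\infty} \frac{1}{\tau}\,\ln\!\left\langle e^{k A_\tau}\right\rangle, \qquad J(a) = \sup_{k}\,\bigl[\,k a - \lambda(k)\,\bigr], \qquad P(a) \asymp e^{-\tau J(a)},$$

so a point where the scaled cumulant generating function $\lambda(k)$ is non-analytic (the singularities discussed above) translates into a change of shape of the rate function $J(a)$, such as a developing flat or non-strictly-convex region.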
Automatic three-dimensional measurement of large-scale structure based on vision metrology.
Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng
2014-01-01
All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path, based on a novel voting strategy, among all possible paths. Experiments on a fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
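A rough sketch of the disc-detection and grouping stage might look like the following (OpenCV and scikit-learn based; the detector thresholds, clustering radius, and input file are hypothetical, and the paper's own detection and coding algorithms are more involved):

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

# Detect bright circular blobs (candidate retroreflective discs).
params = cv2.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.8
params.filterByColor = True
params.blobColor = 255                      # bright blobs on a dark background
detector = cv2.SimpleBlobDetector_create(params)

image = cv2.imread("target_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
keypoints = detector.detect(image)
pts = np.array([kp.pt for kp in keypoints])

# Group discs into coded targets: discs belonging to one target sit close together.
labels = DBSCAN(eps=40.0, min_samples=3).fit_predict(pts)
for target_id in set(labels) - {-1}:        # -1 marks unclustered discs
    print(target_id, pts[labels == target_id].mean(axis=0))  # target centroid
```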
Extreme-Scale De Novo Genome Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob
De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and the communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
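The first stage of such a pipeline, k-mer analysis, is easy to illustrate in miniature. The serial sketch below (illustrative only; HipMer distributes these counts in hash tables across thousands of nodes) counts k-mers and discards likely-erroneous singletons:

```python
from collections import Counter

def kmer_spectrum(reads, k=31):
    """Count k-mers across reads: the first (and cheapest) stage of de novo assembly."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTG",
         "CGTACGTACGTACGTACGTACGTACGTACGTGA"]
spectrum = kmer_spectrum(reads, k=31)
# Singleton k-mers usually reflect sequencing errors and are filtered out.
solid = {km: c for km, c in spectrum.items() if c >= 2}
print(len(spectrum), len(solid))
```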
Fluctuations, ghosts, and the cosmological constant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, T.; Holdom, B.
2004-12-15
For a large region of parameter space involving the cosmological constant and mass parameters, we discuss fluctuating spacetime solutions that are effectively Minkowskian on large time and distance scales. Rapid, small amplitude oscillations in the scale factor have a frequency determined by the size of a negative cosmological constant. A field with modes of negative energy is required. If it is gravity that induces a coupling between the ghostlike and normal fields, we find that this results in stochastic rather than unstable behavior. The negative energy modes may also permit the existence of Lorentz invariant fluctuating solutions of finite energy density. Finally we consider higher derivative gravity theories and find oscillating metric solutions in these theories without the addition of other fields.
Large scale cryogenic fluid systems testing
NASA Technical Reports Server (NTRS)
1992-01-01
NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life-cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1), assuming no correlation within or across variables; Model (2), assuming no correlation across variables but allowing correlations within a variable across different sites; and Model (3), allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discussed our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
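To show what an intervention effect means in this setting, here is a minimal single-equation sketch on synthetic data (series length, intervention date, and effect size are invented); the SUR models in the study generalize this by additionally estimating correlations among the error terms of several such equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t0 = 120, 60                                  # monthly series; remediation starts at t0
t = np.arange(n)
step = (t >= t0).astype(float)                   # intervention indicator
y = 5.0 - 1.2 * step + rng.normal(0, 0.5, n)     # synthetic water-quality series

X = np.column_stack([np.ones(n), t / n, step])   # intercept, trend, intervention
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares
print(f"estimated intervention effect: {beta[2]:.2f}")  # close to the true -1.2
```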
The graphene phonon dispersion with C12 and C13 isotopes
NASA Astrophysics Data System (ADS)
Whiteway, Eric; Bernard, Simon; Yu, Victor; Austing, D. Guy; Hilke, Michael
2013-12-01
Using very uniform, large scale chemical vapor deposition (CVD) grown graphene transferred onto silicon, we were able to identify 15 distinct Raman lines associated with graphene monolayers. This was possible thanks to a combination of different carbon isotopes, different Raman laser energies, and extensive averaging without increasing the laser power. This allowed us to obtain a detailed experimental phonon dispersion relation for many points in the Brillouin zone. We further identified a D+D' peak corresponding to a double-phonon process involving both an inter- and an intra-valley phonon. In order both to eliminate substrate effects and to probe large areas, we studied Raman scattering for large scale CVD grown graphene using two different isotopes (C12 and C13), so that we could effectively exclude and subtract the substrate contributions, since a heavier mass downshifts only the vibrational properties while keeping all other properties the same.
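The isotope shift exploited here follows from the mass dependence of lattice vibrations: with unchanged force constants, phonon frequencies scale as $\omega \propto m^{-1/2}$, so for a pure C13 layer

$$\frac{\omega_{13}}{\omega_{12}} = \sqrt{\frac{m_{12}}{m_{13}}} = \sqrt{\frac{12}{13}} \approx 0.961,$$

which moves a mode near $1580\ \mathrm{cm^{-1}}$ (the G peak) down by roughly $60\ \mathrm{cm^{-1}}$, enough to separate the two isotopes' Raman lines from each other and from the substrate background.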
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
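For orientation, the kind of dependency graph such workflow systems manage can be sketched in plain Luigi, which SciLuigi extends (the task names and five-fold fan-out below are hypothetical; SciLuigi's contribution is, among other things, separately named input/output ports so workflows can be rewired without editing task classes):

```python
import luigi

class TrainModel(luigi.Task):
    fold = luigi.IntParameter()

    def output(self):
        return luigi.LocalTarget(f"model_fold{self.fold}.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write(f"model for fold {self.fold}\n")  # placeholder for real training

class CrossValidate(luigi.Task):
    def requires(self):
        return [TrainModel(fold=i) for i in range(5)]  # fan-out over CV folds

    def output(self):
        return luigi.LocalTarget("cv_report.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write(f"aggregated {len(self.input())} folds\n")

if __name__ == "__main__":
    luigi.build([CrossValidate()], local_scheduler=True)
```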
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal to eventually make it usable in a clinical setting. PMID:27081299
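The update at the heart of the algorithm is compact enough to state directly. The sketch below is a serial numpy toy (the paper instead expresses the forward and back projections as distributed sparse matrix-vector products on GraphX) showing one standard form of the MLEM iteration:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update: x <- x / (A^T 1) * A^T (y / (A x)), applied elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                   # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                                   # forward projection
        ratio = y / np.maximum(proj, 1e-12)            # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # backproject and update
    return x

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0],
              [0.5, 0.5, 0.5]])   # toy system matrix
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                    # noiseless measurements
print(mlem(A, y))                 # converges toward x_true
```

Each iteration forward-projects the current estimate, compares it with measured counts, and backprojects the ratio, so the dominant cost is exactly the pair of sparse matrix-vector products that Spark/GraphX parallelizes.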
The challenge of translating notions of ecosystem services from the theoretical arena to practical application at large scales (e.g. national) requires an interdisciplinary approach. To meet this challenge, we convened a workshop involving a broad suite of natural and social scie...
Physics and biochemical engineering: 3
NASA Astrophysics Data System (ADS)
Fairbrother, Robert; Riddle, Wendy; Fairbrother, Neil
2006-09-01
Once an antibiotic has been produced on a large scale, as described in our preceding articles, it has to be extracted and purified. Filtration and centrifugation are the two main ways of doing this, and the design of industrial processing systems is governed by simple physics involving factors such as pressure, viscosity and rotational motion.
Design for a Study of American Youth.
ERIC Educational Resources Information Center
Flanagan, John C.; And Others
Project TALENT is a large-scale, long-range educational research effort aimed at developing methods for the identification, development, and utilization of human talents, which has involved some 440,000 students in 1,353 public, private, and parochial secondary schools in all parts of the country. Data collected through teacher-administered tests,…
Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories
ERIC Educational Resources Information Center
Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.
2011-01-01
A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…
Responding to Terrorism Victims: Oklahoma City and Beyond.
ERIC Educational Resources Information Center
Dinsmore, Janet
This report identifies the special measures needed to protect the rights and meet the needs of victims of a large-scale terrorist attack involving mass casualties. In particular, it demonstrates efforts required to ensure an effective response to victims' rights and their short- and long-term emotional and psychological needs as an integral part…
Investigating Sexual Abuse: Findings of a 15-Year Longitudinal Study
ERIC Educational Resources Information Center
McCormack, Bob; Kavanagh, Denise; Caffrey, Shay; Power, Anne
2005-01-01
Background: There is a lack of longitudinal large-scale studies of sexual abuse in intellectual disability services. Such studies offer opportunities to examine patterns in disclosure, investigation and outcomes, and to report on incidence and trends. Methods: All allegations of sexual abuse (n = 250) involving service users as victims or…
ERIC Educational Resources Information Center
Cheek, Kim A.
2017-01-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude…
Models of Design: Envisioning a Future Design Education
ERIC Educational Resources Information Center
Friedman, Ken
2012-01-01
This article offers a large-scale view of how design fits in the world economy today, and the role of design education in preparing designers for their economic and professional role. The current context of design involves broad-based historical changes including a major redistribution of geopolitical and industrial power from the West to the…
ERIC Educational Resources Information Center
Tuohilampi, Laura; Hannula, Markku S.; Varas, Leonor; Giaconi, Valentina; Laine, Anu; Näveri, Liisa; i Nevado, Laia Saló
2015-01-01
Large-scale studies measure mathematics-related affect using questionnaires developed by researchers in primarily English-based countries and according to Western-based theories. Influential comparative conclusions about different cultures and countries are drawn based on such measurements. However, there are certain premises involved in these…
Reprogramming Enhancers to Drive Metastasis.
Mostoslavsky, Raul; Bardeesy, Nabeel
2017-08-24
Acquired molecular changes can promote the spreading of primary tumor cells to distant tissues. In this issue of Cell, Roe et al. show that metastatic progression of pancreatic cancer involves large-scale enhancer reprogramming by Foxa1, which activates a transcriptional program specifying early endodermal stem cells. Copyright © 2017 Elsevier Inc. All rights reserved.
Textual and Discoursal Resources Used in the Essay Genre in Sociology and English
ERIC Educational Resources Information Center
Bruce, Ian
2010-01-01
Research that has examined university assignment writing has varied from large-scale, inventorial surveys across disciplines to more specific, finer-grained analyses of the assignment requirements of specific disciplines. However, while such research has involved surveys of the views and expectations of faculty or the analysis of assignment tasks,…
Behavioral Effects Within and Between Individual and Group Reinforcement Procedures.
ERIC Educational Resources Information Center
Reese, Sandra C.; And Others
This paper briefly outlines the outcomes of a large-scale behavioral program, Preparation through Responsive Educational Programs (PREP), involving students with academic and social deficits from a 1350-student junior high school. Overall program effectiveness was assessed by outcome criteria of total school grades, grades in non-PREP classes,…
Design and Large-Scale Evaluation of Educational Games for Teaching Sorting Algorithms
ERIC Educational Resources Information Center
Battistella, Paulo Eduardo; von Wangenheim, Christiane Gresse; von Wangenheim, Aldo; Martina, Jean Everson
2017-01-01
The teaching of sorting algorithms is an essential topic in undergraduate computing courses. Typically the courses are taught through traditional lectures and exercises involving the implementation of the algorithms. As an alternative, this article presents the design and evaluation of three educational games for teaching Quicksort and Heapsort.…
ERIC Educational Resources Information Center
Kaspar, Roman; Hartig, Johannes
2016-01-01
The care of older people was described as involving substantial emotion-related affordances. Scholars in vocational training and nursing disagree whether emotion-related skills could be conceptualized and assessed as a professional competence. Studies on emotion work and empathy regularly neglect the multidimensionality of these phenomena and…
Implementation Blueprint and Self-Assessment: Positive Behavioral Interventions and Supports
ERIC Educational Resources Information Center
Technical Assistance Center on Positive Behavioral Interventions and Supports, 2010
2010-01-01
A "blueprint" is a guide designed to improve large-scale implementations of a specific systems or organizational approach, like School-Wide Positive Behavior Support (SWPBS). This blueprint is intended to make the conceptual theory, organizational models, and practices of SWPBS more accessible for those involved in enhancing how schools,…
ERIC Educational Resources Information Center
Taylor, Catherine G.; Meyer, Elizabeth J.; Peter, Tracey; Ristock, Janice; Short, Donn; Campbell, Christopher
2016-01-01
The Every Teacher Project involved large-scale survey research conducted to identify the beliefs, perspectives, and practices of Kindergarten to Grade 12 educators in Canadian public schools regarding lesbian, gay, bisexual, transgender, and queer (LGBTQ)-inclusive education. Comparisons are made between LGBTQ and cisgender heterosexual…
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
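For readers unfamiliar with the metrics, the Nutrient Spiralling quantities used above are conventionally defined, for a reach of discharge $Q$, width $w$, and upstream concentration $C_0$, by the first-order longitudinal decline

$$C(x) = C_0\,e^{-x/S_w}, \qquad v_f = \frac{Q}{w\,S_w}, \qquad U = v_f\,C,$$

where $S_w$ is the uptake length, $v_f$ the uptake velocity, and $U$ the areal uptake rate; these reach-scale parameters are what the watershed model adopts directly as its in-stream phosphorus sink.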
Cortico-hippocampal systems involved in memory and cognition: the PMAT framework.
Ritchey, Maureen; Libby, Laura A; Ranganath, Charan
2015-01-01
In this chapter, we review evidence that the cortical pathways to the hippocampus appear to extend from two large-scale cortical systems: a posterior medial (PM) system that includes the parahippocampal cortex and retrosplenial cortex, and an anterior temporal (AT) system that includes the perirhinal cortex. This "PMAT" framework accounts for differences in the anatomical and functional connectivity of the medial temporal lobes, which may underpin differences in cognitive function between the systems. The PM and AT systems make distinct contributions to memory and to other cognitive domains, and convergent findings suggest that they are involved in processing information about contexts and items, respectively. In order to support the full complement of memory-guided behavior, the two systems must interact, and the hippocampal and ventromedial prefrontal cortex may serve as sites of integration between the two systems. We conclude that when considering the "connected hippocampus," inquiry should extend beyond the medial temporal lobes to include the large-scale cortical systems of which they are a part. © 2015 Elsevier B.V. All rights reserved.
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.
Search and rescue response to a large-scale rockfall disaster.
Procter, Emily; Strapazzon, Giacomo; Balkenhol, Karla; Fop, Ernst; Faggionato, Alessandro; Mayr, Karl; Falk, Markus; Brugger, Hermann
2015-03-01
To describe the prehospital management and safety of search and rescue (SAR) teams involved in a large-scale rockfall disaster and monitor the acute and chronic health effects on personnel with severe dolomitic dust exposure. SAR personnel underwent on-site medical screening and lung function testing 3 months and 3 years after the event. The emergency dispatch center was responsible for central coordination of resources. One hundred fifty SAR members from multidisciplinary air- and ground-based teams as well as geotechnical experts were dispatched to a provisionary operation center. Acute exposure to dolomite dust with detectable silicon and magnesium concentrations was not associated with (sub)acute or chronic sequelae or a clinically significant impairment in lung function in exposed personnel. The risk for personnel involved in mountain SAR operations is rarely reported and not easily investigated or quantified. This case exemplifies the importance of a multiskilled team and additional considerations for prehospital management during natural hazard events. Safety plans should include compulsory protective measures and medical monitoring of personnel. Copyright © 2015 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wehner, Michael; Pall, Pardeep; Zarzycki, Colin; Stone, Daithi
2016-04-01
Probabilistic extreme event attribution is especially difficult for weather events that are caused by extremely rare large-scale meteorological patterns. Traditional modeling techniques have involved using ensembles of climate models, either fully coupled or with prescribed ocean and sea ice. Ensemble sizes for the latter case range from several hundred to tens of thousands. However, even if the simulations are constrained by the observed ocean state, the requisite large-scale meteorological pattern may not occur frequently enough, or even at all, in free-running climate model simulations. We present a method to ensure that simulated events similar to the observed event are modeled with enough fidelity that robust statistics can be determined given the large-scale meteorological conditions. By initializing suitably constrained short-term ensemble hindcasts of both the actual weather system and a counterfactual weather system in which the human interference in the climate system is removed, the human contribution to the magnitude of the event can be determined. However, the change (if any) in the probability of an event of the observed magnitude is conditional not only on the state of the ocean/sea-ice system but also on the prescribed initial conditions determined by the causal large-scale meteorological pattern. We will discuss the implications of this technique through two examples: the 2013 Colorado flood and the 2013 Typhoon Haiyan.
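The attribution statistics produced by such paired hindcast ensembles reduce to exceedance probabilities in the two worlds. A toy sketch follows (all numbers invented; in practice the samples come from the constrained hindcasts described above):

```python
import numpy as np

rng = np.random.default_rng(42)
threshold = 150.0  # hypothetical event magnitude (e.g., mm of rain)

# Hypothetical hindcast ensembles: factual vs. counterfactual (human influence removed)
factual = rng.gumbel(loc=120, scale=15, size=400)
counterfactual = rng.gumbel(loc=112, scale=15, size=400)

p1 = np.mean(factual >= threshold)         # exceedance probability, actual world
p0 = np.mean(counterfactual >= threshold)  # exceedance probability, counterfactual world
print(f"probability ratio PR = {p1 / p0:.2f}")
print(f"fraction of attributable risk FAR = {1 - p0 / p1:.2f}")
```

A probability ratio above 1 indicates that human influence made an event of this magnitude more likely; confidence intervals would typically come from bootstrap resampling of the two ensembles.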
Chen, Fei-Fei; Yang, Zi-Yue; Zhu, Ying-Jie; Xiong, Zhi-Chao; Dong, Li-Ying; Lu, Bing-Qiang; Wu, Jin; Yang, Ri-Long
2018-01-09
To date, the scaled-up production and large-area applications of superhydrophobic coatings are limited because of complicated procedures, environmentally harmful fluorinated compounds, restrictive substrates, expensive equipment, and raw materials usually involved in the fabrication process. Herein, the facile, low-cost, and green production of superhydrophobic coatings based on hydroxyapatite nanowire bundles (HNBs) is reported. Hydrophobic HNBs are synthesised by using a one-step solvothermal method with oleic acid as the structure-directing and hydrophobic agent. During the reaction process, highly hydrophobic C-H groups of oleic acid molecules can be attached in situ to the surface of HNBs through the chelate interaction between Ca2+ ions and carboxylic groups. This facile synthetic method allows the scaled-up production of HNBs up to about 8 L, which is the largest production scale of superhydrophobic paint based on HNBs ever reported. In addition, the design of the 100 L reaction system is also shown. The HNBs can be coated on any substrate with an arbitrary shape by the spray-coating technique. The self-cleaning ability in air and oil, high-temperature stability, and excellent mechanical durability of the as-prepared superhydrophobic coatings are demonstrated. More importantly, the HNBs are coated on large-sized practical objects to form large-area superhydrophobic coatings. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample sizes. The obtained results are based on computational methodologies which involve combining coalescence time scale changes with techniques of integral transformations and analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for the analysis of a large human mitochondrial DNA dataset. PMID:28170404
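For orientation, the constant-size baseline that the paper's time-scale changes generalize is the Kingman coalescent, in which the epoch during which the sample has $k$ ancestral lineages is exponentially distributed:

$$T_k \sim \operatorname{Exp}\!\left(\binom{k}{2}\right), \qquad \mathbb{E}[T_k] = \frac{2}{k(k-1)}, \qquad \mathbb{E}[T_{\mathrm{MRCA}}] = \sum_{k=2}^{n}\frac{2}{k(k-1)} = 2\left(1-\frac{1}{n}\right)$$

in coalescent time units. For populations of time-varying size these epoch distributions lose their simple exponential form, which is what makes large-sample computations demanding.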
NASA Technical Reports Server (NTRS)
Ward, William R.; Rudy, Donald J.
1991-01-01
The large-scale oscillations generated by the obliquity of Mars through spin-axis and orbit-plane precessions constitute basic climate system drivers with periodicities of 100,000 yrs in differential spin axis-orbit precession rates and of over 1 million yrs in amplitude modulations due to orbital-inclination changes. Attention is presently given to a third time-scale for climate change, which involves a possible spin-spin resonance and whose mechanism operates on a 10-million-yr time-scale: this effect implies an average obliquity increase for Mars of 15 deg only 5 million yrs ago, with important climatic consequences.
Lan, Hui; Carson, Rachel; Provart, Nicholas J; Bonner, Anthony J
2007-09-21
Arabidopsis thaliana is the model species of current plant genomic research with a genome size of 125 Mb and approximately 28,000 genes. The function of half of these genes is currently unknown. The purpose of this study is to infer gene function in Arabidopsis using machine-learning algorithms applied to large-scale gene expression data sets, with the goal of identifying genes that are potentially involved in plant response to abiotic stress. Using in-house and publicly available data, we assembled a large set of gene expression measurements for A. thaliana. Using those genes of known function, we first evaluated and compared the ability of basic machine-learning algorithms to predict which genes respond to stress. Predictive accuracy was measured using ROC50 and precision curves derived through cross-validation. To improve accuracy, we developed a method for combining these classifiers using a weighted-voting scheme. The combined classifier was then trained on genes of known function and applied to genes of unknown function, identifying genes that potentially respond to stress. Visual evidence corroborating the predictions was obtained using electronic Northern analysis. Three of the predicted genes were chosen for biological validation. Gene knockout experiments confirmed that all three are involved in a variety of stress responses. The biological analysis of one of these genes (At1g16850) is presented here, where it is shown to be necessary for the normal response to temperature and NaCl. Supervised learning methods applied to large-scale gene expression measurements can be used to predict gene function. However, the ability of basic learning methods to predict stress response varies widely and depends heavily on how much dimensionality reduction is used. Our method of combining classifiers can improve the accuracy of such predictions (in this case, predictions of genes involved in stress response in plants), and it effectively chooses the appropriate amount of dimensionality reduction automatically. The method provides a useful means of identifying genes in A. thaliana that potentially respond to stress, and we expect it would be useful in other organisms and for other gene functions.
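The weighted-voting idea is straightforward to sketch with off-the-shelf components (the synthetic features below stand in for expression data; the paper's own base classifiers, weighting scheme, and dimensionality-reduction choices differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical stand-in for expression profiles labeled stress-responsive or not
X, y = make_classification(n_samples=400, n_features=50, n_informative=10, random_state=0)

clf = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
    weights=[2, 3, 1],  # in practice, set from each classifier's validation accuracy
)
print(cross_val_score(clf, X, y, cv=5).mean())
```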
Fields, Sherecce; Edens, John F; Smith, Shannon Toney; Rulseh, Allison; Donnellan, M Brent; Ruiz, Mark A; McDermott, Barbara E; Douglas, Kevin S
2015-12-01
Impulsivity is an important component of many forms of psychopathology. Though widely used as an index of this construct, the 30-item Barratt Impulsiveness Scale-11 (BIS-11) has demonstrated questionable psychometric properties in several research reports. An 8-item shortened version has recently been proposed, the Barratt Impulsiveness Scale-Brief (BIS-Brief) form, which was designed to overcome some of the limitations of the longer scale. In this report, we examine the internal structure and theoretically relevant external correlates of this new short form in large archival samples of individuals involved in the criminal justice system (prison inmates, substance abusers in mandatory treatment, and forensic inpatients). Confirmatory factor analysis of the BIS-Brief indicates adequate fit following a relatively minor modification. Correlations between the BIS-Brief and an array of criterion measures (other self-report scales, interview-based measures, and behavioral outcomes) are consistent with predictions and show relatively little or no decrement in predictive validity when compared with the 30-item BIS-11. Our results suggest that the BIS-Brief is a promising brief measure of impulsivity that evinces good psychometric properties across a range of offender samples. (c) 2015 APA, all rights reserved.
Stratiform clouds and their interaction with atmospheric motion
NASA Technical Reports Server (NTRS)
Clark, John H. E.; Shirer, Hampton N.
1990-01-01
During 1989 and 1990, the researchers saw the publication of two papers and the submission of a third for review on work supported primarily by the previous contract, NAS8-36150; the delivery of an invited talk at the SIAM Conference on Dynamical Systems in Orlando, Florida; and the start of two new projects on the radiative effects of stratocumulus on the large-scale flow. The published papers discuss aspects of stratocumulus circulations (Laufersweiler and Shirer, 1989) and the Hadley to Rossby regime transition in rotating spherical systems (Higgins and Shirer, 1990). The submitted paper (Haack and Shirer, 1990) discusses a new nonlinear model of roll circulations that are forced both dynamically and thermally. The invited paper by H. N. Shirer and R. Wells presented an objective means for determining appropriate truncation levels for low-order models of flows involving two incommensurate periods; this work has application to the Hadley to Rossby transition problem in quasi-geostrophic flows (Moroz and Holmes, 1984). The new projects involve the development of a multi-layered quasi-geostrophic channel model for studying the modulation of the large-scale flow by stratocumulus clouds that typically develop off the coasts of continents. In this model the diabatic forcing in the lowest layer will change in response to the (parameterized) development of extensive fields of stratocumulus clouds. To guide creation of this parameterization scheme, the researchers are producing climatologies of stratocumulus frequency and correlating these frequencies with the phasing and amplitude of the large-scale flow pattern. The above topics are discussed in greater detail.
Weinfurt, Kevin P; Hernandez, Adrian F; Coronado, Gloria D; DeBar, Lynn L; Dember, Laura M; Green, Beverly B; Heagerty, Patrick J; Huang, Susan S; James, Kathryn T; Jarvik, Jeffrey G; Larson, Eric B; Mor, Vincent; Platt, Richard; Rosenthal, Gary E; Septimus, Edward J; Simon, Gregory E; Staman, Karen L; Sugarman, Jeremy; Vazquez, Miguel; Zatzick, Douglas; Curtis, Lesley H
2017-09-18
The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care. However, experience with these trials is nascent and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped. To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design. In this article, we describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs. A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.
Moving contact lines on vibrating surfaces
NASA Astrophysics Data System (ADS)
Solomenko, Zlatko; Spelt, Peter; Scott, Julian
2017-11-01
Large-scale simulations of flows with moving contact lines under realistic conditions generally require a subgrid-scale model (analyses based on matched asymptotics) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. Introduction of complexities would require further investigation of the contact-line region, however. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting/dewetting. The challenge is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives will also be presented on the final objective of the work, which is to develop a subgrid-scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under the allocation A0012B06893 made by GENCI.
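The steady, homogeneous-substrate theory being extended here is commonly summarized by the Cox-Voinov relation between the apparent contact angle at distance $x$ from the contact line and the microscopic angle at the slip length $\lambda$:

$$\theta_{\mathrm{app}}^{3}(x) = \theta_{m}^{3} + 9\,\mathrm{Ca}\,\ln\frac{x}{\lambda}, \qquad \mathrm{Ca} = \frac{\mu\,U_{cl}}{\sigma},$$

valid for small angles and small capillary number $\mathrm{Ca}$ (with $\mu$ the viscosity, $\sigma$ the surface tension, and $U_{cl}$ the contact-line speed). Substrate vibration makes $U_{cl}$, and hence $\mathrm{Ca}$ and the interface shape, time dependent, which is precisely the regime the simulations probe.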
[Epidemiological methods used in studies in the prevalence of Tourette syndrome].
Stefanoff, Paweł; Mazurek, Jacek
2003-01-01
Tourette syndrome (TS) prevalence has been studied since the early 1980s. Its clinical course is characterised by the co-occurrence of motor and vocal tics. Results of previous epidemiological studies were surprisingly divergent: the prevalence varied from 0.5 to 115 cases per 10,000 population. The disease, previously recognised as extremely rare and severe, is now considered quite common, often with a moderate course. Selected methods used in studies of TS prevalence, and an analysis of their possible impact on study results, are presented. The studies were divided into three groups (studies of hospitalised populations, large-scale screenings, and studies involving school populations) based on the characteristics and size of the populations, the methods of subject selection, and the diagnostic and screening methods used. Studies of hospitalised populations involved patients with the most severe symptoms, in different age groups, and used different methods of final diagnosis confirmation; TS prevalence varied from 0.5 up to 15 cases per 10,000 population. Procedures used in large-scale screening studies made it possible to eliminate potential selection bias: large populations were studied using transparent and reproducible confirmation of diagnoses, whose validity was additionally checked in parallel validity studies. TS prevalence was in the range of 4.3 to 10 cases per 10,000 population. The highest TS prevalence was obtained in studies involving schoolchildren, in which data were gathered from multiple sources (parents, teachers, and children, as well as classroom observation) and diagnoses were made by experienced clinicians; TS prevalence obtained in school population studies was between 36.2 and 115 per 10,000 population.
Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing
2016-11-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.
EPA, in collaboration with FHWA, has been involved in a large-scale monitoring research study in an effort to characterize highway vehicle emissions in a near-road environment. The pollutants of interest include particulate matter with aerodynamic diameter less than 2.5 microns ...
PISA Lends Legitimacy: A Study of Education Policy Changes in Germany and Sweden after 2000
ERIC Educational Resources Information Center
Ringarp, Johanna
2016-01-01
School issues have become increasingly important in public elections and political debates, leading to increased focus on the results students achieve in international large-scale assessments and in the rankings of the involved countries. One of the most important studies of scholastic performance is the Programme for International Student…
ERIC Educational Resources Information Center
Rose, Chad A.; Simpson, Cynthia G.; Moss, Aaron
2015-01-01
Bullying has been the topic of much debate and empirical investigations over the past decade. Contemporary literature contends that students with disabilities may be overrepresented within the bullying dynamic as both perpetrators and victims. Unfortunately, prevalence rates associated with the representation of students with disabilities is…
Training and Scoring Issues Involved in Large-Scale Writing Assessments.
ERIC Educational Resources Information Center
Moon, Tonya R.; Hughes, Kevin R.
2002-01-01
Examined a scoring anomaly that became apparent in a state-mandated writing assessment. Results for 3,660 essays by sixth graders show that using a spiral model for training raters and scoring papers results in higher mean ratings than does using a sequential model for training and scoring. Findings demonstrate the importance of making decisions…
ERIC Educational Resources Information Center
Baynham, Mike; Hanušová, Jolana
2017-01-01
In this paper we discuss a multilingual interactional event that involves both interpreting and literacy work, part of a large scale study on translanguaging in superdiverse urban settings. In the first part of the interaction, the center/periphery dynamic is played out in what might be called "contested translanguaging" between Standard…
ERIC Educational Resources Information Center
Schlenker, Richard M.; And Others
Information is presented about the problems involved in using sea water in the steam propulsion systems of large, modern ships. Discussions supply background chemical information concerning the problems of corrosion, scale buildup, and sludge production. Suggestions are given for ways to maintain a good water treatment program to effectively deal…
Textbook Usage in the United States: The Case of U.S. History
ERIC Educational Resources Information Center
Wakefield, John F.
2006-01-01
The purpose of this presentation was to interpret the results of two large-scale assessments of textbook usage in light of criticism that textbooks are ineffective teaching/learning tools. One assessment occurred as a follow-up to a Schools and Staffing Survey involving 3,994 classroom teachers, who were asked about their classroom practices in…
James S. Clark; Louis Iverson; Christopher W. Woodall; Craig D. Allen; David M. Bell; Don C. Bragg; Anthony W. D' Amato; Frank W. Davis; Michelle H. Hersh; Ines Ibanez; Stephen T. Jackson; Stephen Matthews; Neil Pederson; Matthew Peters; Mark W. Schwartz; Kristen M. Waring; Niklaus E. Zimmermann
2016-01-01
We synthesize insights from current understanding of drought impacts at stand-to-biogeographic scales, including management options, and we identify challenges to be addressed with new research. Large stand-level shifts underway in western forests already are showing the importance of interactions involving drought, insects, and fire. Diebacks, changes in composition...
Soft Power and Hard Measures: Large-Scale Assessment, Citizenship and the European Union
ERIC Educational Resources Information Center
Rutkowski, David; Engel, Laura C.
2010-01-01
This article explores the International Civic and Citizenship Education Study (ICCS) with particular emphasis on the European Union's (EU's) involvement in the regional portion. Using the ICCS, the EU actively combines hard measures with soft power, allowing the EU to define and steer cross-national rankings of values of EU citizenship. The…
ERIC Educational Resources Information Center
Tretter, Thomas R.; Thornburgh, William R.; Duckwall, Mark
2016-01-01
Supporting elementary student understandings of ideas related to Earth's Place in the Universe (ESS1) can be challenging, especially given the large time and distance scales involved with many of the concepts. However, with effective use of crosscutting concepts and science and engineering practices, important concepts within this content domain…
Designing Professional Learning for Effecting Change: Partnerships for Local and System Networks
ERIC Educational Resources Information Center
Wyatt-Smith, Claire; Bridges, Susan; Hedemann, Maree; Neville, Mary
2008-01-01
This paper presents (i) a purpose-built conceptual model for professional learning and (ii) a leadership framework designed to support a large-scale project involving diverse sites across the state of Queensland, Australia. The project had as its focus teacher-capacity building and ways to improve literacy and numeracy outcomes for students at…
Moving Knowledge Around: Strategies for Fostering Equity within Educational Systems
ERIC Educational Resources Information Center
Ainscow, Mel
2012-01-01
This paper describes and analyses the work of a large scale improvement project in England in order to find more effective ways of fostering equity within education systems. The project involved an approach based on an analysis of local context, and used processes of networking and collaboration in order to make better use of available expertise.…
Shuang Yu: Vertical and Horizontal Dimensions of China's Extraordinary Learning Village
ERIC Educational Resources Information Center
Boshier, Roger; Huang, Yan
2007-01-01
The Chinese Communist Party has invoked the Faure report as part of a large-scale learning initiative involving 61 cities and numerous streets, neighbourhoods and villages. By embracing western ideas and infusing them with Chinese characteristics, the Central School of the Communist Party has embarked on what looks increasingly like the 5th…
Gender Differences in Processing Speed: A Review of Recent Research
ERIC Educational Resources Information Center
Roivainen, Eka
2011-01-01
A review of recent large-scale studies on gender differences in processing speed and on the cognitive factors assumed to affect processing speed was performed. It was found that females have an advantage in processing speed tasks involving digits and alphabets as well as in rapid naming tasks while males are faster on reaction time tests and…
ERIC Educational Resources Information Center
DOLBY, J.L.; AND OTHERS
The study is concerned with the linguistic problems involved in text compression: extracting, indexing, and the automatic creation of special-purpose citation dictionaries. In spite of early success in using large-scale computers to automate certain human tasks, these problems remain among the most difficult to solve. Essentially, the problem is to…
Faculty Navigating Institutional Waters: Suggestions for Bottom-Up Design of Online Programs
ERIC Educational Resources Information Center
Ferdig, Richard E.; Dawson, Kara
2006-01-01
Many faculty make the mistake of trying to start with an online degree. Administration, administrative policies and even other faculty are not necessarily ready for completely online programs. Large-scale programs are risky in the eyes of administration. Putting a program online will often involve decisions at multiple levels, months for business…
ERIC Educational Resources Information Center
Liu, Shiang-Yao; Yeh, Shin-Cheng; Liang, Shi-Wu; Fang, Wei-Ta; Tsai, Huei-Min
2015-01-01
Taiwan's government enacted the Environmental Education Act in June 2011. In the beginning of the implementation of the Act, a national assessment of schoolteachers' environmental literacy was performed in order to establish the baseline for evaluating the effectiveness of environmental education policy. This large-scale assessment involved a…
A Year of Progress in School-to-Career System Building. The Benchmark Communities Initiative.
ERIC Educational Resources Information Center
Martinez, Martha I.; And Others
This document examines the first year of Jobs for the Future's Benchmark Communities Initiative (BCI), a 5-year effort to achieve the following: large-scale systemic restructuring of K-16 educational systems; involvement of significant numbers of employers in work and learning partnerships; and development of the infrastructure necessary to…
AFRL/Cornell Information Assurance Institute
2007-03-01
…collaborations involving Cornell and AFRL researchers, with AFRL researchers able to participate in Cornell research projects, facilitating technology …approach to developing a science base and technology for supporting large-scale reliable distributed systems. First, solutions to core problems were…
Examining What We Mean by "Collaboration" in Collaborative Action Research: A Cross-Case Analysis
ERIC Educational Resources Information Center
Bruce, Catherine D.; Flynn, Tara; Stagg-Peterson, Shelley
2011-01-01
The purpose of this paper is to report on the nature of collaboration in a multi-year, large-scale collaborative action research project in which a teachers' federation (in Ontario, Canada), university researchers and teachers partnered to investigate teacher-selected topics for inquiry. Over two years, 14 case studies were generated involving six…
Basin-Scale Hydrologic Impacts of CO2 Storage: Regulatory and Capacity Implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birkholzer, J.T.; Zhou, Q.
Industrial-scale injection of CO2 into saline sedimentary basins will cause large-scale fluid pressurization and migration of native brines, which may affect valuable groundwater resources overlying the deep sequestration reservoirs. In this paper, we discuss how such basin-scale hydrologic impacts can (1) affect regulation of CO2 storage projects and (2) reduce current storage capacity estimates. Our assessment arises from a hypothetical future carbon sequestration scenario in the Illinois Basin, which involves twenty individual CO2 storage projects in a core injection area suitable for long-term storage. Each project is assumed to inject five million tonnes of CO2 per year for 50 years. A regional-scale three-dimensional simulation model was developed for the Illinois Basin that captures both the local-scale CO2-brine flow processes and the large-scale groundwater flow patterns in response to CO2 storage. The far-field pressure buildup predicted for this selected sequestration scenario suggests that (1) the area that needs to be characterized in a permitting process may comprise a very large region within the basin if reservoir pressurization is considered, and (2) permits cannot be granted on a single-site basis alone because the near- and far-field hydrologic response may be affected by interference between individual sites. Our results also support recent studies in that environmental concerns related to near-field and far-field pressure buildup may be a limiting factor on CO2 storage capacity. In other words, estimates of storage capacity, if solely based on the effective pore volume available for safe trapping of CO2, may have to be revised based on assessments of pressure perturbations and their potential impact on caprock integrity and groundwater resources, respectively. We finally discuss some of the challenges in making reliable predictions of large-scale hydrologic impacts related to CO2 sequestration projects.
Hunt, Geoffrey; Moloney, Molly; Fazio, Adam
2012-01-01
Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079
Soil organic carbon across scales.
O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B
2015-10-01
Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management.
A process improvement model for software verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John; Sabolish, George
1994-01-01
We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
Beta decay rates of neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel
2015-10-01
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.
Systematic methods for defining coarse-grained maps in large biomolecules.
Zhang, Zhiyong
2015-01-01
Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
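A minimal sketch of the PCA step that the ED-CG scheme builds on, characterizing essential dynamics from an aligned structural ensemble, may help make the idea concrete. This is not the authors' ED-CG code; the array shapes, function name, and synthetic trajectory below are illustrative assumptions.

```python
# Hedged sketch: PCA on an aligned structural ensemble, the "essential
# dynamics" step that ED-CG coarse-grains against. Shapes and data are
# illustrative, not taken from the chapter.
import numpy as np

def essential_dynamics(coords):
    """coords: (n_frames, n_atoms, 3) aligned Cartesian coordinates.
    Returns eigenvalues/eigenvectors of the covariance matrix, sorted so
    the largest-amplitude ("essential") modes come first."""
    n_frames = coords.shape[0]
    X = coords.reshape(n_frames, -1).astype(float)
    X -= X.mean(axis=0)                     # remove the mean structure
    cov = X.T @ X / (n_frames - 1)          # 3N x 3N covariance matrix
    evals, evecs = np.linalg.eigh(cov)      # symmetric eigendecomposition
    order = np.argsort(evals)[::-1]         # descending variance
    return evals[order], evecs[:, order]

rng = np.random.default_rng(0)
traj = rng.normal(size=(1000, 50, 3))       # toy 50-atom trajectory
evals, _ = essential_dynamics(traj)
print("variance in first 5 modes:", evals[:5].sum() / evals.sum())
```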
Modelling Pulsar Glitches: The Hydrodynamics of Superfluid Vortex Avalanches in Neutron Stars
NASA Astrophysics Data System (ADS)
Khomenko, V.; Haskell, B.
2018-05-01
The dynamics of quantised vorticity in neutron star interiors is at the heart of most pulsar glitch models. However, the large number of vortices (up to ≈10¹³) involved in a glitch and the huge disparity in scales between the femtometre scale of vortex cores and the kilometre scale of the star make quantum dynamical simulations of the problem computationally intractable. In this paper, we take a first step towards developing a mean field prescription to include the dynamics of vortices in large-scale hydrodynamical simulations of superfluid neutron stars. We consider a one-dimensional setup and show that vortex accumulation and differential rotation in the neutron superfluid lead to propagating waves, or 'avalanches', as solutions for the equations of motion for the superfluid velocities. We introduce an additional variable, the fraction of free vortices, and test different prescriptions for its advection with the superfluid flow. We find that the new terms lead to solutions with a linear component in the rise of a glitch, and that, in specific setups, they can give rise to glitch precursors and even to decreases in frequency, or 'anti-glitches'.
True polar wander on Europa from global-scale small-circle depressions.
Schenk, Paul; Matsuyama, Isamu; Nimmo, Francis
2008-05-15
The tectonic patterns and stress history of Europa are exceedingly complex and many large-scale features remain unexplained. True polar wander, involving reorientation of Europa's floating outer ice shell about the tidal axis with Jupiter, has been proposed as a possible explanation for some of the features. This mechanism is possible if the icy shell is latitudinally variable in thickness and decoupled from the rocky interior. It would impose high stress levels on the shell, leading to predictable fracture patterns. No satisfactory match to global-scale features has hitherto been found for polar wander stress patterns. Here we describe broad arcuate troughs and depressions on Europa that do not fit other proposed stress mechanisms in their current position. Using imaging from three spacecraft, we have mapped two global-scale organized concentric antipodal sets of arcuate troughs up to hundreds of kilometres long and 300 m to approximately 1.5 km deep. An excellent match to these features is found with stresses caused by an episode of approximately 80 degrees true polar wander. These depressions also appear to be geographically related to other large-scale bright and dark lineaments, suggesting that many of Europa's tectonic patterns may also be related to true polar wander.
Implementation of highly parallel and large scale GW calculations within the OpenAtom software
NASA Astrophysics Data System (ADS)
Ismail-Beigi, Sohrab
The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960-97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
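The first-difference procedure described above is simple enough to sketch in a few lines. The following is a hedged illustration, not the authors' implementation: station data are synthetic, NaN stands in for values dropped around suspected discontinuities, and the reconstruction fixes an arbitrary zero point in the first year.

```python
# Hedged sketch of the "first difference" method: difference each station's
# series year-to-year, average the differences into a large-scale mean,
# then rebuild a mean series by cumulative summation.
import numpy as np

def first_difference_mean(series):
    """series: (n_stations, n_years) array for one calendar month;
    NaN marks data dropped around suspected discontinuities."""
    diffs = series[:, 1:] - series[:, :-1]        # year-to-year differences
    mean_diff = np.nanmean(diffs, axis=0)         # large-scale mean of diffs
    # Rebuild an anomaly series (arbitrary zero point in the first year).
    return np.concatenate([[0.0], np.nancumsum(mean_diff)])

rng = np.random.default_rng(1)
temps = rng.normal(0.0, 0.5, size=(500, 38))      # 500 stations, 1960-97
temps[rng.random(temps.shape) < 0.05] = np.nan    # gaps near discontinuities
print(first_difference_mean(temps)[:5])
```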
NASA Astrophysics Data System (ADS)
Fanget, Alain
2009-06-01
Many authors claim that to understand the response of a propellant, specifically under quasi-static and dynamic loading, the mesostructural morphology and the mechanical behaviour of each of its components have to be known. However, the scale of the mechanical description of the behaviour of a propellant is relative to its heterogeneities and to the wavelength of the loading: the shorter the wavelength, the more important the topological description of the material becomes. In our problems, involving the safety of energetic materials, the propellant can be subjected to a large spectrum of loadings. This presentation is divided into five parts. The first part describes the processes used to extract information about the morphology of the meso-structure of the material and presents some results; the results, difficulties and perspectives for this part are recalled. The second part determines, from experimental results, the physical processes involved at this scale. Taking into account the knowledge of the morphology, two ways have been chosen to describe the response of the material. One concerns quasi-static loading, the object of the third part, in which we show how we use the mesoscopic scale as a base of development to build constitutive models. The fourth part presents, for low-amplitude but dynamic loading, a comparison between numerical analysis and experiments.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
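As a rough illustration of the quality-control-chart side of this approach (the paper also uses model-based and time-series techniques), the sketch below flags abrupt shifts in administration-level mean scale scores and estimates lag-1 autocorrelation. The thresholds, scores, and injected shift are invented for the example, not drawn from the paper.

```python
# Hedged sketch: a Shewhart-style check plus a lag-1 autocorrelation
# estimate on a sequence of administration-level mean scale scores.
import numpy as np

def shewhart_flags(scores, k=3.0):
    """Flag administrations whose mean departs more than k sigma
    from the historical center line."""
    center, sigma = scores.mean(), scores.std(ddof=1)
    return np.abs(scores - center) > k * sigma

def lag1_autocorrelation(scores):
    x = scores - scores.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

rng = np.random.default_rng(2)
means = rng.normal(500, 5, size=71)   # 71 administrations, as in the study
means[40] += 25                       # inject an abrupt shift
print("flagged:", np.where(shewhart_flags(means))[0])
print("lag-1 autocorrelation:", round(lag1_autocorrelation(means), 3))
```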
NASA Technical Reports Server (NTRS)
Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.
1986-01-01
Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.
Saura, Santiago; Rondinini, Carlo
2016-01-01
One of the biggest challenges in large-scale conservation is quantifying connectivity at broad geographic scales and for a large set of species. Because connectivity analyses can be computationally intensive, and the planning process quite complex when multiple taxa are involved, assessing connectivity at large spatial extents for many species often turns out to be intractable. This limitation means that assessments are often partial, focusing on a few key species only, or generic, considering a range of dispersal distances and a fixed set of areas to connect that are not directly linked to the actual spatial distribution or mobility of particular species. Using a graph theory framework, here we propose an approach to reduce computational effort and effectively consider large assemblages of species in obtaining multi-species connectivity priorities. We demonstrate the potential of the approach by identifying defragmentation priorities in the Italian road network, focusing on medium and large terrestrial mammals. We show that by combining probabilistic species graphs prior to conducting the network analysis (i) it is possible to analyse connectivity once for all species simultaneously, obtaining conservation or restoration priorities that apply to the entire species assemblage; and (ii) those priorities are well aligned with the ones that would be obtained by aggregating the results of separate connectivity analyses for each of the individual species. This approach offers great opportunities to extend connectivity assessments to large assemblages of species and broad geographic scales. PMID:27768718
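The key computational move, combining probabilistic species graphs before a single network analysis, can be sketched as a weighted average of per-species link-probability matrices. This is an illustrative reading of the approach, not the authors' code; the node counts, weights, and the simple link-ranking step are assumptions.

```python
# Hedged sketch: average per-species link (dispersal) probabilities between
# habitat nodes into one assemblage-level graph, analysed once for all
# species. Synthetic data throughout.
import numpy as np

def combine_species_graphs(prob_matrices, weights=None):
    """prob_matrices: (n_species, n_nodes, n_nodes) link probabilities.
    Returns one assemblage-level probability matrix."""
    P = np.asarray(prob_matrices, dtype=float)
    w = np.ones(P.shape[0]) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, P, axes=1)   # weighted mean across species

rng = np.random.default_rng(3)
species_graphs = rng.random((20, 30, 30))   # 20 species, 30 habitat nodes
combined = combine_species_graphs(species_graphs)
# Rank candidate links (e.g., road crossings to defragment) by combined
# probability -- a stand-in for the full graph-theoretic prioritization.
i, j = np.unravel_index(np.argmax(combined), combined.shape)
print("highest-priority link:", i, j, round(combined[i, j], 3))
```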
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread; however, large-scale studies involving 10,000s of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study enrolling 11,002 community-dwelling adults in Japan, and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites; however, its reliability for large-scale measurement is unclear. The aim of this study is to examine reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in the future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples was completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, the coefficient of variation was less than 20% for 64 metabolites, and less than 30% for 80 metabolites out of the 94 metabolites. Inter-batch coefficient of variation was less than 20% for 81 metabolites. Estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. Compared to published data from other large cohort measurement platforms, reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
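Two of the reliability metrics reported above, the coefficient of variation of repeated QC injections and a technical intraclass correlation coefficient, can be computed as in the hedged sketch below. The simulated data and the simple one-way ICC formula are illustrative; the study's batch structure is more elaborate.

```python
# Hedged sketch: CV of repeated QC injections and a one-way random-effects
# technical ICC (between-sample variance over total variance).
import numpy as np

def coefficient_of_variation(x):
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def icc_oneway(measurements):
    """measurements: (n_samples, n_replicates)."""
    n, k = measurements.shape
    grand = measurements.mean()
    ms_between = k * ((measurements.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((measurements - measurements.mean(axis=1, keepdims=True)) ** 2
                 ).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(4)
qc = rng.normal(100, 8, size=840)                  # repeated QC injections
dup = rng.normal(50, 10, size=(200, 1)) + rng.normal(0, 3, size=(200, 2))
print("QC CV (%):", round(coefficient_of_variation(qc), 1))
print("technical ICC:", round(icc_oneway(dup), 2))
```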
Caxaj, C Susana; Berman, Helene; Ray, Susan L; Restoule, Jean-Paul; Varcoe, Coleen
2014-11-01
The influence of large-scale mining on the psychosocial wellbeing and mental health of diverse Indigenous communities has attracted increased attention. In previous reports, we have discussed the influence of a gold mining operation on the health of a community in the Western highlands of Guatemala. Here, we discuss the community strengths, and acts of resistance of this community, that is, community processes that promoted mental health amidst this context. Using an anti-colonial narrative methodology that incorporated participatory action research principles, we developed a research design in collaboration with community leaders and participants. Data collection involved focus groups, individual interviews and photo-sharing with 54 men and women between the ages of 18 and 67. Data analysis was guided by iterative and ongoing conversations with participants and McCormack's narrative lenses. Study findings revealed key mechanisms and sources of resistance, including a shared cultural identity, a spiritual knowing and being, 'defending our rights, defending our territory,' and, speaking truth to power. These overlapping strengths were identified by participants as key protective factors in facing challenges and adversity. Yet ultimately, these same strengths were often the most eroded or endangered due the influence of large-scale mining operations in the region. These community strengths and acts of resistance reveal important priorities for promoting mental health and wellbeing for populations impacted by large-scale mining operations. Mental health practitioners must attend to both the strengths and parallel vulnerabilities that may be occasioned by large-scale projects of this nature.
Power suppression at large scales in string inflation
NASA Astrophysics Data System (ADS)
Cicoli, Michele; Downes, Sean; Dutta, Bhaskar
2013-12-01
We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series for a large scale and a small scale, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of boundary points and the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
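A minimal sketch of the three steps (transform, blend, invert) using discrete Fourier descriptors of a closed boundary is given below. It assumes both outlines are resampled to the same number of points with matched start points; the shapes and blending weight are illustrative, and the paper's model may differ in detail.

```python
# Hedged sketch: Fourier-descriptor morphing between a detailed (large-scale)
# and a simplified (small-scale) outline of the same closed feature.
import numpy as np

def morph(boundary_a, boundary_b, t):
    """boundary_*: (n, 2) boundary points, same n and start point.
    t in [0, 1]: 0 returns shape A, 1 returns shape B."""
    za = boundary_a[:, 0] + 1j * boundary_a[:, 1]   # complex boundary signal
    zb = boundary_b[:, 0] + 1j * boundary_b[:, 1]
    Fa, Fb = np.fft.fft(za), np.fft.fft(zb)
    Fm = (1 - t) * Fa + t * Fb                      # blend the Fourier series
    zm = np.fft.ifft(Fm)                            # invert back to points
    return np.column_stack([zm.real, zm.imag])

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
detailed = np.column_stack([(1 + 0.2 * np.sin(8 * theta)) * np.cos(theta),
                            (1 + 0.2 * np.sin(8 * theta)) * np.sin(theta)])
simplified = np.column_stack([np.cos(theta), np.sin(theta)])
halfway = morph(detailed, simplified, 0.5)          # intermediate-scale shape
print(halfway[:2])
```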
Mathematical and Computational Challenges in Population Biology and Ecosystems Science
NASA Technical Reports Server (NTRS)
Levin, Simon A.; Grenfell, Bryan; Hastings, Alan; Perelson, Alan S.
1997-01-01
Mathematical and computational approaches provide powerful tools in the study of problems in population biology and ecosystems science. The subject has a rich history intertwined with the development of statistics and dynamical systems theory, but recent analytical advances, coupled with the enhanced potential of high-speed computation, have opened up new vistas and presented new challenges. Key challenges involve ways to deal with the collective dynamics of heterogeneous ensembles of individuals, and to scale from small spatial regions to large ones. The central issues, understanding how detail at one scale makes its signature felt at other scales and how to relate phenomena across scales, cut across scientific disciplines and go to the heart of algorithmic development of approaches to high-speed computation. Examples are given from ecology, genetics, epidemiology, and immunology.
Scale-independent inflation and hierarchy generation
Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.
2016-10-20
We discuss models involving two scalar fields coupled to classical gravity that satisfy the general criteria: (i) the theory has no mass input parameters; (ii) classical scale symmetry is broken only through $-\frac{1}{12}\varsigma \phi^2 R$ couplings, where $\varsigma$ departs from the special conformal value of $1$; (iii) the Planck mass is dynamically generated by the vacuum expectation values (VEVs) of the scalars; (iv) there is a stage of viable inflation associated with slow roll in the two-scalar potential; (v) the final vacuum has a small to vanishing cosmological constant and a hierarchically small ratio of the VEVs and of the scalar masses to the Planck scale. In addition, this assumes the paradigm of classical scale symmetry as a custodial symmetry of large hierarchies.
Characterizing the cancer genome in lung adenocarcinoma
Weir, Barbara A.; Woo, Michele S.; Getz, Gad; Perner, Sven; Ding, Li; Beroukhim, Rameen; Lin, William M.; Province, Michael A.; Kraja, Aldi; Johnson, Laura A.; Shah, Kinjal; Sato, Mitsuo; Thomas, Roman K.; Barletta, Justine A.; Borecki, Ingrid B.; Broderick, Stephen; Chang, Andrew C.; Chiang, Derek Y.; Chirieac, Lucian R.; Cho, Jeonghee; Fujii, Yoshitaka; Gazdar, Adi F.; Giordano, Thomas; Greulich, Heidi; Hanna, Megan; Johnson, Bruce E.; Kris, Mark G.; Lash, Alex; Lin, Ling; Lindeman, Neal; Mardis, Elaine R.; McPherson, John D.; Minna, John D.; Morgan, Margaret B.; Nadel, Mark; Orringer, Mark B.; Osborne, John R.; Ozenberger, Brad; Ramos, Alex H.; Robinson, James; Roth, Jack A.; Rusch, Valerie; Sasaki, Hidefumi; Shepherd, Frances; Sougnez, Carrie; Spitz, Margaret R.; Tsao, Ming-Sound; Twomey, David; Verhaak, Roel G. W.; Weinstock, George M.; Wheeler, David A.; Winckler, Wendy; Yoshizawa, Akihiko; Yu, Soyoung; Zakowski, Maureen F.; Zhang, Qunyuan; Beer, David G.; Wistuba, Ignacio I.; Watson, Mark A.; Garraway, Levi A.; Ladanyi, Marc; Travis, William D.; Pao, William; Rubin, Mark A.; Gabriel, Stacey B.; Gibbs, Richard A.; Varmus, Harold E.; Wilson, Richard K.; Lander, Eric S.; Meyerson, Matthew
2008-01-01
Somatic alterations in cellular DNA underlie almost all human cancers [1]. The prospect of targeted therapies [2] and the development of high-resolution, genome-wide approaches [3-8] are now spurring systematic efforts to characterize cancer genomes. Here we report a large-scale project to characterize copy-number alterations in primary lung adenocarcinomas. By analysis of a large collection of tumors (n = 371) using dense single nucleotide polymorphism arrays, we identify a total of 57 significantly recurrent events. We find that 26 of 39 autosomal chromosome arms show consistent large-scale copy-number gain or loss, of which only a handful have been linked to a specific gene. We also identify 31 recurrent focal events, including 24 amplifications and 7 homozygous deletions. Only six of these focal events are currently associated with known mutations in lung carcinomas. The most common event, amplification of chromosome 14q13.3, is found in ~12% of samples. On the basis of genomic and functional analyses, we identify NKX2-1 (NK2 homeobox 1, also called TITF1), which lies in the minimal 14q13.3 amplification interval and encodes a lineage-specific transcription factor, as a novel candidate proto-oncogene involved in a significant fraction of lung adenocarcinomas. More generally, our results indicate that many of the genes that are involved in lung adenocarcinoma remain to be discovered. PMID:17982442
Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin
2016-04-19
Recent advances in mass spectrometers which have yielded higher resolution and faster scanning speeds have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled, parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performances of PRM and MS1-based assays in Q-Exactive were compared, and the MRM assay in QTRAP 6500 was also compared. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification and also showed greater flexibility in postacquisition assay refinement than the MRM assay in QTRAP 6500. We present a workflow for convenient PRM data processing using Skyline software which is free of charge. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for evaluation of large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics study.
Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.
Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi
2017-01-01
Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
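For orientation, the classical Good-Toulmin estimator that the method builds on can be written in a few lines. The sketch below implements only the alternating series, which diverges for extrapolations beyond doubling the sample; the rational function approximations that fix this live in the authors' R package and are not reproduced here.

```python
# Hedged sketch of the Good-Toulmin estimator: given n_j, the number of
# species observed exactly j times, the expected number of NEW species in
# an extrapolated sample of size (1 + t) times the original is an
# alternating power series in t (valid for t <= 1).
from collections import Counter

def good_toulmin(observations, t):
    """observations: iterable of species labels in the initial sample."""
    counts = Counter(Counter(observations).values())   # j -> n_j
    return sum((-1) ** (j + 1) * (t ** j) * n_j
               for j, n_j in counts.items())

sample = list("aaabbcddddeefgh")       # toy species labels
print(good_toulmin(sample, 1.0))       # new species expected on doubling
```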
Direction of information flow in large-scale resting-state networks is frequency-dependent.
Hillebrand, Arjan; Tewarie, Prejaas; van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A; van Straaten, Elisabeth C W; Stam, Cornelis J
2016-04-05
Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these interactions, where directionality was inferred from time series of beamformer-reconstructed estimates of neuronal activation, using a recently proposed measure of phase transfer entropy. We observed well-organized posterior-to-anterior patterns of information flow in the higher-frequency bands (alpha1, alpha2, and beta band), dominated by regions in the visual cortex and posterior default mode network. Opposite patterns of anterior-to-posterior flow were found in the theta band, involving mainly regions in the frontal lobe that were sending information to a more distributed network. Many strong information senders in the theta band were also frequent receivers in the alpha2 band, and vice versa. Our results provide evidence that large-scale resting-state patterns of information flow in the human brain form frequency-dependent reentry loops that are dominated by flow from parieto-occipital cortex to integrative frontal areas in the higher-frequency bands, which is mirrored by a theta band anterior-to-posterior flow.
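To give a flavor of the directionality measure, the sketch below estimates a crude phase transfer entropy between two signals from binned Hilbert-transform phases. The bin count, delay, and toy signals are assumptions, and the published measure (and its beamformer source-reconstruction pipeline) is considerably more careful; this is purely illustrative.

```python
# Hedged sketch: a coarse phase transfer entropy from x to y, computed
# from histogram estimates of binned instantaneous phases.
import numpy as np
from scipy.signal import hilbert

def _bin_phase(sig, bins):
    phase = np.angle(hilbert(sig))
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    return np.clip(np.digitize(phase, edges) - 1, 0, bins - 1)

def phase_transfer_entropy(x, y, delay=1, bins=8):
    """Crude TE from the phase of x to the phase of y (in nats)."""
    bx, by = _bin_phase(x, bins), _bin_phase(y, bins)
    a, b, c = by[delay:], by[:-delay], bx[:-delay]  # y future, y past, x past
    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (a, b, c), 1.0)
    joint /= joint.sum()
    p_bc = joint.sum(axis=0)             # p(y past, x past)
    p_ab = joint.sum(axis=2)             # p(y future, y past)
    p_b = joint.sum(axis=(0, 2))         # p(y past)
    i, j, k = np.nonzero(joint > 0)
    p = joint[i, j, k]
    return float(np.sum(p * np.log(p * p_b[j] / (p_ab[i, j] * p_bc[j, k]))))

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(4096)   # y lags x by 3 samples
print("TE x->y:", round(phase_transfer_entropy(x, y), 4))
print("TE y->x:", round(phase_transfer_entropy(y, x), 4))
```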
Alternative Splicing of CHEK2 and Codeletion with NF2 Promote Chromosomal Instability in Meningioma
Yang, Hong Wei; Kim, Tae-Min; Song, Sydney S; Shrinath, Nihal; Park, Richard; Kalamarides, Michel; Park, Peter J; Black, Peter M; Carroll, Rona S; Johnson, Mark D
2012-01-01
Mutations of the NF2 gene on chromosome 22q are thought to initiate tumorigenesis in nearly 50% of meningiomas, and 22q deletion is the earliest and most frequent large-scale chromosomal abnormality observed in these tumors. In aggressive meningiomas, 22q deletions are generally accompanied by the presence of large-scale segmental abnormalities involving other chromosomes, but the reasons for this association are unknown. We find that large-scale chromosomal alterations accumulate during meningioma progression primarily in tumors harboring 22q deletions, suggesting 22q-associated chromosomal instability. Here we show frequent codeletion of the DNA repair and tumor suppressor gene, CHEK2, in combination with NF2 on chromosome 22q in a majority of aggressive meningiomas. In addition, tumor-specific splicing of CHEK2 in meningioma leads to decreased functional Chk2 protein expression. We show that enforced Chk2 knockdown in meningioma cells decreases DNA repair. Furthermore, Chk2 depletion increases centrosome amplification, thereby promoting chromosomal instability. Taken together, these data indicate that alternative splicing and frequent codeletion of CHEK2 and NF2 contribute to the genomic instability and associated development of aggressive biologic behavior in meningiomas. PMID:22355270
Density-dependent clustering: I. Pulling back the curtains on motions of the BAO peak
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.; Szapudi, István; McCullagh, Nuala; Szalay, Alexander S.; Falck, Bridget; Wang, Jie
2018-05-01
The most common statistic used to analyze large-scale structure surveys is the correlation function, or power spectrum. Here, we show how 'slicing' the correlation function on local density brings sensitivity to interesting non-Gaussian features in the large-scale structure, such as the expansion or contraction of baryon acoustic oscillations (BAO) according to the local density. The sliced correlation function measures the large-scale flows that smear out the BAO, instead of just correcting them as reconstruction algorithms do. Thus, we expect the sliced correlation function to be useful in constraining the growth factor, and modified gravity theories that involve the local density. Out of the studied cases, we find that the run of the BAO peak location with density is best revealed when slicing on a ~40 h⁻¹ Mpc filtered density. But slicing on a ~100 h⁻¹ Mpc filtered density may be most useful in distinguishing between underdense and overdense regions, whose BAO peaks are separated by a substantial ~5 h⁻¹ Mpc at z = 0. We also introduce 'curtain plots' showing how local densities drive particle motions toward or away from each other over the course of an N-body simulation.
Geomorphic analysis of large alluvial rivers
NASA Astrophysics Data System (ADS)
Thorne, Colin R.
2002-05-01
Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
Joseph A. Tainter; Bonnie Bagley Tainter
1996-01-01
Ecosystem management should be based on the fullest possible knowledge of ecological structures and processes. In prehistoric North America, the involvement of Indian populations in ecosystem processes ranged from inadvertent alteration of the distribution and abundance of species to large-scale management of landscapes. The knowledge needed to manage ecosystems today...
ERIC Educational Resources Information Center
Towndrow, Phillip; Kwek, Dennis Beng Kiat
2017-01-01
Set against the backdrop of reinvigorating the study of literature and concerns about the adequate preparation of students for the world of work, this paper explores how a Singapore teacher presented a literary text in the classroom. Drawing on data from a large-scale representative sample of Singapore schools in instruction and assessment…
ERIC Educational Resources Information Center
Perfect, Timothy J.; Weber, Nathan
2012-01-01
Explorations of memory accuracy control normally contrast forced-report with free-report performance across a set of items and show a trade-off between memory quantity and accuracy. However, this memory control framework has not been tested with lineup identifications that may involve rejection of all alternatives. A large-scale (N = 439) lineup…
ERIC Educational Resources Information Center
Behizadeh, Nadia; Engelhard, George, Jr.
2015-01-01
In his focus article, Koretz (this issue) argues that accountability has become the primary function of large-scale testing in the United States. He then points out that tests being used for accountability purposes are flawed and that the high-stakes nature of these tests creates a context that encourages score inflation. Koretz is concerned about…
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
Interdisciplinary Team Science in Cell Biology.
Horwitz, Rick
2016-11-01
The cell is complex. With its multitude of components, spatial-temporal character, and gene expression diversity, it is challenging to comprehend the cell as an integrated system and to develop models that predict its behaviors. I suggest an approach to address this issue, involving system-level data analysis, large-scale team science, and philanthropy.
ERIC Educational Resources Information Center
Hadfield, Mark; Jopling, Michael
2014-01-01
This paper discusses the development of a model targeted at non-specialist practitioners implementing innovations that involve information and communication technology (ICT) in education. It is based on data from a national evaluation of ICT-based projects in initial teacher education, which included a large-scale questionnaire survey and six…
Callie Jo Schweitzer; Kurt W. Gottschalk; Jeff W. Stringer; Stacy L. Clark; David L. Loftis
2011-01-01
We used a large-scale silvicultural assessment designed to examine the efficacy of five stand-level prescriptions in reducing the potential impacts of gypsy moth infestations and oak decline on upland hardwood forests in Kentucky's Daniel Boone National Forest. Prescriptions involved a mix of intermediate stand treatments aimed at increasing residual tree vigor...
Robert E. Keane; Rachel A. Loehman; Lisa M. Holsinger
2011-01-01
Fire management faces important emergent issues in the coming years such as climate change, fire exclusion impacts, and wildland-urban development, so new, innovative means are needed to address these challenges. Field studies, while preferable and reliable, will be problematic because of the large time and space scales involved. Therefore, landscape simulation...
Callie J. Schweitzer; John A. Stanturf; James P. Shepard; Timothy M. Wilkins; C. Jeffery Portwood; Lamar C., Jr. Dorris
1997-01-01
In the Lower Mississippi Alluvial Valley (LMAV), restoring bottomland hardwood forests has attracted heightened interest. The impetus involves not only environmental and aesthetic benefits, but also sound economics. Financial incentives to restore forested wetlands in the LMAV can come from federal cost share programs such as the Conservation Reserve Program and the...
Visual Culture Learning Communities: How and What Students Come to Know in Informal Art Groups
ERIC Educational Resources Information Center
Freedman, Kerry; Heijnen, Emiel; Kallio-Tavin, Mira; Karpati, Andrea; Papp, Laszlo
2013-01-01
This article is the report of a large-scale, international research project involving focus group interviews of adolescent and young adult members of a variety of self-initiated visual culture groups in five urban areas (Amsterdam, Budapest, Chicago, Helsinki, and Hong Kong). Each group was established by young people around their interests in the…
Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems
NASA Astrophysics Data System (ADS)
Sikkandar Basha, Nazareen
The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements capture the preferences of the stakeholders for the LSCES. Because of the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The requirements elicitation of most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, thus increasing cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before the rigorous approaches can be applied, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making, minimizing the value gap due to requirements creep by eliminating the ambiguity that arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep, in terms of cost and time, using the Cynefin framework. These trials are repeated for different requirements and at different sub-system levels. The results show that the Cynefin framework can be used to improve the value of the system and for predictive analysis. Decision makers can use these findings together with rigorous approaches to improve the design of Large-Scale Complex Engineered Systems.
Controlling high-throughput manufacturing at the nano-scale
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2013-09-01
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
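As a hedged illustration of the closed-loop control idea, generic and not tied to any particular nanomanufacturing tool, the sketch below shows in-situ measurements feeding a discrete-time proportional-integral controller that drives an actuator toward a setpoint; all names, gains, and the toy process model are hypothetical.

```python
# Minimal sketch of a cyber-physical control loop: sensor -> controller -> actuator.
# All names, gains, and the first-order process model are hypothetical illustrations.

def pi_controller(setpoint, kp=0.8, ki=0.2, dt=0.1):
    """Discrete-time PI controller; returns a control action per measurement."""
    integral = 0.0
    def step(measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt
        return kp * error + ki * integral
    return step

# Toy first-order process standing in for the physical system being imprinted.
depth = 0.0
controller = pi_controller(setpoint=100.0)   # target feature depth, arbitrary units
for _ in range(50):
    actuation = controller(depth)            # in-situ metrology feeds the controller
    depth += 0.1 * actuation                 # actuator nudges the process variable
print(round(depth, 2))                       # approaches the 100.0 setpoint
```

In a real CPS deployment the toy process line would be replaced by the physical tool, with the physics-based model used to choose gains and to detect drift between predicted and measured behavior.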
Structural and electron diffraction scaling of twisted graphene bilayers
NASA Astrophysics Data System (ADS)
Zhang, Kuan; Tadmor, Ellad B.
2018-03-01
Multiscale simulations are used to study the structural relaxation in twisted graphene bilayers and the associated electron diffraction patterns. The initial twist forms an incommensurate moiré pattern that relaxes to a commensurate microstructure comprised of a repeating pattern of alternating low-energy AB and BA domains surrounding a high-energy AA domain. The simulations show that the relaxation mechanism involves a localized rotation and shrinking of the AA domains that scales in two regimes with the imposed twist. For small twisting angles, the localized rotation tends to a constant; for large twist, the rotation scales linearly with it. This behavior is tied to the inverse scaling of the moiré pattern size with twist angle and is explained theoretically using a linear elasticity model. The results are validated experimentally through a simulated electron diffraction analysis of the relaxed structures. A complex electron diffraction pattern involving the appearance of weak satellite peaks is predicted for the small twist regime. This new diffraction pattern is explained using an analytical model in which the relaxation kinematics are described as an exponentially-decaying (Gaussian) rotation field centered on the AA domains. Both the angle-dependent scaling and diffraction patterns are in quantitative agreement with experimental observations. A Matlab program for extracting the Gaussian model parameters accompanies this paper.
Musical expertise is related to altered functional connectivity during audiovisual integration
Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo
2015-01-01
The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources' activity, and the corresponding networks were statistically compared. Nonmusicians' results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians' results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305
Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows
NASA Technical Reports Server (NTRS)
Blaisdell, Gregory A.
1996-01-01
The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project, results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies
NASA Astrophysics Data System (ADS)
Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine
2007-03-01
We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay the time course of the outbreak slightly without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
Winfree, Rachael; Fox, Jeremy W; Williams, Neal M; Reilly, James R; Cariveau, Daniel P
2015-07-01
Biodiversity-ecosystem functioning experiments have established that species richness and composition are both important determinants of ecosystem function in an experimental context. Determining whether this result holds for real-world ecosystem services has remained elusive, however, largely due to the lack of analytical methods appropriate for large-scale, associational data. Here, we use a novel analytical approach, the Price equation, to partition the contribution to ecosystem services made by species richness, composition and abundance in four large-scale data sets on crop pollination by native bees. We found that abundance fluctuations of dominant species drove ecosystem service delivery, whereas richness changes were relatively unimportant because they primarily involved rare species that contributed little to function. Thus, the mechanism behind our results was the skewed species-abundance distribution. Our finding that a few common species, not species richness, drive ecosystem service delivery could have broad generality given the ubiquity of skewed species-abundance distributions in nature. © 2015 John Wiley & Sons Ltd/CNRS.
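For readers unfamiliar with the tool: the Price equation originates in evolutionary theory, and the ecological partition applied here adapts its logic to species gained or lost between sites. The classic evolutionary form, shown for orientation only, is:

```latex
% Classic Price equation (evolutionary form, shown for orientation only)
\Delta \bar{z} \;=\; \frac{\mathrm{Cov}(w_i, z_i)}{\bar{w}}
\;+\; \frac{\mathrm{E}(w_i \, \Delta z_i)}{\bar{w}}
```

The ecological adaptation partitions between-site differences in total function into contributions from species lost, species gained, and changes in the function of shared species, which is what allows richness, composition, and abundance effects to be separated in associational data.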
Vertically migrating swimmers generate aggregation-scale eddies in a stratified column.
Houghton, Isabel A; Koseff, Jeffrey R; Monismith, Stephen G; Dabiri, John O
2018-04-01
Biologically generated turbulence has been proposed as an important contributor to nutrient transport and ocean mixing [1-3]. However, to produce non-negligible transport and mixing, such turbulence must produce eddies at scales comparable to the length scales of stratification in the ocean. It has previously been argued that biologically generated turbulence is limited to the scale of the individual animals involved [4], which would make turbulence created by highly abundant centimetre-scale zooplankton such as krill irrelevant to ocean mixing. Their small size notwithstanding, zooplankton form dense aggregations tens of metres in vertical extent as they undergo diurnal vertical migration over hundreds of metres [3,5,6]. This behaviour potentially introduces additional length scales, such as the scale of the aggregation, that are of relevance to animal interactions with the surrounding water column. Here we show that the collective vertical migration of centimetre-scale swimmers, as represented by the brine shrimp Artemia salina, generates aggregation-scale eddies that mix a stable density stratification, resulting in an effective turbulent diffusivity up to three orders of magnitude larger than the molecular diffusivity of salt. These observed large-scale mixing eddies are the result of flow in the wakes of the individual organisms coalescing to form a large-scale downward jet during upward swimming, even in the presence of a strong density stratification relative to typical values observed in the ocean. The results illustrate the potential for marine zooplankton to considerably alter the physical and biogeochemical structure of the water column, with potentially widespread effects owing to their high abundance in climatically important regions of the ocean [7].
Assessment of dynamic closure for premixed combustion large eddy simulation
NASA Astrophysics Data System (ADS)
Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan
2015-09-01
Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width, i.e., be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically in LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insight and reasoning are provided to explain the observed behaviour.
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^{-1}∇, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
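For orientation, such Langevin calculations follow the general stochastic-inflation pattern in which the coarse-grained field is driven by short-wavelength quantum noise; a commonly quoted single-field form (not necessarily the exact system solved here) reads:

```latex
% Standard single-field stochastic inflation Langevin equation (orientation only)
\frac{d\phi}{dN} = -\frac{V'(\phi)}{3H^2} + \frac{H}{2\pi}\,\xi(N),
\qquad
\langle \xi(N)\,\xi(N') \rangle = \delta(N - N')
```

Here N is the number of e-folds, H the Hubble rate, and ξ the white noise sourced by modes crossing the Hubble radius; lattice versions evolve one such equation per spatial point with suitably correlated noise.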
The Crotone Megalandslide, southern Italy: Architecture, timing and tectonic control.
Zecchin, Massimo; Accaino, Flavio; Ceramicola, Silvia; Civile, Dario; Critelli, Salvatore; Da Lio, Cristina; Mangano, Giacomo; Prosser, Giacomo; Teatini, Pietro; Tosi, Luigi
2018-05-17
Large-scale submarine gravitational land movements involving sedimentary successions even more than 1,000 m thick are known as megalandslides. We prove the existence of large-scale gravitational phenomena off the Crotone Basin, a forearc basin located on the Ionian side of Calabria (southern Italy), using seismic, morpho-bathymetric and well data. Our study reveals that the Crotone Megalandslide started moving between the Late Zanclean and Early Piacenzian and was triggered by a contractional tectonic event leading to the basin inversion. Seaward gliding of the megalandslide continued until roughly the Late Gelasian, and then resumed since the Middle Pleistocene at a modest rate. Interestingly, the onshore part of the basin does not show a gravity-driven deformation comparable to that observed in the marine area, and this peculiar evidence allows some speculation on the origin of the megalandslide.
Solar flare activity - Evidence for large-scale changes in the past
NASA Technical Reports Server (NTRS)
Zook, H. A.; Hartung, J. B.; Storzer, D.
1977-01-01
An analysis of radar and photographic meteor data and of spacecraft meteoroid-penetration data indicates that there probably has not been a large increase in meteoroid impact rates in the last 10,000 yr. The solar-flare tracks observed in the glass linings of meteoroid impact pits on lunar rock 15205 are therefore reanalyzed assuming a meteoroid flux that is constant in time. Based on this assumption, the data suggest that the production rate of Fe-group solar-flare tracks may have varied by as much as a factor of 50 on a time scale of about 10,000 yr. No independently obtained data are known to conflict with this interpretation. Confidence in this conclusion is somewhat qualified by the experimental and analytical uncertainties involved, but it nevertheless remains the present 'best' explanation for the observed data trends.
Potential climatic impacts and reliability of large-scale offshore wind farms
NASA Astrophysics Data System (ADS)
Wang, Chien; Prinn, Ronald G.
2011-04-01
The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts, and the inherent reliability of such a large-scale usage of intermittent wind power would have to be carefully assessed, in addition to the need to lower the high current unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the later study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines which was not entirely offset by the concurrent reduction of mean wind kinetic energy. We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land-based installations. However, the intermittency caused by the significant seasonal wind variations over several major offshore sites is substantial, and demands further options to ensure the reliability of large-scale offshore wind power. The method that we used to simulate the offshore wind turbine effect on the lower atmosphere involved simply increasing the ocean surface drag coefficient. While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity. New field observations of actual wind turbine arrays are definitely required to provide ultimate validation of the model predictions presented here.
Zeng, L. W.; Singh, R. S.
1993-01-01
We have attempted to estimate the number of genes involved in postzygotic reproductive isolation between two closely related species, Drosophila simulans and Drosophila sechellia, by a novel approach that involves the use of high resolution two-dimensional gel electrophoresis (2DE) to examine testis proteins in parents, hybrids and fertile and sterile backcross progenies. The important results that have emerged from this study are as follows: (1) about 8% of about 1000 proteins examined showed divergence (presence/absence) between the two species; (2) by tracing individual proteins in parental, hybrid and backcross males, we were able to associate the divergent proteins with different chromosomes and found that most divergent proteins are associated with autosomes and very few with the X chromosome, Y chromosome and cytoplasm; (3) when proteins showing both quantitative and qualitative differences between the two species were examined in F1 hybrid males, most (97.4%) proteins were expressed at levels between the two parents and no sign of large-scale changes in spot density was observed. All the proteins observed in the two parental species were present in F1 hybrid males except two species-specific proteins that may be encoded (or regulated) by sex chromosomes; (4) when different fertile and sterile backcross male testes were compared, a few D. sechellia-specific proteins were identified to be consistently associated with male sterility. These results, along with the observation that a large proportion (23.6%) of first generation backcross males were fertile, show that hybrid male sterility between D. simulans and D. sechellia involves a relatively small number of genes. A role for large-scale genetic changes due to general genome incompatibility is not supported. The results also suggest that the large effect of the X chromosome on hybrid male sterility is not due to higher divergence of the X chromosome than autosomes. PMID:8224814
NASA Technical Reports Server (NTRS)
Castle, J. G.
1976-01-01
A literature survey is presented covering nondestructive methods of electrical characterization of semiconductors. A synopsis of each technique deals with its applicability to various device parameters and to potential in-flight use before, during, and after growth experiments on space flights. It is concluded that the very recent surge in the commercial production of large-scale integrated circuitry and other semiconductor arrays requiring uniformity on the scale of a few microns involves nondestructive test procedures that could well be useful to NASA for in-flight use in space processing.
Abebe, Gumataw K; Chalak, Ali; Abiad, Mohamad G
2017-07-01
Food safety is a key public health issue worldwide. This study aims to characterise existing governance mechanisms - governance structures (GSs) and food safety management systems (FSMSs) - and analyse their alignment in detecting food safety hazards, based on empirical evidence from Lebanon. Firm-to-firm and public baseline are the dominant FSMSs applied at large scale, while chain-wide FSMSs are observed only at small scale. Most transactions involving farmers are relational and market-based, in contrast to (large-scale) processors, which opt for hierarchical GSs. Large-scale processors use a combination of FSMSs and GSs to minimise food safety hazards, albeit with a potential increase in coordination costs; this is an important feature of modern food supply chains. The econometric analysis reveals that contract period, on-farm inspection and experience have significant effects in minimising food safety hazards. However, the potential to implement farm-level FSMSs is influenced by the formality of the contract, herd size, trading partner choice, and experience. Public baseline FSMSs appear effective in controlling food safety hazards; however, this may not be viable given the scarcity of public resources. We suggest public policies focus on long-lasting governance mechanisms by introducing incentive schemes, and on farm-level FSMSs by providing loans and education to farmers. © 2016 Society of Chemical Industry.
Cooperation, collective action, and the archeology of large-scale societies.
Carballo, David M; Feinman, Gary M
2016-11-01
Archeologists investigating the emergence of large-scale societies in the past have renewed interest in examining the dynamics of cooperation as a means of understanding societal change and organizational variability within human groups over time. Unlike earlier approaches to these issues, which used models designated voluntaristic or managerial, contemporary research articulates more explicitly with frameworks for cooperation and collective action used in other fields, thereby facilitating empirical testing through better definition of the costs, benefits, and social mechanisms associated with success or failure in coordinated group action. Current scholarship is nevertheless bifurcated along lines of epistemology and scale, which is understandable but problematic for forging a broader, more transdisciplinary field of cooperation studies. Here, we point to some areas of potential overlap by reviewing archeological research that places the dynamics of social cooperation and competition in the foreground of the emergence of large-scale societies, which we define as those having larger populations, greater concentrations of political power, and higher degrees of social inequality. We focus on key issues involving the communal-resource management of subsistence and other economic goods, as well as the revenue flows that undergird political institutions. Drawing on archeological cases from across the globe, with greater detail from our area of expertise in Mesoamerica, we offer suggestions for strengthening analytical methods and generating more transdisciplinary research programs that address human societies across scalar and temporal spectra. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (~28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
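As a hedged illustration of this kind of GEE workflow (the asset path, region bounds, dates, and band choices below are hypothetical; the calls follow the current GEE Python client, whose classifier naming postdates the study):

```python
# Minimal sketch: supervised crop classification of a multi-temporal composite
# in Google Earth Engine. Asset paths, bounds, and dates are hypothetical.
import ee

ee.Initialize()  # may require project/credential arguments in newer clients

roi = ee.Geometry.Rectangle([30.0, 50.0, 31.0, 50.7])   # hypothetical Kyiv-area bounds
composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
             .filterBounds(roi)
             .filterDate('2013-04-01', '2013-10-31')
             .median()                                   # seasonal median composite
             .clip(roi))

# Training polygons with a 'crop_class' property (hypothetical asset).
training = composite.sampleRegions(
    collection=ee.FeatureCollection('users/example/crop_training'),
    properties=['crop_class'],
    scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training,
    classProperty='crop_class',
    inputProperties=composite.bandNames())

crop_map = composite.classify(classifier)                # 30 m crop classification
```

Swapping `smileRandomForest` for other built-in classifiers (e.g. SVM or decision trees) is a one-line change, which is what makes this kind of platform-level classifier comparison cheap.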
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
The Price of Precision: Large-Scale Mapping of Forest Structure and Biomass Using Airborne Lidar
NASA Astrophysics Data System (ADS)
Dubayah, R.
2015-12-01
Lidar remote sensing provides one of the best means for acquiring detailed information on forest structure. However, its application over large areas has been limited, largely because of its expense. Nonetheless, extant data exist over many states in the U.S., funded largely by state and federal consortia, mainly for infrastructure, emergency response, flood plain and coastal mapping. These lidar data are almost always acquired in leaf-off seasons and, until recently, usually with low point count densities. Even with these limitations, they provide unprecedented wall-to-wall mappings that enable development of appropriate methodologies for large-scale deployment of lidar. In this talk we summarize our research and lessons learned in deriving forest structure over regional areas as part of NASA's Carbon Monitoring System (CMS). We focus on two areas: the entire state of Maryland and Sonoma County, California. The Maryland effort used low density, leaf-off data acquired by each county in varying epochs, while the on-going Sonoma work employs state-of-the-art, high density, wall-to-wall, leaf-on lidar data. In each area we combine these lidar coverages with high-resolution multispectral imagery from the National Agricultural Imagery Program (NAIP) and in situ plot data to produce maps of canopy height, tree cover and biomass, and compare our results against FIA plot data and national biomass maps. Our work demonstrates that large-scale mapping of forest structure at high spatial resolution is achievable, but products may be complex to produce and validate over large areas. Furthermore, fundamental issues involving statistical approaches, plot types and sizes, geolocation, modeling scales, allometry, and even the definitions of "forest" and "non-forest" must be approached carefully. Ultimately, determining the "price of precision", that is, whether the value of wall-to-wall forest structure data justifies the expense, should consider not only carbon market applications but also the other ways the underlying lidar data may be used.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments. At the time of writing, the proposed testbed represents the first implementation of a distributed large-scale multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Understanding ocean acidification impacts on organismal to ecological scales
Andersson, Andreas J; Kline, David I; Edmunds, Peter J; Archer, Stephen D; Bednaršek, Nina; Carpenter, Robert C; Chadsey, Meg; Goldstein, Philip; Grottoli, Andrea G.; Hurst, Thomas P; King, Andrew L; Kübler, Janet E.; Kuffner, Ilsa B.; Mackey, Katherine R M; Menge, Bruce A.; Paytan, Adina; Riebesell, Ulf; Schnetzer, Astrid; Warner, Mark E; Zimmerman, Richard C
2015-01-01
Ocean acidification (OA) research seeks to understand how marine ecosystems and global elemental cycles will respond to changes in seawater carbonate chemistry in combination with other environmental perturbations such as warming, eutrophication, and deoxygenation. Here, we discuss the effectiveness and limitations of current research approaches used to address this goal. A diverse combination of approaches is essential to decipher the consequences of OA to marine organisms, communities, and ecosystems. Consequently, the benefits and limitations of each approach must be considered carefully. Major research challenges involve experimentally addressing the effects of OA in the context of large natural variability in seawater carbonate system parameters and other interactive variables, integrating the results from different research approaches, and scaling results across different temporal and spatial scales.
NASA Astrophysics Data System (ADS)
Olvera de La Cruz, Monica
Polymer electrolytes have been particularly difficult to describe theoretically given the large number of disparate length scales involved in determining their physical properties. The Debye length, the Bjerrum length, the ion size, the chain length, and the distance between the charges along their backbones determine their structure and their response to external fields. We have developed an approach that uses multi-scale calculations with the capability of demonstrating the phase behavior of polymer electrolytes and of providing a conceptual understanding of how charge dictates nano-scale structure formation. Moreover, our molecular dynamics simulations have provided an understanding of the coupling of their conformation to their dynamics, which is crucial to design self-assembling materials, as well as to explore the dynamics of complex electrolytes for energy storage and conversion applications.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping system maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping system mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
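A minimal sketch of the unsupervised core of such an approach (the data shapes, unit labels, and cluster count are illustrative assumptions; the real workflow operates on object-level statistics extracted from the MODIS time series): cluster annual NDVI profiles separately within each landscape unit, then let an analyst assign clusters to cropping systems.

```python
# Landscape-stratified clustering of annual NDVI time profiles, one row per
# object (e.g., a segment). Shapes, unit labels, and k are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.random((5000, 23))                # 5000 objects x 23 16-day composites
landscape_unit = rng.integers(0, 4, size=5000)   # landscape-unit label per object

labels = np.empty(5000, dtype=int)
for unit in np.unique(landscape_unit):
    mask = landscape_unit == unit
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(profiles[mask])
    labels[mask] = km.labels_   # clusters are later mapped to cropping systems
```

Dropping the stratification loop and clustering all profiles at once gives the hyperclustering variant, which is exactly the comparison the paper draws.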
Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B
2011-09-01
Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, as well as conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
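To make the single-pass idea concrete, here is a much-simplified sketch (1-D field, union-find; not the authors' implementation): a descending sweep records the merge events of superlevel-set components, after which the features for any threshold, such as candidate "burning cells" above a temperature cutoff, can be read off in postprocessing.

```python
# Merge events of superlevel sets of a 1-D scalar field via a descending
# sweep with union-find. Simplified sketch, not the paper's implementation.
import numpy as np

def merge_pairs(f):
    """Return (birth, death) value pairs, one per non-global maximum.

    A feature defined at threshold t is any pair with birth >= t > death,
    so one pass supports every threshold choice in postprocessing.
    """
    order = np.argsort(f)[::-1]           # sweep vertices from high to low value
    parent = {}                           # union-find forest over visited vertices
    birth = {}                            # component root -> value at its maximum
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = f[i]                   # tentatively a new component (local max)
        for j in (i - 1, i + 1):          # grid neighbors in 1-D
            if j in parent:               # neighbor already above the sweep level
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = sorted((ri, rj), key=lambda r: birth[r])
                if birth[young] > f[i]:   # genuine merge of two maxima at a saddle
                    pairs.append((birth[young], f[i]))
                parent[young] = old       # elder rule: higher-born component survives
    return pairs

# Example: two bumps; the smaller one is born at 0.6 and merges at the 0.2 saddle.
signal = np.array([0.0, 1.0, 0.2, 0.6, 0.1])
print(merge_pairs(signal))                # [(0.6, 0.2)]
```

The full framework generalizes this to 3-D meshes, augments each tree node with attributes for conditional statistics, and links trees across time steps into tracking graphs.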
Transient Structures and Possible Limits of Data Recording in Phase-Change Materials.
Hu, Jianbo; Vanacore, Giovanni M; Yang, Zhe; Miao, Xiangshui; Zewail, Ahmed H
2015-07-28
Phase-change materials (PCMs) represent the leading candidates for universal data storage devices, which exploit the large difference in the physical properties of their transitional lattice structures. On a nanoscale, it is fundamental to determine their performance, which is ultimately controlled by the speed limit of transformation among the different structures involved. Here, we report observation with atomic-scale resolution of transient structures of nanofilms of crystalline germanium telluride, a prototypical PCM, using ultrafast electron crystallography. A nonthermal transformation from the initial rhombohedral phase to the cubic structure was found to occur in 12 ps. On a much longer time scale, hundreds of picoseconds, equilibrium heating of the nanofilm is reached, driving the system toward amorphization, provided that high excitation energy is invoked. These results elucidate the elementary steps defining the structural pathway in the transformation of crystalline-to-amorphous phase transitions and describe the essential atomic motions involved when driven by an ultrafast excitation. The establishment of the time scales of the different transient structures, as reported here, permits determination of the possible limit of performance, which is crucial for high-speed recording applications of PCMs.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility to extend the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities like cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions that are defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
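For orientation, the classical MsFEM construction (shown here for a generic second-order operator rather than the paper's specific structural one) defines each shape function by a local boundary value problem on the element and assembles the coarse matrices from those local solutions:

```latex
% Classical MsFEM basis (generic second-order operator; orientation only)
\nabla \cdot \big( a(\mathbf{x}) \, \nabla \phi_i^K \big) = 0 \ \ \text{in } K,
\qquad
\phi_i^K = g_i \ \ \text{on } \partial K,
\qquad
A_{ij}^K = \int_K a(\mathbf{x}) \, \nabla \phi_i^K \cdot \nabla \phi_j^K \, d\mathbf{x}
```

with g_i the nodal boundary data, for example piecewise linear between the edge nodes that trace the crack or notch geometry, so the fine-scale solution structure is baked into the coarse stiffness matrix.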
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps: conversion from vector data to Fourier series; generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale; and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the point number of the shape boundary and the interceptive value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
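A minimal sketch of these three steps (assuming both boundaries are closed, resampled to the same number of points, and aligned at a common start vertex; resampling and alignment are glossed over here):

```python
# Morphing between large-scale and small-scale outlines of the same feature
# via interpolation of complex Fourier descriptors. Assumes closed boundaries
# with equal point counts, aligned at a common start point.
import numpy as np

def morph(boundary_large, boundary_small, t):
    """Intermediate outline at parameter t in [0, 1] (0 = large scale)."""
    za = boundary_large[:, 0] + 1j * boundary_large[:, 1]  # points -> complex numbers
    zb = boundary_small[:, 0] + 1j * boundary_small[:, 1]
    ca = np.fft.fft(za) / len(za)        # step 1: vector data -> Fourier series
    cb = np.fft.fft(zb) / len(zb)
    c = (1 - t) * ca + t * cb            # step 2: combine the two series
    z = np.fft.ifft(c) * len(c)          # step 3: back to vector coordinates
    return np.column_stack([z.real, z.imag])
```

Sweeping t from 0 to 1 yields a continuous family of intermediate outlines, which is the continuous scale transformation the abstract describes.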
Dynamic Smagorinsky model on anisotropic grids
NASA Technical Reports Server (NTRS)
Scotti, A.; Meneveau, C.; Fatica, M.
1996-01-01
Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
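For reference, the dynamic procedure exercised in these tests computes the model coefficient from the resolved velocity field via the Germano identity; in the common least-squares (Lilly) form, with hats denoting the test filter and α the test-to-grid filter-width ratio (shown generically; how Δ should be defined on anisotropic grids is precisely the question studied here):

```latex
% Germano--Lilly dynamic Smagorinsky coefficient (hats denote test filtering)
L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j,
\qquad
M_{ij} = 2\Delta^2 \left( \widehat{|\bar{S}|\,\bar{S}_{ij}}
       - \alpha^2 \, |\hat{\bar{S}}| \, \hat{\bar{S}}_{ij} \right),
\qquad
C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

Angle brackets denote averaging over homogeneous directions, which stabilizes the coefficient; on pancake- and pencil-like grids the choice of the effective Δ entering M_ij controls whether the computed coefficient tracks the theoretical prediction.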
Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals
Schmittfull, Marcel; Vlah, Zvonimir
2016-11-28
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
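The core trick can be seen in the simplest loop-like term: a convolution of two power spectra over the internal momentum becomes a pointwise product in position space, leaving only one-dimensional radial transforms that FFTLog-style algorithms evaluate quickly. Schematically, for a representative term rather than the paper's full two-loop expressions:

```latex
% Convolution -> position-space product -> radial integrals (schematic)
\int \frac{d^3q}{(2\pi)^3} \, P(q)\, P(|\mathbf{k}-\mathbf{q}|)
= \int d^3r \; e^{-i\mathbf{k}\cdot\mathbf{r}} \, \xi(r)^2,
\qquad
\xi(r) = \int_0^\infty \frac{dq\, q^2}{2\pi^2} \, P(q)\, j_0(qr)
```

Here j_0 is the spherical Bessel function; both radial integrals are one-dimensional, so the cost is a handful of FFTs rather than a five-dimensional Monte-Carlo integration.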
Where the Wild Things Are: Observational Constraints on Black Holes' Growth
NASA Astrophysics Data System (ADS)
Merloni, Andrea
2009-12-01
The physical and evolutionary relation between growing supermassive black holes (AGN) and host galaxies is currently the subject of intense research activity. Nevertheless, a deep theoretical understanding of such a relation is hampered by the unique multi-scale nature of the combined AGN-galaxy system, which defies any purely numerical or semi-analytic approach. Various physical processes active on different physical scales have signatures in different parts of the electromagnetic spectrum; thus, observations at different wavelengths and theoretical ideas can all contribute towards a "large dynamic range" view of the AGN phenomenon, capable of conceptually "resolving" the many scales involved. As an example, I will focus in this review on two major recent observational results on the cosmic evolution of supermassive black holes, highlighting the novel contribution given to the field by the COSMOS survey. First of all, I will discuss the evidence for the so-called "downsizing" in the AGN population as derived from large X-ray surveys. I will then present new constraints on the evolution of the black hole-galaxy scaling relation at 1
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.
Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian
2014-07-01
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time-of-flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed-reality evaluation scenarios where 3D human models are animated using motion capture and inserted, using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset, illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher-capacity, more complex models with our large dataset is substantially greater and should stimulate future research. The dataset, together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.
Young Black Men and the Criminal Justice System: A Growing National Problem.
ERIC Educational Resources Information Center
Mauer, Marc
The impact of the criminal justice system on Black male adults in the 20-to-29 year age group was examined. End results of the large-scale involvement of young Black men in the criminal justice system are considered, and the implications for crime control are discussed. Using data from Bureau of Justice Statistics and the Bureau of the Census…
The status, recent progress and promise of superconducting materials for practical applications
NASA Astrophysics Data System (ADS)
Rowell, J. M.
1989-03-01
The author summarizes the progress in materials science and engineering that created today's superconducting technology. He reviews the state of the technology with conventional materials by looking at two particular application areas: large-scale applications involving conductors, for example magnets; and electronics and instrumentation. The state of the art is contrasted with the present understanding of the high-Tc oxide materials.
ERIC Educational Resources Information Center
Cerezo, M.A.; Pons-Salvador, G.
2004-01-01
Objectives: The purpose of this 5-year study was to improve detection in two consecutive phases: (a) to close the gap between the number of identified cases and the actual number of cases of child abuse by increasing detection; and (b) to increase the possibility of a broader spectrum of detection. Method: The Balearic Islands (one of the…
Prostate Cancer Prevention Through Induction of Phase 2 Enzymes
2001-04-01
enzymes. During our Phase I Award, we identified sulforaphane as the most potent inducer of carcinogen defenses in the prostate cell. We have...characterized global effects of sulforaphane in prostate cancer cell lines using cDNA microarray technology that allows large-scale determination of changes...of sulforaphane) and decreased risk of prostate cancer. These findings argue strongly for a preventive intervention trial involving supplementation
Larry E. Laing; David Gori; James T. Jones
2005-01-01
The multi-partner Greater Huachuca Mountains fire planning effort involves over 500,000 acres of public and private lands. This large area supports distinct landscapes that have evolved with fire. Utilizing GIS as a tool, the United States Forest Service (USFS), General Ecosystem Survey (GES), and Natural Resources Conservation Service (NRCS) State Soil Geographic...
ERIC Educational Resources Information Center
Chu, Hye-Eun; Treagust, David F.; Chandrasegaran, A. L.
2009-01-01
A large scale study involving 1786 year 7-10 Korean students from three school districts in Seoul was undertaken to evaluate their understanding of basic optics concepts using a two-tier multiple-choice diagnostic instrument consisting of four pairs of items, each of which evaluated the same concept in two different contexts. The instrument, which…
Projecting large-scale area changes in land use and land cover for terrestrial carbon analyses.
Ralph J. Alig; Brett J. Butler
2004-01-01
One of the largest changes in US forest type areas over the last half-century has involved pine types in the South. The area of planted pine has increased more than 10-fold since 1950, mostly on private lands. Private landowners have responded to market incentives and government programs, including subsidized afforestation on marginal agricultural land. Timber harvest...
ERIC Educational Resources Information Center
Ecalle, Jean; Magnan, Annie; Gibert, Fabienne
2006-01-01
This article examines the impact of class size on literacy skills and on literacy interest in beginning readers from zones with specific educational needs in France. The data came from an experiment involving first graders in which teachers and pupils were randomly assigned to the different class types (small classes of 10-12 pupils vs. regular…
English Teachers in the Digital Age--A Case Study of Policy and Expert Practice from England
ERIC Educational Resources Information Center
Goodwyn, Andy
2011-01-01
This article is a case study of how English teachers in England have coped with the paradigm shift from print to digital literacy. It reviews a large scale national initiative that was intended to upskill all teachers, considers its weak impact and explores the author's involvement in the evaluation of the project's direct value to English…
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-01-01
Introduction Research in cognitive science often involves the generation and analysis of computational cognitive models to explain various...HPC) clusters and volunteer computing for large-scale computational resources. The majority of applications on the Department of Defense HPC... clusters focus on solving partial differential equations (Post, 2009). These tend to be lean, fast models with little noise. While we lack specific
A new FIA-Type strategic inventory (NFI)
Richard A. Grotefendt; Hans T. Schreuder
2006-01-01
New remote sensing technologies are now available to lower the cost of doing strategic surveys. A new sampling approach for the Forest Inventory and Analysis program (FIA) of the U.S.D.A. Forest Service is discussed involving a bi-sampling unit (BSU) that is composed of a field sample unit (FSU) centered within a large scale (1:1,000 to 1:3,000) photo sample unit (PSU...
Probing large-scale magnetism with the cosmic microwave background
NASA Astrophysics Data System (ADS)
Giovannini, Massimo
2018-04-01
Prior to photon decoupling magnetic random fields of comoving intensity in the nano-Gauss range distort the temperature and the polarization anisotropies of the microwave background, potentially induce a peculiar B-mode power spectrum and may even generate a frequency-dependent circularly polarized V-mode. We critically analyze the theoretical foundations and the recent achievements of an interesting trialogue involving plasma physics, general relativity and astrophysics.
Improving health aid for a better planet: The planning, monitoring and evaluation tool (PLANET).
Sridhar, Devi; Car, Josip; Chopra, Mickey; Campbell, Harry; Woods, Ngaire; Rudan, Igor
2015-12-01
International development assistance for health (DAH) quadrupled between 1990 and 2012, from US$ 5.6 billion to US$ 28.1 billion. This generates an increasing need for transparent and replicable tools that could be used to set investment priorities, monitor the distribution of funding in real time, and evaluate the impact of those investments. In this paper we present a methodology that addresses these three challenges. We call this approach PLANET, which stands for planning, monitoring and evaluation tool. Fundamentally, PLANET is based on a crowdsourcing approach to obtaining information relevant to the deployment of large-scale programs. Information is contributed in real time by a diverse group of participants involved in the program delivery. PLANET relies on real-time information from three levels of participants in large-scale programs: funders, managers and recipients. At each level, information is solicited to assess five key risks that are most relevant to that level of operations. The risks at the level of funders involve systematic neglect of certain areas, focus on donors' interests over those of program recipients, ineffective co-ordination between donors, questionable mechanisms of delivery, and excessive loss of funding to "middle men". At the level of managers, the risks are corruption, lack of capacity and/or competence, lack of information and/or communication, undue avoidance of governmental structures / preference for non-governmental organizations, and exclusion of local expertise. At the level of primary recipients, the risks are corruption, parallel operations / "verticalization", misalignment with local priorities and lack of community involvement, issues with ethics, equity and/or acceptability, and low likelihood of sustainability beyond the end of the program's implementation. PLANET is intended as an additional tool available to policy-makers to prioritize, monitor and evaluate large-scale development programs. In this, it should complement tools such as LiST (for health care/interventions), EQUIST (for health care/interventions) and CHNRI (for health research), which also rely on information from local experts and on local context to set priorities in a transparent, user-friendly, replicable, quantifiable and specific, algorithmic-like manner.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain where large-scale resources, scheduled at peak times, are available.
Differentiating unipolar and bipolar depression by alterations in large-scale brain networks.
Goya-Maldonado, Roberto; Brodmann, Katja; Keil, Maria; Trost, Sarah; Dechent, Peter; Gruber, Oliver
2016-02-01
Misdiagnosing bipolar depression can lead to very deleterious consequences of mistreatment. Although depressive symptoms may be similarly expressed in unipolar and bipolar disorder, changes in specific brain networks could be very distinct, being therefore informative markers for the differential diagnosis. We aimed to characterize specific alterations in candidate large-scale networks (frontoparietal, cingulo-opercular, and default mode) in symptomatic unipolar and bipolar patients using resting-state fMRI, a cognitively low-demanding paradigm ideal for investigating patients. Networks were selected after independent component analysis and compared across 40 acutely depressed patients (20 unipolar, 20 bipolar) and 20 controls well-matched for age, gender, and education levels, and alterations were correlated with clinical parameters. Despite comparable symptoms, patient groups were robustly differentiated by large-scale network alterations. Differences were driven in bipolar patients by increased functional connectivity in the frontoparietal network, a central executive and externally-oriented network. Conversely, unipolar patients presented increased functional connectivity in the default mode network, an introspective and self-referential network, as well as reduced connectivity of the cingulo-opercular network to default mode regions, a network involved in detecting the need to switch between internally and externally oriented demands. These findings were mostly unaffected by current medication, comorbidity, and structural changes. Moreover, network alterations in unipolar patients were significantly correlated with the number of depressive episodes. Unipolar and bipolar groups displaying similar symptomatology could be clearly distinguished by characteristic changes in large-scale networks, encouraging further investigation of network fingerprints for clinical use. Hum Brain Mapp 37:808-818, 2016. © 2015 Wiley Periodicals, Inc.
Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood
NASA Technical Reports Server (NTRS)
Turato, Barbara; Reale, Oreste; Siccardi, Franco
2003-01-01
Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. This happens on a time-scale of about 5-15 days preceding the Piedmont event. Second, water-vapor intrusions from the African Inter-Tropical Convergence Zone and evaporation from the eastern Atlantic contribute on the 2-5 day time-scale. The large-scale moist dynamics appears therefore to be one important factor enabling a moderate Mediterranean cyclone to produce heavy precipitation. Finally, local evaporation from the Mediterranean, water-vapor recycling, and orographically-induced low-level convergence enhance and concentrate the moisture over the area where heavy precipitation occurs. This happens on a 12-72 hour time-scale.
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
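For reference, the Fickian baseline against which the new model is benchmarked is the classical advection-dispersion equation; the one-dimensional form below is the standard textbook statement (notation assumed here, not taken from the paper):

```latex
% Classical 1-D advection-dispersion equation (the Fickian baseline);
% C is concentration, v pore velocity, D the dispersion coefficient
% with dispersivity alpha. Reported scale-dependence appears as alpha
% growing with travel distance.
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = D\,\frac{\partial^{2} C}{\partial x^{2}},
\qquad D = \alpha\, v .
```

The parabolic character of this operator is what produces the infinite solute front propagation speed criticized in the abstract, and what the purely advective alternative is designed to avoid.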
Neurolinguistic approach to natural language processing with applications to medical text analysis.
Duch, Włodzisław; Matykiewicz, Paweł; Pestian, John
2008-12-01
Understanding written or spoken language presumably involves spreading neural activation in the brain. This process may be approximated by spreading activation in semantic networks, providing enhanced representations that involve concepts not found directly in the text. The approximation of this process is of great practical and theoretical interest. Although activations of neural circuits involved in the representation of words rapidly change in time, snapshots of these activations spreading through associative networks may be captured in a vector model. Concepts of similar type activate larger clusters of neurons, priming areas in the left and right hemisphere. Analysis of recent brain imaging experiments shows the importance of the right hemisphere's non-verbal clusterization. Medical ontologies enable development of a large-scale practical algorithm to re-create pathways of spreading neural activations. First, concepts of a specific semantic type are identified in the text, and then all related concepts of the same type are added to the text, providing expanded representations. To avoid rapid growth of the extended feature space, after each step only the most useful features that increase document clusterization are retained. Short hospital discharge summaries are used to illustrate how this process works on real, very noisy data. Expanded texts show significantly improved clustering and may be classified with much higher accuracy. Although better approximations to the spreading of neural activations may be devised, the practical approach presented in this paper helps to discover pathways used by the brain to process specific concepts, and may be used in large-scale applications.
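The expansion loop described above can be sketched compactly. The snippet below is a toy illustration: the miniature ontology, the semantic typing, and the clusterization score are invented stand-ins for the medical ontologies and document-clustering criterion used in the paper.

```python
# Toy sketch of the described expansion: add ontology-related concepts
# of the same semantic type to a document's features, keeping a new
# feature only if it improves a clusterization score. The ontology and
# score below are invented stand-ins.
from typing import Callable

ontology = {  # concept -> related concepts of the same semantic type
    "myocardial infarction": ["chest pain", "troponin elevation"],
    "pneumonia": ["cough", "pulmonary infiltrate"],
}

def expand(doc_concepts: set, score: Callable) -> set:
    features = set(doc_concepts)
    for concept in doc_concepts:
        for related in ontology.get(concept, []):
            candidate = features | {related}
            if score(candidate) > score(features):  # keep useful features
                features = candidate
    return features

# Stand-in score: overlap with a reference cluster vocabulary.
cluster_vocab = {"myocardial infarction", "chest pain", "troponin elevation"}
score = lambda f: len(f & cluster_vocab) / len(f | cluster_vocab)

print(expand({"myocardial infarction"}, score))
```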
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, involving a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
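The role of compatibility constraints in such a decomposition can be shown with a toy problem. In the hedged sketch below, two "disciplines" each hold a local copy of a shared design variable so their analyses could run in parallel, and an equality (compatibility) constraint forces the copies to agree at the optimum; the objective functions and numbers are invented, and this is not the paper's collaborative optimization architecture itself.

```python
# Toy decomposition with a compatibility constraint: each discipline
# owns a local copy (z1, z2) of a shared variable; z1 - z2 = 0 forces
# agreement at the optimum. Objectives are invented stand-ins.
import numpy as np
from scipy.optimize import minimize

def discipline_a(z):          # stand-in aerodynamic-style response
    return (z - 3.0) ** 2

def discipline_b(z):          # stand-in structural-style response
    return 0.5 * (z + 1.0) ** 2

def system_objective(x):
    z1, z2 = x                # disciplines evaluated independently
    return discipline_a(z1) + discipline_b(z2)

cons = [{"type": "eq", "fun": lambda x: x[0] - x[1]}]  # compatibility

res = minimize(system_objective, x0=np.zeros(2),
               method="SLSQP", constraints=cons)
print(res.x, res.fun)         # both copies settle on the compromise z
```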
Extracting Primordial Non-Gaussianity from Large Scale Structure in the Post-Planck Era
NASA Astrophysics Data System (ADS)
Dore, Olivier
Astronomical observations have become a unique tool to probe fundamental physics. Cosmology, in particular, emerged as a data-driven science whose phenomenological modeling has achieved great success: in the post-Planck era, key cosmological parameters are measured to percent precision. A single model reproduces a wealth of astronomical observations involving very distinct physical processes at different times. This success leads to fundamental physical questions. One of the most salient is the origin of the primordial perturbations that grew to form the large-scale structures we now observe. More and more cosmological observables point to inflationary physics as the origin of the structure observed in the universe. Inflationary physics predicts the statistical properties of the primordial perturbations, which are thought to be slightly non-Gaussian. The detection of this small deviation from Gaussianity represents the next frontier in early Universe physics. To measure it would provide direct, unique and quantitative insights about the physics at play when the Universe was only a fraction of a second old, thus probing energies untouchable otherwise. On par with the well-known relic gravitational wave radiation -- the famous ``B-modes'' -- it is one of the few probes of inflation. This departure from Gaussianity leads to a very specific signature in the large-scale clustering of galaxies. Observing large-scale structure, we can thus establish a direct connection with fundamental theories of the early universe. In the post-Planck era, large-scale structures are our most promising pathway to measuring this primordial signal. Current estimates suggest that the next generation of space- or ground-based large-scale structure surveys (e.g. the ESA EUCLID or NASA WFIRST missions) might enable a detection of this signal. This potentially huge payoff requires us to solidify the theoretical predictions supporting these measurements. Even if the exact signal we are looking for is of unknown amplitude, it is obvious that we must measure it as well as these groundbreaking data sets will permit. We propose to develop the supporting theoretical work to the point where the complete non-Gaussian signature can be extracted from these data sets. We will do so by developing three complementary directions: we will develop the appropriate formalism to measure and model galaxy clustering on the largest scales; we will study the impact of non-Gaussianity on higher-order statistics, the most promising statistics for our purpose; and we will make explicit the connection between these observables and the microphysics of a large class of inflation models, while also identifying fundamental limitations to this interpretation.
Visualization of nanocrystal breathing modes at extreme strains
NASA Astrophysics Data System (ADS)
Szilagyi, Erzsi; Wittenberg, Joshua S.; Miller, Timothy A.; Lutker, Katie; Quirin, Florian; Lemke, Henrik; Zhu, Diling; Chollet, Matthieu; Robinson, Joseph; Wen, Haidan; Sokolowski-Tinten, Klaus; Lindenberg, Aaron M.
2015-03-01
Nanoscale dimensions in materials lead to unique electronic and structural properties with applications ranging from site-specific drug delivery to anodes for lithium-ion batteries. These functional properties often involve large-amplitude strains and structural modifications, and thus require an understanding of the dynamics of these processes. Here we use femtosecond X-ray scattering techniques to visualize, in real time and with atomic-scale resolution, light-induced anisotropic strains in nanocrystal spheres and rods. Strains at the percent level are observed in CdS and CdSe samples, associated with a rapid expansion followed by contraction along the nanosphere or nanorod radial direction driven by a transient carrier-induced stress. These morphological changes occur simultaneously with the first steps in the melting transition on hundreds of femtosecond timescales. This work represents the first direct real-time probe of the dynamics of these large-amplitude strains and shape changes in few-nanometre-scale particles.
Testing the robustness of Citizen Science projects: Evaluating the results of pilot project COMBER.
Chatzigeorgiou, Giorgos; Faulwetter, Sarah; Dailianis, Thanos; Smith, Vincent Stuart; Koulouri, Panagiota; Dounas, Costas; Arvanitidis, Christos
2016-01-01
Citizen Science (CS) as a term covers a wide range of approaches and scopes involving many different fields of science. The number of relevant projects globally has increased significantly in recent years. Large-scale ecological questions can be answered only through extended observation networks, and CS projects can support this effort. Although the need for such projects is apparent, an important part of the scientific community casts doubt on the reliability of CS data sets. The pilot CS project COMBER was created in order to provide evidence to answer this question for coastal marine biodiversity monitoring. The results of the current analysis show that a carefully designed CS project with clear hypotheses, wide participation and data set validation can be a valuable tool for detecting large-scale, long-term changes in marine biodiversity patterns, and therefore for relevant management and conservation issues.
Mother Nature versus human nature: public compliance with evacuation and quarantine.
Manuell, Mary-Elise; Cukor, Jeffrey
2011-04-01
Effectively controlling the spread of contagious illnesses has become a critical focus of disaster planning. It is likely that quarantine will be a key part of the overall public health strategy utilised during a pandemic, an act of bioterrorism or other emergencies involving contagious agents. While the United States lacks recent experience of large-scale quarantines, it has considerable accumulated experience of large-scale evacuations. Risk perception, life circumstance, work-related issues, and the opinions of influential family, friends and credible public spokespersons all play a role in determining compliance with an evacuation order. Although the comparison is not reported elsewhere to our knowledge, this review of the principal factors affecting compliance with evacuations demonstrates many similarities with those likely to occur during a quarantine. Accurate identification and understanding of barriers to compliance allows for improved planning to protect the public more effectively. © 2011 The Author(s). Disasters © Overseas Development Institute, 2011.
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square-root-transformed monthly or annual means, for which a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale, as the observations contain large fractions of zeros and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
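The left-censored normal likelihood at the core of the approach is compact enough to state in code. The sketch below is a toy maximum-likelihood fit at a single location on synthetic data; the paper's full spatio-temporal regression structure for the distribution parameters is omitted, and all values are invented.

```python
# Left-censored-at-zero normal likelihood for daily precipitation:
# zeros contribute P(Y <= 0) = Phi(-mu/sigma); positive amounts the
# usual normal density. Toy single-site MLE on synthetic data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
latent = rng.normal(loc=0.5, scale=2.0, size=5_000)
y = np.maximum(latent, 0.0)            # censored record (dry days = 0)

def neg_log_lik(params, y):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # keeps sigma positive
    zero = y <= 0.0
    ll = zero.sum() * stats.norm.logcdf(0.0, loc=mu, scale=sigma)
    ll += stats.norm.logpdf(y[~zero], loc=mu, scale=sigma).sum()
    return -ll

res = optimize.minimize(neg_log_lik, x0=(0.0, 0.0), args=(y,))
print(res.x[0], np.exp(res.x[1]))      # approaches the true (0.5, 2.0)
```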
NASA Astrophysics Data System (ADS)
Neklyudov, A. A.; Savenkov, V. N.; Sergeyez, A. G.
1984-06-01
Memories are improved by increasing speed or the storage capacity on a single chip. The most effective means for increasing speed in bipolar memories are current-control circuits with the lowest extraction times for a given power consumption (1/4 pJ/bit). The current-control circuitry involves multistage current switches and circuits accelerating transient processes in storage elements and links. Circuit principles for the design of bipolar memories with maximum speed for an assigned minimum of circuit topology are analyzed. Two main classes of storage with current control are considered: the ECL type and super-integrated injection type storage, with data capacities of N = 1/4 and N 4/16, respectively. The circuits reduce logic voltage differentials and the volumes of lexical and discharge buses and control-circuit buses. The limiting speed is determined by the anti-interference requirements of the memory in storage and extraction modes.
Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study
Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng
2016-01-01
One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
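A much-simplified version of such an ICP loop, with a crude stand-in for the paper's escape heuristic, can be sketched as follows. The stall test and the random rigid perturbation below are illustrative assumptions; the actual method uses an octree-based search, an early-warning mechanism, and sampled potential transformation vectors.

```python
# Simplified point-to-point ICP with a random-rigid "escape" step
# standing in for the paper's heuristic. Thresholds are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares R, t mapping src onto dst (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def random_rigid(scale, rng):
    """Small random rigid perturbation (Rodrigues rotation + shift)."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    ang = rng.normal(scale=scale)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)
    return R, rng.normal(scale=scale, size=3)

def icp(src, dst, iters=60, stall_tol=1e-8):
    rng = np.random.default_rng(1)
    tree, cur, prev_err = cKDTree(dst), src.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)              # correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
        err = float(np.mean(dist))
        if prev_err - err < stall_tol and err > 1e-3:
            Rp, tp = random_rigid(0.05, rng)     # escape a local minimum
            cur = cur @ Rp.T + tp
        prev_err = err
    return cur, prev_err

pts = np.random.default_rng(0).normal(size=(500, 3))
aligned, err = icp(pts + np.array([0.3, -0.2, 0.1]), pts)
print(err)
```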
Teaching mathematics online in the European Area of Higher Education: an instructor's point of view
NASA Astrophysics Data System (ADS)
Juan, Angel A.; Steegmann, Cristina; Huertas, Antonia; Martinez, M. Jesus; Simosa, J.
2011-03-01
This article first discusses how information technologies are changing the way knowledge is delivered at universities worldwide. Then, the article reviews some of the most popular learning management systems available today and some of the most useful online resources in the areas of Mathematics and Statistics. After that, some long-term experiences regarding the teaching of online courses in those areas at the Open University of Catalonia are discussed. Finally, the article presents the results of a large-scale survey performed in Spain that aims to reflect instructors' opinions and feelings about potential benefits and challenges of teaching mathematics online, as well as the role of emergent technologies in the context of the European Area of Higher Education. Therefore, this article contributes to the existing literature as an additional reference point, one based on our long-term experience in a large-scale online environment, for discussions involving mathematical e-learning.
Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.
Demchak, Barry; Krüger, Ingolf
2012-07-01
The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large-scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
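The core PDD idea as described here can be illustrated with a small sketch: decision points are named hooks in an existing workflow, and policies are attached to them at runtime so that independent stakeholder groups compose their requirements without editing the workflow. All names and the registry API below are invented for illustration.

```python
# Sketch of runtime policy injection in the spirit of PDD: named
# decision points in a workflow accept policy rules at runtime, and a
# request passes only if every injected rule allows it. All names are
# invented; this is not the paper's actual API.
from typing import Callable, Dict, List

_policies: Dict[str, List[Callable[[dict], bool]]] = {}

def inject_policy(point: str, rule: Callable[[dict], bool]) -> None:
    """Attach a policy rule to a decision point at runtime."""
    _policies.setdefault(point, []).append(rule)

def decide(point: str, ctx: dict) -> bool:
    return all(rule(ctx) for rule in _policies.get(point, []))

def fetch_record(user: str, record: str) -> str:
    # Pre-existing workflow; only the decision point is new.
    if not decide("access_control", {"user": user, "record": record}):
        return "denied"
    return f"contents of {record}"

# Two oblivious stakeholder groups add policies after deployment.
inject_policy("access_control", lambda c: c["user"] != "guest")
inject_policy("access_control", lambda c: not c["record"].startswith("raw/"))

print(fetch_record("alice", "results/run42"))   # contents of results/run42
print(fetch_record("guest", "results/run42"))   # denied
```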
Are large-scale flow experiments informing the science and management of freshwater ecosystems?
Olden, Julian D.; Konrad, Christopher P.; Melis, Theodore S.; Kennard, Mark J.; Freeman, Mary C.; Mims, Meryl C.; Bray, Erin N.; Gido, Keith B.; Hemphill, Nina P.; Lytle, David A.; McMullen, Laura E.; Pyron, Mark; Robinson, Christopher T.; Schmidt, John C.; Williams, John G.
2013-01-01
Greater scientific knowledge, changing societal values, and legislative mandates have emphasized the importance of implementing large-scale flow experiments (FEs) downstream of dams. We provide the first global assessment of FEs to evaluate their success in advancing science and informing management decisions. Systematic review of 113 FEs across 20 countries revealed that clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam-operating policies. Furthermore, changes to dam operations were three times less likely when FEs were conducted primarily for scientific purposes. Despite the recognized importance of riverine flow regimes, four-fifths of FEs involved only discrete flow events. Over three-quarters of FEs documented both abiotic and biotic outcomes, but only one-third examined multiple taxonomic responses, thus limiting how FE results can inform holistic dam management. Future FEs will present new opportunities to advance scientifically credible water policies.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
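One common way to write the elliptic differential filter named above, consistent with the abstract's description (the notation is assumed here, not quoted from the paper), is:

```latex
% Elliptic differential filter applied to the resolved LES velocity;
% Delta_f is the nominal filter width, the single dynamically
% determined parameter. The residual provides the modeled subgrid
% velocity seen by the inertial particles.
\tilde{u} - \Delta_f^{2}\,\nabla^{2}\tilde{u} = u_{\mathrm{LES}},
\qquad
u' \approx u_{\mathrm{LES}} - \tilde{u} .
```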
General relativistic screening in cosmological simulations
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Paranjape, Aseem
2016-10-01
We revisit the issue of interpreting the results of large-volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
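Because the quoted conversion is algebraic in Fourier space, it is essentially a one-liner on gridded output. The sketch below applies it with numpy; the box size, grid resolution, and screening scale are placeholder values, not those of any particular simulation.

```python
# Apply the quoted gauge conversion in Fourier space:
# delta_N(k) = delta_sim(k) * (k^2 + 1/l^2) / k^2.
# Grid, box size, and screening scale l are placeholders.
import numpy as np

n, box, l_screen = 128, 1000.0, 4000.0       # cells, Mpc/h, Mpc/h (toy)
delta_sim = np.random.default_rng(2).normal(size=(n, n, n))  # stand-in

kf = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf                         # leave the mean unchanged

ratio = 1.0 + 1.0 / (l_screen**2 * k2)       # (k^2 + l^-2) / k^2
delta_N = np.fft.ifftn(np.fft.fftn(delta_sim) * ratio).real
```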
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; Cuénot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
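The skeleton of such a parameterization, as described in the abstract, can be illustrated with a toy box model: emissions enter a fuel tracer that decays with a characteristic plume lifetime, handing its NOx content to the resolved grid-scale field with an effective conversion fraction. All rates and values below are invented for illustration.

```python
# Toy box-model skeleton of the plume parameterization: a fuel tracer
# decays with plume lifetime tau, releasing NOx to the grid scale with
# conversion fraction beta (the remainder stands in for species
# converted within the plume). All numbers are invented.
dt, tau, beta = 600.0, 6 * 3600.0, 0.7   # step (s), lifetime (s), fraction
emission_rate = 1.0e-3                   # arbitrary mass units per second

plume, grid_nox = 0.0, 0.0
for step in range(288):                  # two days in 10-minute steps
    e = emission_rate if step < 36 else 0.0   # emit for the first 6 h
    release = plume / tau                # plume hands mass to the grid
    plume += dt * (e - release)
    grid_nox += dt * beta * release

print(plume, grid_nox)                   # bookkeeping of the emitted mass
```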
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in the detection of remote target-template similarities. The size of the structure involved in the alignment is important for success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for structure alignment-based docking applications. The results showed that structural areas corresponding to cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher-accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for docking approaches, including high-throughput applications to modeled structures.
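The cutoff definition being optimized is simple to state in code. The sketch below marks interface atoms as those within a chosen distance of the partner chain, with 12 Å as the reported optimum; the coordinates are random stand-ins for two docked chains.

```python
# Interface extraction by distance cutoff across two chains; 12 A is
# the optimum reported in the paper. Coordinates are random stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
chain_a = rng.uniform(0, 60, size=(500, 3))    # toy coordinates (Angstrom)
chain_b = rng.uniform(40, 100, size=(500, 3))

def interface(xyz_a, xyz_b, cutoff=12.0):
    """Indices of atoms in each chain within `cutoff` of the partner."""
    hits = cKDTree(xyz_a).query_ball_tree(cKDTree(xyz_b), r=cutoff)
    ia = sorted({i for i, h in enumerate(hits) if h})
    ib = sorted({j for h in hits for j in h})
    return ia, ib

ia, ib = interface(chain_a, chain_b)
print(len(ia), len(ib))    # interface size shrinks or grows with the cutoff
```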
Lee waves, benign and malignant
NASA Technical Reports Server (NTRS)
Wurtele, M. G.; Datta, A.
1992-01-01
The flow of an incompressible, stratified fluid over an obstacle will produce an oscillation in which buoyancy is the restoring force, called a gravity wave. For disturbances of this scale, the atmosphere may be treated as incompressible, and even the linear approximation will explain many of the phenomena observed in the lee of mountains. However, nonlinearities arise in two ways: (1) through the large (scaled) size of the mountain, and (2) from dynamically singular levels in the fluid field. These produce a complicated array of phenomena that present hazards to aircraft and to lee surface areas. If there is no dynamic barrier, these waves can penetrate vertically into the middle atmosphere (30-100 km altitude), where recent observations show them to be of a length scale that must involve the Coriolis force in any modeling. At these altitudes, the amplitude of the waves is very large, and the waves are studied with a view to their potential impact on the projected National Aerospace Plane. This paper presents the results of analyses and state-of-the-art numerical simulations, validated where possible by observational data.
Pan, Joshua; Meyers, Robin M; Michel, Brittany C; Mashtalir, Nazar; Sizemore, Ann E; Wells, Jonathan N; Cassel, Seth H; Vazquez, Francisca; Weir, Barbara A; Hahn, William C; Marsh, Joseph A; Tsherniak, Aviad; Kadoch, Cigall
2018-05-23
Protein complexes are assemblies of subunits that have co-evolved to execute one or many coordinated functions in the cellular environment. Functional annotation of mammalian protein complexes is critical to understanding biological processes, as well as disease mechanisms. Here, we used genetic co-essentiality derived from genome-scale RNAi- and CRISPR-Cas9-based fitness screens performed across hundreds of human cancer cell lines to assign measures of functional similarity. From these measures, we systematically built and characterized functional similarity networks that recapitulate known structural and functional features of well-studied protein complexes and resolve novel functional modules within complexes lacking structural resolution, such as the mammalian SWI/SNF complex. Finally, by integrating functional networks with large protein-protein interaction networks, we discovered novel protein complexes involving recently evolved genes of unknown function. Taken together, these findings demonstrate the utility of genetic perturbation screens alone, and in combination with large-scale biophysical data, to enhance our understanding of mammalian protein complexes in normal and disease states. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Reconnecting fragmented sturgeon populations in North American rivers
Jager, Henriette; Parsley, Michael J.; Cech, Joseph J. Jr.; McLaughlin, R.L.; Forsythe, Patrick S.; Elliott, Robert S.
2016-01-01
The majority of large North American rivers are fragmented by dams that interrupt migrations of wide-ranging fishes like sturgeons. Reconnecting habitat is viewed as an important means of protecting sturgeon species in U.S. rivers because these species have lost between 5% and 60% of their historical ranges. Unfortunately, facilities designed to pass other fishes have rarely worked well for sturgeons. The most successful passage facilities were sized appropriately for sturgeons and accommodated bottom-oriented species. For upstream passage, facilities with large entrances, full-depth guidance systems, large lifts, or wide fishways without obstructions or tight turns worked well. However, facilitating upstream migration is only half the battle. Broader recovery for linked sturgeon populations requires safe “round-trip” passage involving multiple dams. The most successful downstream passage facilities included nature-like fishways, large canal bypasses, and bottom-draw sluice gates. We outline an adaptive approach to implementing passage that begins with temporary programs and structures and monitors success both at the scale of individual fish at individual dams and the scale of metapopulations in a river basin. The challenge will be to learn from past efforts and reconnect North American sturgeon populations in a way that promotes range expansion and facilitates population recovery.
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
Bachmann, Talis
2015-01-01
Perceptual phenomena such as spatio-temporal illusions and masking are typically explained by psychological (cognitive) processing theories or large-scale neural theories involving inter-areal connectivity and neural circuits comprising hundreds or more interconnected single cells. Subcellular mechanisms are hardly ever used for this purpose. Here, a mechanistic theoretical view is presented on how a subcellular brain mechanism for the integration of presynaptic signals arriving at different compartments of layer-5 pyramidal neurons could explain a couple of spatiotemporal visual-phenomenal effects unfolding over very brief, sub-second time intervals.
Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness
NASA Technical Reports Server (NTRS)
Rossbacher, Lisa A.
1987-01-01
One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
ABBREVIATIONS: A/D, analog-to-digital; AGC, automatic gain control; A/J, anti-jam; ASP, advanced signal processor; AU, arithmetic units; CAD, computer-aided... ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves... a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing
ERIC Educational Resources Information Center
Polanin, Joshua R.; Wilson, Sandra Jo
2014-01-01
The purpose of this project is to demonstrate the practical methods developed to utilize a dataset consisting of both multivariate and multilevel effect size data. The context for this project is a large-scale meta-analytic review of the predictors of academic achievement. This project is guided by three primary research questions: (1) How do we…
Prostate Cancer Prevention Through Induction of Phase 2 Enzymes
2003-04-01
2) enzymes. During our Phase I Award, we identified sulforaphane as the most potent inducer of carcinogen defenses in the prostate cell. We have...characterized global effects of sulforaphane in prostate cancer cell lines using cDNA microarray technology that allows large-scale determination of...changes in gene expression. These findings argue strongly for a preventive intervention trial involving supplementation with sulforaphane. During our Phase 2 Award, we used
ERIC Educational Resources Information Center
Muscott, Howard S.; Mann, Eric L.; LeBrun, Marcel R.
2008-01-01
This evaluation report presents outcomes for the first cohort of 28 early childhood education programs and K-12 schools involved in implementing schoolwide positive behavior support as part of a statewide systems change initiative that began in New Hampshire in 2002. Results indicate that the overwhelming majority of schools were able to implement…
Titanium Combustion in Turbine Engines
1979-07-01
metal fires. Due to the refractory nature of metal oxides, most of the heat of reaction will reside in the enthalpy of the products. In a surface... more closely approximates the conditions prevailing in large-scale accidental metal fires. The objectives of this research have been to better define... conditions on the propagating molten mass of burning metal. Metal fires involve propagating combustion, and the propagation of burning must be
Modelling short pulse, high intensity laser plasma interactions
NASA Astrophysics Data System (ADS)
Evans, R. G.
2006-06-01
Modelling the interaction of ultra-intense laser pulses with solid targets is made difficult by the large range of length and time scales involved in the transport of relativistic electrons. An implicit hybrid PIC-fluid model using the commercial code LSP (marketed by MRC, Albuquerque, New Mexico, USA) reveals a variety of complex phenomena which seem to be borne out in experiments and some existing theories.
LAVA: Large scale Automated Vulnerability Addition
2016-05-23
memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be... ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online... memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out
NASA Astrophysics Data System (ADS)
Borzí, Alfio; Caponigro, Marco
2016-09-01
The formulation of mathematical models for crowd dynamics is a current challenge in many fields of applied science. It involves modelling the complex behavior of a large number of individuals. In particular, the difficulty lies in describing emerging collective behaviors by means of a relatively small number of local interaction rules between individuals in a crowd. Clearly, the individual's free will involved in decision-making processes and in the management of social interactions cannot be described by a finite number of deterministic rules. On the other hand, in large crowds this individual indeterminacy can be considered a local fluctuation averaged to zero by the size of the crowd. While at the microscopic scale, using a system of coupled ODEs, free will should be included in the mathematical description (e.g. with a stochastic term), the mesoscopic and macroscopic scales, modeled by PDEs, represent a powerful modelling tool that allows this feature to be neglected while still providing a reliable description. In this sense, the work by Bellomo, Clarke, Gibelli, Townsend, and Vreugdenhil [2] represents a mathematical-epistemological contribution towards the design of a reliable model of human behavior.
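A minimal microscopic model of the kind contrasted here with PDE descriptions can make the point concrete. The hedged sketch below integrates coupled ODEs per individual with a noise term standing in for free will, using the Euler-Maruyama scheme; the attraction/alignment rule and all parameters are invented.

```python
# Minimal microscopic crowd model: coupled ODEs per individual plus a
# stochastic term for free will, integrated with Euler-Maruyama. The
# interaction rule and all parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
n, dt, steps, sigma = 200, 0.01, 1000, 0.3
x = rng.uniform(-1, 1, size=(n, 2))     # positions
v = rng.normal(size=(n, 2))             # velocities

for _ in range(steps):
    # Local rules: steer toward the crowd's center of mass and align
    # with its mean velocity; noise models individual indeterminacy.
    a = 0.5 * (x.mean(axis=0) - x) + 1.0 * (v.mean(axis=0) - v)
    v += dt * a + np.sqrt(dt) * sigma * rng.normal(size=v.shape)
    x += dt * v

# In large crowds the fluctuations average out (roughly as 1/sqrt(n)),
# which is what licenses the deterministic PDE description.
print(np.linalg.norm(v.mean(axis=0)))
```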
Altered intrinsic and extrinsic connectivity in schizophrenia.
Zhou, Yuan; Zeidman, Peter; Wu, Shihao; Razi, Adeel; Chen, Cheng; Yang, Liuqing; Zou, Jilin; Wang, Gaohua; Wang, Huiling; Friston, Karl J
2018-01-01
Schizophrenia is a disorder characterized by functional dysconnectivity among distributed brain regions. However, it is unclear how causal influences among large-scale brain networks are disrupted in schizophrenia. In this study, we used dynamic causal modeling (DCM) to assess the hypothesis that there is aberrant directed (effective) connectivity within and between three key large-scale brain networks (the dorsal attention network, the salience network and the default mode network) in schizophrenia during a working memory task. Functional MRI data during an n-back task from 40 patients with schizophrenia and 62 healthy controls were analyzed. Using hierarchical modeling of between-subject effects in DCM with Parametric Empirical Bayes, we found that intrinsic (within-region) and extrinsic (between-region) effective connectivity involving prefrontal regions were abnormal in schizophrenia. Specifically, in patients (i) inhibitory self-connections in prefrontal regions of the dorsal attention network were decreased across task conditions; (ii) extrinsic connectivity between regions of the default mode network was increased; specifically, from posterior cingulate cortex to the medial prefrontal cortex; (iii) between-network extrinsic connections involving the prefrontal cortex were altered; (iv) connections within networks and between networks were correlated with the severity of clinical symptoms and impaired cognition beyond working memory. In short, this study revealed the predominance of reduced synaptic efficacy of prefrontal efferents and afferents in the pathophysiology of schizophrenia.
Recht, Lee; Töpfer, Nadine; Batushansky, Albert; Sikron, Noga; Gibon, Yves; Fait, Aaron; Nikoloski, Zoran; Boussiba, Sammy; Zarka, Aliza
2014-10-31
The green alga Haematococcus pluvialis accumulates large amounts of the antioxidant astaxanthin under inductive stress conditions, such as nitrogen starvation. The response to nitrogen starvation and high light leads to the accumulation of carbohydrates and fatty acids as well as increased activity of the tricarboxylic acid cycle. Although the behavior of individual pathways has been well investigated, little is known about the systemic effects of the stress response mechanism. Here we present time-resolved metabolite, enzyme activity, and physiological data that capture the metabolic response of H. pluvialis under nitrogen starvation and high light. The data were integrated into a putative genome-scale model of the green alga to test, in silico, hypotheses about the underlying carbon partitioning. The model-based hypothesis testing reinforces the involvement of starch degradation in supporting fatty acid synthesis in the later stages of the stress response. In addition, our findings support a possible mechanism for the involvement of the increased activity of the tricarboxylic acid cycle in carbon repartitioning. Finally, the in vitro experiments and the in silico modeling presented here emphasize the predictive power of large-scale integrative approaches for pinpointing metabolic adjustment to changing environments. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single-instruction-multiple-data instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement, and precise airborne laser scanner data from one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
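The core of SGM is a dynamic-programming cost aggregation along image paths. A minimal sketch of one left-to-right aggregation pass over a float cost volume is shown below; the penalty values P1 and P2 and the single path direction are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sgm_aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """Classic SGM recurrence along one path direction.
    cost: (H, W, D) float matching-cost volume.
    A full SGM sums 8 or 16 such directional passes."""
    H, W, D = cost.shape
    L = np.empty_like(cost)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                       # (H, D) costs at previous pixel
        prev_min = prev.min(axis=1, keepdims=True)  # best previous cost per row
        # Transitions: same disparity, +/-1 disparity (penalty P1), any jump (P2)
        same = prev
        up = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + P1
        down = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + P1
        jump = prev_min + P2
        best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        L[:, x, :] = cost[:, x, :] + best - prev_min  # subtract min to bound growth
    return L
```

The disparity map is then the argmin over D of the summed directional costs; the paper's contribution layers a hierarchical, SIMD-parallel scheme on top of this recurrence to tame its memory and time costs.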
Cognitive, Affective, and Conative Theory of Mind (ToM) in Children with Traumatic Brain Injury
Dennis, Maureen; Simic, Nevena; Bigler, Erin D.; Abildskov, Tracy; Agostino, Alba; Taylor, H. Gerry; Rubin, Kenneth; Vannatta, Kathryn; Gerhardt, Cynthia A.; Stancin, Terry; Yeates, Keith Owen
2012-01-01
We studied three forms of dyadic communication involving theory of mind (ToM) in 82 children with traumatic brain injury (TBI) and 61 children with orthopedic injury (OI): Cognitive (concerned with false belief), Affective (concerned with expressing socially deceptive facial expressions), and Conative (concerned with influencing another's thoughts or feelings). We analyzed the pattern of brain lesions in the TBI group and conducted voxel-based morphometry for all participants in five large-scale functional brain networks, and related lesion and volumetric data to ToM outcomes. Children with TBI exhibited difficulty with Cognitive, Affective, and Conative ToM. The perturbation threshold for Cognitive ToM is higher than that for Affective and Conative ToM, in that Severe TBI disturbs Cognitive ToM but even Mild-Moderate TBI disrupts Affective and Conative ToM. Childhood TBI was associated with damage to all five large-scale brain networks. Lesions in the Mirror Neuron Empathy network predicted lower Conative ToM involving ironic criticism and empathic praise. Conative ToM was significantly and positively related to the package of Default Mode, Central Executive, and Mirror Neuron Empathy networks and, more specifically, to two hubs of the Default Mode network, the posterior cingulate/retrosplenial cortex and the hippocampal formation, including entorhinal cortex and parahippocampal cortex. PMID:23291312
Impact vaporization: Late time phenomena from experiments
NASA Technical Reports Server (NTRS)
Schultz, P. H.; Gault, D. E.
1987-01-01
While simple airflow produced by the outward movement of the ejecta curtain can be scaled to large dimensions, the interaction between an impact-vaporized component and the ejecta curtain is more complicated. The goal of these experiments was to examine such interaction in a real system involving crater growth, ejection of material, two-phase mixtures of gas and dust, and strong pressure gradients. The results will be complemented by theoretical studies at laboratory scales in order to separate the various parameters for planetary-scale processes. These experiments nevertheless prompt the following conclusions that may have relevance at broader scales. First, under near-vacuum or low atmospheric pressures, an expanding vapor cloud scours the surrounding surface in advance of arriving ejecta. Second, the effect of early-time vaporization is relatively unimportant at late times. Third, the overpressure created within the crater cavity by significant vaporization results in increased cratering efficiency and larger aspect ratios.
Short term evolution of coronal hole boundaries
NASA Technical Reports Server (NTRS)
Nolte, J. T.; Krieger, A. S.; Solodyna, C. V.
1978-01-01
The evolution of coronal hole boundary positions on a time scale of approximately 1 day is studied on the basis of an examination of all coronal holes observed by Skylab from May to November 1973. It is found that a substantial fraction (an average of 38%) of all coronal hole boundaries shifted by at least 1 deg heliocentric in the course of a day. Most (70%) of these changes were on a relatively small scale (less than 3 times the supergranulation cell size), but a significant fraction occurred as discrete events on a much larger scale. The large-scale shifts in the boundary locations involved changes in X-ray emission from these areas of the sun. There were generally more changes in the boundaries of the most rapidly evolving holes, but no simple relationship between the amount of change and the rate of hole growth or decay.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, Mie; Medford, Andrew J.; Norskov, Jens K.
2017-04-14
Here, we present a generic analysis of the implications of energetic scaling relations on the possibilities for bifunctional gains at homogeneous bimetallic alloy catalysts. Such catalysts exhibit a large number of interface sites, where second-order reaction steps can involve intermediates adsorbed at different active sites. Using different types of model reaction schemes, we show that such site-coupling reaction steps can provide bifunctional gains that allow a bimetallic catalyst composed of two individually poor catalyst materials to approach the activity of the optimal monomaterial catalyst. However, bifunctional gains cannot result in activities higher than the activity peak of the monomaterial volcano curve as long as both sites obey similar scaling relations, as is generally the case for bimetallic catalysts. These scaling-relation-imposed limitations could be overcome by combining different classes of materials such as metals and oxides.
Crustal evolution inferred from Apollo magnetic measurements
NASA Technical Reports Server (NTRS)
Dyal, P.; Daily, W. D.; Vanyan, L. L.
1978-01-01
Magnetic field and solar wind plasma density measurements were analyzed to determine the scale size characteristics of remanent fields at the Apollo 12, 15, and 16 landing sites. Theoretical model calculations of the field-plasma interaction, involving diffusion of the remanent field into the solar plasma, were compared to the data. The information provided by all these experiments shows that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized during early crustal evolution. Bombardment and subsequent gardening of the upper layers of these magnetized regions left randomly oriented, smaller scale (5 to 10 km) magnetic sources close to the surface. The larger scale size fields of magnitude approximately 0.1 gammas are measured by the orbiting subsatellite experiments and the small scale sized remanent fields of magnitude approximately 100 gammas are measured by the surface experiments.
The Center for Multiscale Plasma Dynamics, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombosi, Tamas I.
The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large-scale flows and magnetic fields tightly coupled to the small-scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution-adaptive grids and the development of solution-adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.
Characterization of laser-induced plasmas as a complement to high-explosive large-scale detonations
Kimblin, Clare; Trainham, Rusty; Capelle, Gene A.; ...
2017-09-12
Experimental investigations into the characteristics of laser-induced plasmas indicate that LIBS provides a relatively inexpensive and easily replicable laboratory technique to isolate and measure reactions germane to understanding aspects of high-explosive detonations under controlled conditions. Furthermore, we examine spectral signatures and derived physical parameters following laser ablation of aluminum, graphite and laser-sparked air as they relate to those observed following detonation of high explosives and as they relate to shocked air. Laser-induced breakdown spectroscopy (LIBS) reliably correlates reactions involving atomic Al and aluminum monoxide (AlO) with respect to both emission spectra and temperatures, as compared to small- and large-scale high-explosive detonations. Atomic Al and AlO resulting from laser ablation and from a cited small-scale study decay within ~10^-5 s, roughly 100 times faster than the Al and AlO decay rates (~10^-3 s) observed following the large-scale detonation of an Al-encased explosive. Temperatures and species produced in laser-sparked air are compared to those produced with laser-ablated graphite in air. With graphite present, CN is dominant relative to N2+. Thus, in studies where the height of the ablating laser's focus was altered relative to the surface of the graphite substrate, CN concentration was found to decrease with laser focus below the graphite surface, indicating that laser intensity is a critical factor in the production of CN via reactive nitrogen.
The Inspiring Science Education project and the resources for HEP analysis by university students
NASA Astrophysics Data System (ADS)
Fassouliotis, Dimitris; Kourkoumelis, Christine; Vourakis, Stylianos
2016-11-01
The Inspiring Science Education outreach project has been running for more than two years, creating a large number of inquiry-based educational resources for high-school teachers and students. Its goal is the promotion of science education in schools through new methods built on inquiry-based education techniques, involving large consortia of European partners and the implementation of large-scale pilots in schools. Recent hands-on activities developing and testing the above-mentioned innovative applications are reviewed. In general, there is a lack of educational scenarios and laboratory courses earmarked for more advanced, namely university, students. At the University of Athens, for the last four years the HYPATIA on-line event analysis tool has been used as a lab course for fourth-year undergraduate physics students majoring in HEP. Up to now, the course was limited to visual inspection of a few tens of ATLAS events. Recently the course was enriched with additional analysis exercises that involve large samples of events. Through a user-friendly interface, students can analyse the samples and optimize the cut selection in order to search for new physics. The implementation of this analysis is described.
Compound scale-up at the discovery-development interface.
Nikitenko, Antonia A
2006-11-01
As a result of an economically challenging environment within the pharmaceutical industry, pharmaceutical companies and their departments must increase productivity and cut costs to stay in line with the market. Discovery-led departments such as the medicinal chemistry and lead optimization groups focus on synthesizing large varieties of compounds in minimal amounts, while the chemical development groups must then deliver a few chosen leads employing an optimized synthesis method and using multi-kilogram quantities of material. A research group at the discovery-development interface has the task of medium-scale synthesis which is important in the lead selection stage. The primary objective of this group is the initial scale-up of promising leads for extensive physicochemical and biological testing. The challenge of the interface group involves overcoming synthetic issues within the rigid, accelerated timelines.
Periodic Hydraulic Testing for Discerning Fracture Network Connections
NASA Astrophysics Data System (ADS)
Becker, M.; Le Borgne, T.; Bour, O.; Guihéneuf, N.; Cole, M.
2015-12-01
Discrete fracture network (DFN) models often predict highly variable hydraulic connections between injection and pumping wells used for enhanced oil recovery, geothermal energy extraction, and groundwater remediation. Such connections can be difficult to verify in fractured rock systems because standard pumping or pulse interference tests interrogate too large a volume to pinpoint specific connections. Three field examples are presented in which periodic hydraulic tests were used to obtain information about hydraulic connectivity in fractured bedrock. The first site, a sandstone in New York State, involves only a single fracture at a scale of about 10 m. The second site, a granite in Brittany, France, involves a fracture network at about the same scale. The third site, a granite/schist in the U.S. state of New Hampshire, involves a complex network at a scale of 30-60 m. In each case periodic testing provided an enhanced view of hydraulic connectivity over previous constant-rate tests. Periodic testing is particularly adept at measuring hydraulic diffusivity, which is a more effective parameter than permeability for identifying the complexity of flow pathways between measurement locations. Periodic tests were also conducted at multiple frequencies, which provides a range in the radius of hydraulic penetration away from the oscillating well. By varying the radius of penetration, we attempt to interrogate the structure of the fracture network. Periodic tests, therefore, may be uniquely suited for verifying and/or calibrating DFN models.
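Hydraulic diffusivity can be read off a periodic test from how the pressure oscillation attenuates and lags with distance. A minimal sketch, assuming the simple 1-D homogeneous solution in which both the log-amplitude decay and the phase lag scale as r*sqrt(omega/(2D)) (the field sites' radial geometry and heterogeneity would modify the constants):

```python
import numpy as np

def diffusivity_from_phase_lag(r_m, period_s, phase_lag_rad):
    """Estimate hydraulic diffusivity D from the phase lag of a periodic
    pressure signal observed a distance r from the oscillating well.
    1-D solution: p ~ exp(-r*sqrt(w/(2D))) * cos(w*t - r*sqrt(w/(2D))),
    so phase lag = r*sqrt(w/(2D)) and D = w * r**2 / (2 * lag**2).
    A sketch of the principle, not the authors' inversion."""
    omega = 2.0 * np.pi / period_s
    return omega * r_m**2 / (2.0 * phase_lag_rad**2)

# e.g. 10 m well spacing, 600 s oscillation period, observed lag 1.2 rad:
# diffusivity_from_phase_lag(10.0, 600.0, 1.2)  ->  ~0.36 m^2/s
```

Running such an estimate at several oscillation periods probes different penetration radii, which is the multi-frequency idea described above.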
NASA Astrophysics Data System (ADS)
Murray, A. Brad; Thieler, E. Robert
2004-02-01
Recent observations of inner continental shelves in many regions show numerous collections of relatively coarse sediment, which extend kilometers in the cross-shore direction and are on the order of 100 m wide. These "rippled scour depressions" have been interpreted to indicate concentrated cross-shelf currents. However, recent observations strongly suggest that they are associated with sediment transport along-shore rather than cross-shore. A new hypothesis for the origin of these features involves the large wave-generated ripples that form in the coarse material. Wave motions interacting with these large roughness elements generate near-bed turbulence that is greatly enhanced relative to that in other areas. This enhances entrainment and inhibits settling of fine material in an area dominated by coarse sediment. The fine sediment is then carried by mean currents past the coarse accumulations, and deposited where the bed is finer. We hypothesize that these interactions constitute a feedback tending to produce accumulations of fine material separated by self-perpetuating patches of coarse sediment. As with many types of self-organized bedforms, small features would interact as they migrate, leading to a better-organized, larger-scale pattern. As an initial test of this hypothesis, we use a numerical model in which the transport rates of the coarse and fine sediment fractions are treated as functions of the local bed composition, a proxy for the presence of large roughness elements in coarse areas. Large-scale sorted patterns exhibiting the main characteristics of the natural features emerge robustly in the model, indicating that this new hypothesis offers a plausible explanation for the phenomena.
Honda, Michitaka; Wakita, Takafumi; Onishi, Yoshihiro; Nunobe, Souya; Miura, Akinori; Nishigori, Tatsuto; Kusanagi, Hiroshi; Yamamoto, Takatsugu; Boddy, Alexander; Fukuhara, Shunichi
2015-12-01
Patients who have undergone esophagectomy or gastrectomy have certain dietary limitations because of changes to the alimentary tract. This study attempted to develop a psychometric scale, named "Esophago-Gastric surgery and Quality of Dietary life (EGQ-D)," for assessing the impact of upper gastrointestinal surgery on diet-targeted quality of life. Using qualitative methods, the study team interviewed both patients and surgeons involved in esophagogastric cancer surgery, and we prepared an item pool and a draft scale. To evaluate the scale's psychometric reliability and validity, a survey involving a large number of patients was conducted. Items for the final scale were selected by factor analysis and item response theory. Cronbach's alpha was used for assessment of reliability, and correlations with the short form (SF)-12, the esophagus and stomach surgery symptom scale (ES(4)), and nutritional indicators were analyzed to assess criterion-related validity. Through multifaceted discussion and the pilot study, a draft questionnaire comprising 14 items was prepared, and a total of 316 patients were enrolled. On the basis of factor analysis and item response theory, six items were excluded, and the remaining eight items demonstrated strong unidimensionality for the final scale. Cronbach's alpha was 0.895. There were significant associations with all the subscale scores for SF-12, ES(4), and nutritional indicators. The EGQ-D has good content and psychometric validity and can be used as a disease-specific instrument to measure diet-targeted quality of life in postoperative patients with esophagogastric cancer.
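For readers unfamiliar with the reliability statistic used here, Cronbach's alpha has a simple closed form; a generic sketch follows (the 0.895 reported above comes from the study's own eight-item data, not from this toy function):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# e.g. scores of 5 respondents on 3 items:
# cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]])
```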
Irizarry, Kristopher J L; Downs, Eileen; Bryden, Randall; Clark, Jory; Griggs, Lisa; Kopulos, Renee; Boettger, Cynthia M; Carr, Thomas J; Keeler, Calvin L; Collisson, Ellen; Drechsler, Yvonne
2017-01-01
Discovering genetic biomarkers associated with disease resistance and enhanced immunity is critical to developing advanced strategies for controlling viral and bacterial infections in different species. Macrophages, important cells of innate immunity, are directly involved in cellular interactions with pathogens, the release of cytokines activating other immune cells and antigen presentation to cells of the adaptive immune response. IFNγ is a potent activator of macrophages and increased production has been associated with disease resistance in several species. This study characterizes the molecular basis for dramatically different nitric oxide production and immune function between the B2 and the B19 haplotype chicken macrophages. A large-scale RNA sequencing approach was employed to sequence the RNA of purified macrophages from each haplotype group (B2 vs. B19) during differentiation and after stimulation. Our results demonstrate that a large number of genes exhibit divergent expression between B2 and B19 haplotype cells both prior to and after stimulation. These differences in gene expression appear to be regulated by complex epigenetic mechanisms that need further investigation.
Global Neuromagnetic Cortical Fields Have Non-Zero Velocity
Alexander, David M.; Nikolaev, Andrey R.; Jurica, Peter; Zvyagintsev, Mikhail; Mathiak, Klaus; van Leeuwen, Cees
2016-01-01
Globally coherent patterns of phase can be obscured by analysis techniques that aggregate brain activity measures across-trials, whether prior to source localization or for estimating inter-areal coherence. We analyzed, at single-trial level, whole head MEG recorded during an observer-triggered apparent motion task. Episodes of globally coherent activity occurred in the delta, theta, alpha and beta bands of the signal in the form of large-scale waves, which propagated with a variety of velocities. Their mean speed at each frequency band was proportional to temporal frequency, giving a range of 0.06 to 4.0 m/s, from delta to beta. The wave peaks moved over the entire measurement array, during both ongoing activity and task-relevant intervals; direction of motion was more predictable during the latter. A large proportion of the cortical signal, measurable at the scalp, exists as large-scale coherent motion. We argue that the distribution of observable phase velocities in MEG is dominated by spatial filtering considerations in combination with group velocity of cortical activity. Traveling waves may index processes involved in global coordination of cortical activity. PMID:26953886
Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.
Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2016-10-01
Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
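The tomographic measurements described here are delay-and-sum beamformer output powers evaluated at candidate grid points as the vehicle moves. A generic frequency-domain sketch of that step (the array geometry, grid, and propagation-delay steering below are assumptions, not the authors' exact pipeline):

```python
import numpy as np

def das_power_map(X, freqs, mic_xy, grid_xy, c=343.0):
    """Frequency-domain delay-and-sum power at candidate source points.
    X: (n_mics, n_freqs) FFT snapshot; mic_xy: (n_mics, 2) mic positions;
    grid_xy: (n_pts, 2) candidate source locations; c: speed of sound."""
    d = np.linalg.norm(grid_xy[:, None, :] - mic_xy[None, :, :], axis=-1)
    power = np.zeros(len(grid_xy))
    for j, f in enumerate(freqs):
        steer = np.exp(-2j * np.pi * f * d / c)          # phase delays to each mic
        y = (steer.conj() * X[None, :, j]).sum(axis=1)   # align and sum channels
        power += np.abs(y) ** 2 / len(mic_xy) ** 2
    return power
```

Stacking such power maps from many drive-by positions gives the overdetermined system that the paper's sparse tomographic inversion then solves for source locations and levels.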
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain; run A) and fine (0.1°, Central Mediterranean domain; run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
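Nudging of this kind is usually implemented as Newtonian relaxation: the model tendency gains a term proportional to the mismatch between the coarse field and the remapped high-resolution field, restricted to the nudging region. A minimal sketch, assuming a fixed relaxation time tau (the actual BOLCHEM configuration may differ):

```python
import numpy as np

def nudge(c_coarse, c_fine_remapped, mask, dt, tau=3600.0):
    """One Newtonian-relaxation update of coarse-grid concentrations
    toward high-resolution fields remapped onto the coarse grid.
    mask: 1 inside the nudging region (e.g. the Po Valley), 0 outside.
    tau: assumed relaxation time in seconds (illustrative)."""
    tendency = (c_fine_remapped - c_coarse) / tau
    return c_coarse + dt * tendency * mask
```

Because the relaxation only constrains the masked region, any influence seen outside it (as reported above) is carried there by the model's own transport and chemistry.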
NASA Astrophysics Data System (ADS)
Kovanen, Dori J.; Slaymaker, Olav
2008-07-01
Active debris flow fans in the North Cascade Foothills of Washington State constitute a natural hazard of importance to land managers, private property owners and personal security. In the absence of measurements of the sediment fluxes involved in debris flow events, a morphological-evolutionary systems approach, emphasizing stratigraphy, dating, fan morphology and debris flow basin morphometry, was used. Using the stratigraphic framework and 47 radiocarbon dates, frequency of occurrence and relative magnitudes of debris flow events have been estimated for three spatial scales of debris flow systems: the within-fan site scale (84 observations); the fan meso-scale (six observations) and the lumped fan, regional or macro-scale (one fan average and adjacent lake sediments). In order to characterize the morphometric framework, plots of basin area v. fan area, basin area v. fan gradient and the Melton ruggedness number v. fan gradient for the 12 debris flow basins were compared with those documented for semi-arid and paraglacial fans. Basin area to fan area ratios were generally consistent with the estimated level of debris flow activity during the Holocene as reported below. Terrain analysis of three of the most active debris flow basins revealed the variety of modes of slope failure and sediment production in the region. Micro-scale debris flow event systems indicated a range of recurrence intervals for large debris flows from 106-3645 years. The spatial variation of these rates across the fans was generally consistent with previously mapped hazard zones. At the fan meso-scale, the range of recurrence intervals for large debris flows was 273-1566 years and at the regional scale, the estimated recurrence interval of large debris flows was 874 years (with undetermined error bands) during the past 7290 years. Dated lake sediments from the adjacent Lake Whatcom gave recurrence intervals for large sediment producing events ranging from 481-557 years over the past 3900 years and clearly discernible sedimentation events in the lacustrine sediments had a recurrence interval of 67-78 years over that same period.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
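The key trick behind an event-based implementation is that exponentially decaying synaptic traces can be advanced analytically between spikes, instead of being updated every time-step. A sketch of that idea only (the actual BCPNN state variables and cascaded traces are more involved):

```python
import numpy as np

def decay_trace(z, t_last, t_now, tau):
    """Advance an exponentially decaying trace in closed form from its
    value at the last update time to the current event time:
    z(t_now) = z(t_last) * exp(-(t_now - t_last) / tau).
    This lets an event-driven implementation touch a synapse only
    when a spike arrives."""
    return z * np.exp(-(t_now - t_last) / tau)

# On a presynaptic spike at time t (tau_z an assumed trace constant):
# z = decay_trace(z, t_last, t, tau_z) + 1.0; t_last = t
```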
Matsuura, Tomoaki; Tanimura, Naoki; Hosoda, Kazufumi; Yomo, Tetsuya; Shimizu, Yoshihiro
2017-01-01
To elucidate the dynamic features of a biologically relevant large-scale reaction network, we constructed a computational model of minimal protein synthesis consisting of 241 components and 968 reactions that synthesize the Met-Gly-Gly (MGG) peptide based on an Escherichia coli-based reconstituted in vitro protein synthesis system. We performed a simulation using parameters collected primarily from the literature and found that the rate of MGG peptide synthesis becomes nearly constant in minutes, thus achieving a steady state similar to experimental observations. In addition, concentration changes to 70% of the components, including intermediates, reached a plateau in a few minutes. However, the concentration change of each component exhibits several temporal plateaus, or a quasi-stationary state (QSS), before reaching the final plateau. To understand these complex dynamics, we focused on whether the components reached a QSS, mapped the arrangement of components in a QSS in the entire reaction network structure, and investigated time-dependent changes. We found that components in a QSS form clusters that grow over time but not in a linear fashion, and that this process involves the collapse and regrowth of clusters before the formation of a final large single cluster. These observations might commonly occur in other large-scale biological reaction networks. This developed analysis might be useful for understanding large-scale biological reactions by visualizing complex dynamics, thereby extracting the characteristics of the reaction network, including phase transitions. PMID:28167777
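The quasi-stationary-state notion can be operationalized directly from the simulated trajectories. One simple criterion, with thresholds that are illustrative choices rather than the paper's, flags times where the relative rate of change of a concentration falls below a tolerance:

```python
import numpy as np

def in_qss(conc, t, eps=1e-3):
    """Flag time points where a concentration trajectory is in a
    quasi-stationary state: |dC/dt| small relative to C itself.
    conc, t: 1-D arrays of concentrations and times; eps illustrative."""
    dcdt = np.gradient(conc, t)
    return np.abs(dcdt) <= eps * np.maximum(np.abs(conc), 1e-12)
```

Applying such a flag componentwise and tracking which flagged components are linked by shared reactions gives the growing QSS clusters described above.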
Evolution of neuronal signalling: transmitters and receptors.
Hoyle, Charles H V
2011-11-16
Evolution is a dynamic process during which the genome should not be regarded as a static entity. Molecular and morphological information yield insights into the evolution of species and their phylogenetic relationships, and molecular information in particular provides information into the evolution of signalling processes. Many signalling systems have their origin in primitive, even unicellular, organisms. Through time, and as organismal complexity increased, certain molecules were employed as intercellular signal molecules. In the autonomic nervous system the basic unit of chemical transmission is a ligand and its cognate receptor. The general mechanisms underlying evolution of signal molecules and their cognate receptors have their basis in the alteration of the genome. In the past this has occurred in large-scale events, represented by two or more doublings of the whole genome, or large segments of the genome, early in the deuterostome lineage, after the emergence of urochordates and cephalochordates, and before the emergence of vertebrates. These duplications were followed by extensive remodelling involving subsequent small-scale changes, ranging from point mutations to exon duplication. Concurrent with these processes was multiple gene loss so that the modern genome contains roughly the same number of genes as in early deuterostomes despite the large-scale genomic duplications. In this review, the principles that underlie evolution that have led to large and small families of autonomic neurotransmitters and their receptors are discussed, with emphasis on G protein-coupled receptors. Copyright © 2010 Elsevier B.V. All rights reserved.
White, Mark; Wells, John S G; Butterworth, Tony
2014-09-01
To examine the literature related to a large-scale quality improvement initiative, the 'Productive Ward: Releasing Time to Care', providing a bibliometric profile that tracks the level of interest and scale of roll-out and adoption, and discussing the implications for sustainability. Productive Ward: Releasing Time to Care (aka Productive Ward) is probably one of the most ambitious quality improvement efforts undertaken by the UK NHS. Politically and financially supported, the initiative was driven mainly by the NHS Institute for Innovation and Improvement. The NHS Institute closed in early 2013, leaving a void of resources, knowledge and expertise. UK roll-out of the initiative is well established and has arguably peaked; international interest in the initiative, however, continues to develop. A comprehensive literature review was undertaken to identify the literature related to the Productive Ward and its implementation (January 2006-June 2013). A bibliometric analysis examined the trends and identified and measured interest, spread and uptake. Overall distribution patterns identify a declining trend of interest, with reduced numbers of grey literature and evaluation publications; however, detailed examination of the data shows no reduction in peer-reviewed outputs. There is some evidence that international uptake of the initiative continues to generate publications and create interest. Sustaining this initiative in the UK will require re-energising, a new focus and financing. The transition period created by the closure of its creator may well contribute to further reduced levels of interest and publication outputs in the UK. However, international implementation, evaluation and associated publications could serve to attract professional/academic interest in this well-established, positively reported quality improvement initiative. This paper provides nurses and ward teams involved in quality improvement programmes with a detailed, current-state examination and analysis of the Productive Ward literature, highlighting the bibliometric patterns of this large-scale, international quality improvement programme. It serves to disseminate updated publication information to those in clinical practice who are involved in the Productive Ward or a similar quality improvement initiative. © 2014 John Wiley & Sons Ltd.
Flume experimentation and simulation of bedrock channel processes
NASA Astrophysics Data System (ADS)
Thompson, Douglas; Wohl, Ellen
Flume experiments can provide cost-effective, physically manageable miniature representations of complex bedrock channels. The inherent change in scale in such experiments requires a corresponding change in the scale of the forces represented in the flume system. Three modeling approaches have been developed that either ignore the scaling effects, utilize the change in scaled forces, or assume similarity of process between scales. An understanding of the nonlinear influence of a change in scale on all the forces involved is important for correctly analyzing model results. Similarly, proper design and operation of flume experiments requires knowledge of the fundamental components of flume systems. Entrance and exit regions of the flume are used to provide good experimental conditions in the measurement region, where data are collected. To ensure reproducibility, large-scale turbulence must be removed at the head of the flume and velocity profiles must become fully developed in the entrance region. Water-surface slope and flow-acceleration effects from the downstream water-depth control must also be isolated in the exit region. Statistical design and the development of representative channel substrate also influence model results in these systems. With proper experimental design, flumes may be used to investigate bedrock channel hydraulics, sediment-transport relations, and morphologic evolution. In particular, researchers have successfully used flume experiments to demonstrate the importance of turbulence and substrate characteristics in bedrock channel evolution. Turbulence often operates in a self-perpetuating fashion: it can erode bedrock walls even with clear water and can increase the mobility of sediment particles. Bedrock substrate influences channel evolution by offering varying resistance to erosion, controlling the location or type of incision, and modifying the local influence of turbulence. An increased usage of scaled flume models may help to clarify the remaining uncertainties involving turbulence, channel substrate and bedrock channel evolution.
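The force-rescaling issue is commonly handled with Froude similarity, under which velocities scale with the square root of the length ratio and discharges with its 2.5 power. A sketch of that arithmetic (Froude scaling is one standard choice, not necessarily the approach used in any particular study cited here):

```python
import numpy as np

def froude_scaled(length_ratio, U_proto, Q_proto):
    """Froude scaling for an undistorted hydraulic model:
    velocities scale as lambda**0.5 and discharges as lambda**2.5,
    where lambda = model/prototype length ratio."""
    lam = length_ratio
    return {"U_model": U_proto * np.sqrt(lam),
            "Q_model": Q_proto * lam ** 2.5}

# e.g. a 1:20 model of a reach with U = 2 m/s and Q = 100 m^3/s:
# froude_scaled(1/20, 2.0, 100.0)  ->  U ~ 0.45 m/s, Q ~ 0.056 m^3/s
```

Because grain-scale and viscous forces do not shrink by the same factors, exact similarity cannot hold for every force at once, which is precisely the nonlinearity the passage above warns about.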
Local-Scale Air Quality Modeling in Support of Human Health and Exposure Research (Invited)
NASA Astrophysics Data System (ADS)
Isakov, V.
2010-12-01
Spatially and temporally sparse information on air quality is a key concern for air-pollution-related environmental health studies. Monitor networks are sparse in both space and time, are costly to maintain, and are often designed purposely to avoid detecting highly localized sources. Recent studies have shown that more narrowly defining the geographic domain of the study populations and improving the measured/estimated ambient concentrations can lead to stronger associations between air pollution and hospital admissions and mortality records. Traditionally, ambient air quality measurements have been used as a primary input to support human health and exposure research. However, there is increasing evidence that the current ambient monitoring network is not capturing sharp gradients in exposure due to the presence of high concentration levels near, for example, major roadways. Many air pollutants exhibit large concentration gradients near large emitters such as major roadways, factories, and ports. To overcome these limitations, researchers are now beginning to use air quality models to support air pollution exposure and health studies. There are many advantages to using air quality models over traditional approaches based on existing ambient measurements alone. First, models can provide spatially and temporally resolved concentrations as direct input to exposure and health studies, thus better defining the concentration levels for the population in the geographic domain. Air quality models have a long history of use in air pollution regulation and are supported by regulatory agencies and a large user community. Models can also provide bidirectional linkages between emission sources and ambient concentrations, allowing exploration of various mitigation strategies to reduce exposure risk. In order to provide the best estimates of air concentrations to support human health and exposure studies, model estimates should account for local-scale features, regional-scale transport, and photochemical transformations. Since these needs are currently not met by a single model, hybrid air quality modeling has recently been developed to combine these capabilities. In this paper, we present the results of two studies in which we applied the hybrid modeling approach to provide spatial and temporal detail in air quality concentrations to support exposure and health studies: a) an urban-scale air quality accountability study involving near-source exposures to multiple ambient air pollutants, and b) an urban-scale epidemiological study involving human health data based on emergency department visits.
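A hybrid estimate typically adds a local dispersion term to a regional photochemical background, C_total = C_regional + sum over sources of C_local. As a hedged illustration of the local term only, a textbook steady-state Gaussian plume (not necessarily the dispersion model used in these studies; the sigma parameterization below is a crude stand-in for stability-class curves):

```python
import numpy as np

def plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration (g/m^3) for one point
    source of strength Q (g/s) in wind u (m/s), at downwind distance x,
    crosswind offset y, height z, with effective stack height H.
    sigma_y = a*x and sigma_z = b*x are illustrative simplifications."""
    sy, sz = a * x, b * x
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))  # ground reflection term
```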
Kinetic Simulations of the Interruption of Large-Amplitude Shear-Alfvén Waves in a High- β Plasma
Squire, J.; Kunz, M. W.; Quataert, E.; ...
2017-10-12
Using two-dimensional hybrid-kinetic simulations, we explore the nonlinear “interruption” of standing and traveling shear-Alfvén waves in collisionless plasmas. Interruption involves a self-generated pressure anisotropy removing the restoring force of a linearly polarized Alfvénic perturbation, and occurs for wave amplitudes δB⊥/B0 ≳ β^(-1/2) (where β is the ratio of thermal to magnetic pressure). We use highly elongated domains to obtain maximal scale separation between the wave and the ion gyroscale. For standing waves above the amplitude limit, we find that the large-scale magnetic field of the wave decays rapidly. The dynamics are strongly affected by the excitation of oblique firehose modes, which transition into long-lived parallel fluctuations at the ion gyroscale and cause significant particle scattering. Traveling waves are damped more slowly, but are also influenced by small-scale parallel fluctuations created by the decay of firehose modes. Our results demonstrate that collisionless plasmas cannot support linearly polarized Alfvén waves above δB⊥/B0 ~ β^(-1/2). They also provide a vivid illustration of two key aspects of low-collisionality plasma dynamics: (i) the importance of velocity-space instabilities in regulating plasma dynamics at high β, and (ii) how nonlinear collisionless processes can transfer mechanical energy directly from the largest scales into thermal energy and microscale fluctuations, without the need for a scale-by-scale turbulent cascade.
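The quoted amplitude limit is simple enough to evaluate directly; a toy check of the scaling (order-of-magnitude only, since the precise prefactor depends on the setup):

```python
def interruption_limit(beta):
    """Approximate amplitude limit for linearly polarized shear-Alfven
    waves in a collisionless plasma: delta B_perp / B_0 ~ beta**(-1/2),
    the scaling quoted in the abstract (prefactor of order unity)."""
    return beta ** -0.5

# beta = 10  ->  delta B_perp / B_0 ~ 0.32; larger-amplitude waves
# self-generate enough pressure anisotropy to be interrupted.
```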
Mining large heterogeneous data sets in drug discovery.
Wild, David J
2009-10-01
Increasingly, effective drug discovery involves the searching and data mining of large volumes of information from many sources covering the domains of chemistry, biology and pharmacology, amongst others. This has led to a proliferation of databases and data sources relevant to drug discovery. This paper provides a review of the publicly available large-scale databases relevant to drug discovery, describes the kinds of data mining approaches that can be applied to them, and discusses recent work in integrative data mining that looks for associations spanning multiple sources, including the use of Semantic Web techniques. The future of mining large data sets for drug discovery requires intelligent, semantic aggregation of information from all of the data sources described in this review, along with the application of advanced methods such as intelligent agents and inference engines in client applications.
Shimizu, Seiji; Kobayashi, Taku; Tomioka, Hideo; Ohtsu, Kensei; Matsui, Toshiyuki; Hibi, Toshifumi
2017-03-01
Mesenteric phlebosclerosis (MP) is a rare disease characterized by venous calcification extending from the colonic wall to the mesentery, with chronic ischemic changes from venous return impairment in the intestine. It is an idiopathic disease, but increasing attention has been paid to the potential involvement of herbal medicine, or Kampo, in its etiology. Until now, only scattered case reports have been published, and no large-scale studies have been conducted to unravel the clinical characteristics and etiology of the disease. A nationwide survey was conducted using questionnaires to assess possible etiology (particularly the involvement of herbal medicine), clinical manifestations, disease course, and treatment of MP. Data from 222 patients were collected. Among the 169 patients (76.1%) whose history of herbal medicine use was obtained, 147 (87.0%) had used herbal medicines. The use of herbal medicines containing sanshishi (gardenia fruit, Gardenia jasminoides Ellis) was reported in 119 of 147 patients (81.0%). Therefore, the use of herbal medicine containing sanshishi was confirmed in 70.4% of the 169 patients whose history of herbal medicine use was obtained. The duration of sanshishi use ranged from 3 to 51 years (mean 13.6 years). Patients who discontinued sanshishi showed a better outcome compared with those who continued it. The use of herbal medicine containing sanshishi is associated with the etiology of MP. Although it may not be the causative factor, gastroenterologists need to be aware of the potential risk of herbal medicine containing sanshishi for the development of MP.
Cho, Sang Soo; Yoon, Eun Jin; Bang, Sung Ae; Park, Hyun Soo; Kim, Yu Kyeong; Strafella, Antonio P; Kim, Sang Eun
2012-09-01
To better understand the functional role of the cerebellum within the large-scale cerebellocerebral neural network, we investigated the changes in neuronal activity elicited by cerebellar repetitive transcranial magnetic stimulation (rTMS) using (18)F-fluorodeoxyglucose (FDG) and positron emission tomography (PET). Twelve right-handed healthy volunteers were studied with brain FDG PET under two conditions: active rTMS at 1 Hz frequency over the left lateral cerebellum and sham stimulation. Compared to the sham condition, active rTMS induced decreased glucose metabolism in the stimulated left lateral cerebellum, in areas known to be involved in voluntary motor movement (supplementary motor area and posterior parietal cortex) in the right cerebral hemisphere, and in areas known to be involved in cognition and emotion (orbitofrontal, medial frontal, and anterior cingulate gyri) in the left cerebral hemisphere. Increased metabolism was found in cognition- and language-related brain regions such as the left inferior frontal gyrus including Broca's area, bilateral superior temporal gyri including Wernicke's area, and bilateral middle temporal gyri. Left cerebellar rTMS also led to increased metabolism in the left cerebellar dentate nucleus and pons. These results demonstrate that rTMS over the left lateral cerebellum modulates not only the excitability of the target region but also that of remote, but interconnected, motor-, language-, cognition-, and emotion-related cerebral regions. They provide further evidence that the cerebellum is involved not only in motor-related functions but also in higher cognitive abilities and emotion through the large-scale cerebellocerebral neural network.
Neurolinguistic Approach to Natural Language Processing with Applications to Medical Text Analysis
Matykiewicz, Paweł; Pestian, John
2008-01-01
Understanding written or spoken language presumably involves spreading neural activation in the brain. This process may be approximated by spreading activation in semantic networks, providing enhanced representations that involve concepts not found directly in the text. Approximation of this process is of great practical and theoretical interest. Although activations of neural circuits involved in the representation of words change rapidly in time, snapshots of these activations spreading through associative networks may be captured in a vector model. Concepts of a similar type activate larger clusters of neurons, priming areas in the left and right hemisphere. Analysis of recent brain imaging experiments shows the importance of right-hemisphere non-verbal clusterization. Medical ontologies enable development of a large-scale practical algorithm to re-create pathways of spreading neural activations. First, concepts of a specific semantic type are identified in the text; then all related concepts of the same type are added to the text, providing expanded representations. To avoid rapid growth of the extended feature space after each step, only the most useful features that increase document clusterization are retained. Short hospital discharge summaries are used to illustrate how this process works on real, very noisy data. Expanded texts show significantly improved clustering and may be classified with much higher accuracy. Although better approximations to the spreading of neural activations may be devised, the practical approach presented in this paper helps to discover pathways used by the brain to process specific concepts, and may be used in large-scale applications. PMID:18614334
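A toy version of the spreading-activation step is easy to state: each iteration propagates a decayed fraction of every node's activation to its neighbors in a weighted concept graph. A hedged sketch (the graph structure, decay, and step count are illustrative stand-ins for the ontology-driven expansion described above):

```python
def spread_activation(graph, seeds, decay=0.5, steps=3):
    """Toy spreading activation over a weighted concept graph of the
    form {node: [(neighbor, weight), ...]}.  seeds: {node: activation}.
    Returns the activation of every reached concept after `steps`
    propagation rounds."""
    act = dict(seeds)
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nbr, w in graph.get(node, []):
                nxt[nbr] = nxt.get(nbr, 0.0) + decay * w * a
        act = nxt
    return act

# e.g. expanding a clinical term through related concepts:
# g = {"diabetes": [("insulin", 0.9), ("hyperglycemia", 0.8)]}
# spread_activation(g, {"diabetes": 1.0})
```

Concepts whose activation exceeds a threshold would then be appended to the document's feature vector before clustering or classification.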
Frontotemporal neural systems supporting semantic processing in Alzheimer's disease.
Peelle, Jonathan E; Powers, John; Cook, Philip A; Smith, Edward E; Grossman, Murray
2014-03-01
We hypothesized that semantic memory for object concepts involves both representations of visual feature knowledge in modality-specific association cortex and heteromodal regions that are important for integrating and organizing this semantic knowledge so that it can be used in a flexible, contextually appropriate manner. We examined this hypothesis in an fMRI study of mild Alzheimer's disease (AD). Participants were presented with pairs of printed words and asked whether the words matched on a given visual-perceptual feature (e.g., guitar, violin: SHAPE). The stimuli probed natural kinds and manufactured objects, and the judgments involved shape or color. We found activation of bilateral ventral temporal cortex and left dorsolateral prefrontal cortex during semantic judgments, with AD patients showing less activation of these regions than healthy seniors. Moreover, AD patients showed less ventral temporal activation than did healthy seniors for manufactured objects, but not for natural kinds. We also used diffusion-weighted MRI of white matter to examine fractional anisotropy (FA). Patients with AD showed significantly reduced FA in the superior longitudinal fasciculus and inferior frontal-occipital fasciculus, which carry projections linking temporal and frontal regions of this semantic network. Our results are consistent with the hypothesis that semantic memory is supported in part by a large-scale neural network involving modality-specific association cortex, heteromodal association cortex, and projections between these regions. The semantic deficit in AD thus arises from gray matter disease that affects the representation of feature knowledge and processing its content, as well as white matter disease that interrupts the integrated functioning of this large-scale network.
Primordial Magnetic Field Effects on the CMB and Large-Scale Structure
Yamazaki, Dai G.; Ichiki, Kiyotomo; Kajino, Toshitaka; ...
2010-01-01
Magnetic fields are everywhere in nature, and they play an important role in every astronomical environment which involves the formation of plasma and currents. It is natural therefore to suppose that magnetic fields could be present in the turbulent high-temperature environment of the big bang. Such a primordial magnetic field (PMF) would be expected to manifest itself in the cosmic microwave background (CMB) temperature and polarization anisotropies, and also in the formation of large-scale structure. In this paper, we summarize the theoretical framework which we have developed to calculate the PMF power spectrum to high precision. Using this formulation, we summarize calculations of the effects of a PMF which take accurate quantitative account of the time evolution of the cutoff scale. We review the constructed numerical program, which is without approximation, and an improvement over the approach used in a number of previous works for studying the effect of the PMF on the cosmological perturbations. We demonstrate how the PMF is an important cosmological physical process on small scales. We also summarize the current constraints on the PMF amplitude B_λ and the power spectral index n_B which have been deduced from the available CMB observational data by using our computational framework.
Leaf-to-branch scaling of C-gain in field-grown almond trees under different soil moisture regimes.
Egea, Gregorio; González-Real, María M; Martin-Gorriz, Bernardo; Baille, Alain
2014-06-01
Branch/tree-level measurements of carbon (C) acquisition provide an integration of the physical and biological processes driving the C gain of all individual leaves. Most research dealing with the interacting effects of high-irradiance environments and soil-induced water stress on the C-gain of fruit tree species has focused on leaf-level measurements. The C-gain of both sun-exposed leaves and branches of adult almond trees growing in a semi-arid climate was investigated to determine the respective costs of the structural and biochemical/physiological protective mechanisms involved in the behaviour at branch scale. Measurements were performed on well-watered (fully irrigated, FI) and drought-stressed (deficit irrigated, DI) trees. Leaf-to-branch scaling for net CO2 assimilation was quantified by a global scaling factor (fg), defined as the product of two specific scaling factors: (i) a structural scaling factor (fs), determined under well-watered conditions, mainly involving leaf mutual shading; and (ii) a water stress scaling factor (fws,b) involving the limitations in C-acquisition due to soil water deficit. The contribution of structural mechanisms to limiting branch net C-gain was high (mean fs ∼0.33) and close to the projected-to-total leaf area ratio of almond branches (ε = 0.31), while the contribution of water stress mechanisms was moderate (mean fws,b ∼0.85), yielding an fg ranging between 0.25 and 0.33, with slightly higher values for FI trees than for DI trees. These results suggest that the almond tree (a drought-tolerant species) has acquired a defensive (survival) strategy mainly based on a specific branch architectural design. This strategy allows the potential for C-gain to be preserved at branch scale under a large range of soil water deficits. In other words, almond tree branches exhibit an architecture that is suboptimal for C-acquisition under well-watered conditions, but remarkably efficient at counteracting the impact of DI and drought events. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
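Read as a formula, the leaf-to-branch scaling reduces to a product of the two factors; plugging in the mean values quoted above lands inside the reported 0.25-0.33 range:

```latex
% Global scaling factor as the product of the structural and
% water-stress factors, using the abstract's mean values.
f_g = f_s \times f_{ws,b} \approx 0.33 \times 0.85 \approx 0.28
```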
Late time cosmological phase transitions 1: Particle physics models and cosmic evolution
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Hill, Christopher T.; Watkins, Richard
1991-01-01
We describe a natural particle physics basis for late-time phase transitions in the universe. Such a transition can seed the formation of large-scale structure while leaving a minimal imprint upon the microwave background anisotropy. The key ingredient is an ultra-light pseudo-Nambu-Goldstone boson with an astronomically large (O(kpc-Mpc)) Compton wavelength. We analyze the cosmological signatures of and constraints upon a wide class of scenarios which do not involve domain walls. In addition to seeding structure, coherent ultra-light bosons may also provide unclustered dark matter in a spatially flat universe, Ω_φ ≈ 1.
NASA Out-of-Autoclave Process Technology Development
NASA Technical Reports Server (NTRS)
Johnston, Norman, J.; Clinton, R. G., Jr.; McMahon, William M.
2000-01-01
Polymer matrix composites (PMCs) will play a significant role in the construction of large reusable launch vehicles (RLVs), mankind's future major access to low earth orbit and the international space station. PMCs are lightweight and offer attractive economies of scale and automated fabrication methodology. Fabrication of large RLV structures will require non-autoclave methods which have yet to be matured, including (1) thermoplastic forming: heated-head robotic tape placement, sheet extrusion, pultrusion, molding and forming; (2) electron beam curing: bulk and ply-by-ply automated placement; and (3) RTM and VARTM. Research sponsored by NASA in industrial and NASA laboratories on automated placement techniques involving the first two categories will be presented.
NASA Astrophysics Data System (ADS)
Wang, Meng; Shi, Yang; Noelle, Daniel J.; Le, Anh V.; Qiao, Yu
2017-10-01
In a lithium-ion battery (LIB), mechanical abuse often leads to internal short circuits (ISC) that trigger thermal runaway. We investigated a thermal-runaway mitigation (TRM) technique using a modified current collector. By generating surface grooves on the current collector, the area of electrodes directly involved in ISC could be largely reduced, which decreased the ISC current. The TRM mechanism took effect immediately after the LIB was damaged. The testing data indicate that the groove width is a critical factor. With optimized groove width, this technique may enable robust and multifunctional design of LIB cells for large-scale energy-storage units.
Internal constitution and evolution of the moon.
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Toksoz, M. N.
1973-01-01
The composition, structure and evolution of the moon's interior are narrowly constrained by a large assortment of physical and chemical data. Models of the thermal evolution of the moon that fit the chronology of igneous activity on the lunar surface, the stress history of the lunar lithosphere implied by the presence of mascons, and the surface concentrations of radioactive elements, involve extensive differentiation early in lunar history. This differentiation may be the result of rapid accretion and large-scale melting or of primary chemical layering during accretion; differences in present-day temperatures for these two possibilities are significant only in the inner 1000 km of the moon and may not be resolvable.
Proceedings of the DICE THROW Symposium 21-23 June 1977. Volume 1
1977-07-01
different scaled ANFO events to insure yield scalability. Phase 1 of the program consisted of a series of one-pound events to examine cratering and...characterization of a 500-ton-equivalent event. A large number of agencies were involved in different facets of the development program. Probably most...charge geometry observed in the 1000-pound series, supported the observations from the Phase 1 program. Differences were observed in the fireball
Merging Surface Reconstructions of Terrestrial and Airborne LIDAR Range Data
2009-05-19
Mangan and R. Whitaker. Partitioning 3D surface meshes using watershed segmentation. IEEE Trans. on Visualization and Computer Graphics, 5(4), pp...Jain, and A. Zakhor. Data Processing Algorithms for Generating Textured 3D Building Facade Meshes from Laser Scans and Camera Images. International...acquired set of overlapping range images into a single mesh [2,9,10]. However, due to the volume of data involved in large scale urban modeling, data
Large-scale isolation and fractionation of organs of Drosophila melanogaster larvae.
Zweidler, A; Cohen, L H
1971-10-01
Methods for the mass isolation of diverse organs from small animals are described. They involve novel devices: a mechanical dissecting system, a centrifugal agitator for the separation of fibrillar from globular particles, and a settling chamber for the fractionation at unit gravity of particles with sedimentation velocities above the useful range for centrifugation. The application of these methods to the isolation of polytene and nonpolytene nuclei from Drosophila melanogaster larvae is described.
Background | Office of Cancer Clinical Proteomics Research
The term "proteomics" refers to a large-scale comprehensive study of a specific proteome resulting from its genome, including abundances of proteins, their variations and modifications, and interacting partners and networks in order to understand cellular processes involved. Similarly, “Cancer proteomics” refers to comprehensive analyses of proteins and their derivatives translated from a specific cancer genome using a human biospecimen or a preclinical model (e.g., cultured cell or animal model).
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
Lee waves: Benign and malignant
NASA Technical Reports Server (NTRS)
Wurtele, M. G.; Datta, A.; Sharman, R. D.
1993-01-01
The flow of an incompressible fluid over an obstacle will produce an oscillation in which buoyancy is the restoring force, called a gravity wave. For disturbances of this scale, the atmosphere may be treated as dynamically incompressible, even though there exists a mean static upward density gradient. Even in the linear approximation - i.e., for small disturbances - this model explains a great many of the flow phenomena observed in the lee of mountains. However, nonlinearities do arise, importantly, in three ways: (1) through amplification due to the decrease of mean density with height; (2) through the large (scaled) size of the obstacle, such as a mountain range; and (3) from dynamically singular levels in the fluid field. These effects produce a complicated array of phenomena - large departure of the streamlines from their equilibrium levels, high winds, generation of small scales, turbulence, etc. - that present hazards to aircraft and to lee surface areas. The nonlinear disturbances also interact with the larger-scale flow in such a manner as to impact global weather forecasts and the climatological momentum balance. If there is no dynamic barrier, these waves can penetrate vertically into the middle atmosphere (30-100 km), where recent observations show them to be of a length scale that must involve the Coriolis force in any modeling. At these altitudes, the amplitude of the waves is very large, and the phenomena associated with these wave dynamics are being studied with a view to their potential impact on high performance aircraft, including the projected National Aerospace Plane (NASP). The presentation shows the results of analysis and of state-of-the-art numerical simulations, validated where possible by observational data, and illustrated with photographs from nature.
Spin determination at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yavin, Itay
The quantum field theory describing the Electroweak sector demands some new physics at the TeV scale in order to unitarize the scattering of longitudinal W bosons. If this new physics takes the form of a scalar Higgs boson then it is hard to understand the huge hierarchy of scales between the Electroweak scale ∼ TeV and the Planck scale ∼ 10^19 GeV. This is known as the Naturalness problem. Normally, in order to solve this problem, new particles, in addition to the Higgs boson, are required to be present in the spectrum below a few TeV. If such particles are indeed discovered at the Large Hadron Collider it will become important to determine their spin. Several classes of models for physics beyond the Electroweak scale exist. Determining the spin of any such newly discovered particle could prove to be the only means of distinguishing between these different models. In the first part of this thesis, we present a thorough discussion regarding such a measurement. We survey the different potentially useful channels for spin determination, and a detailed analysis of the most promising channel is performed. The Littlest Higgs model offers a way to solve the Hierarchy problem by introducing heavy partners to Standard Model particles with the same spin and quantum numbers. However, this model is only good up to ∼ 10 TeV. In the second part of this thesis we present an extension of this model into a strongly coupled theory above ∼ 10 TeV. We use the celebrated AdS/CFT correspondence to calculate properties of the low-energy physics in terms of high-energy parameters. We comment on some of the tensions inherent to such a construction involving a large-N CFT (or equivalently, an AdS space).
Gut Microbiota Dynamics during Dietary Shift in Eastern African Cichlid Fishes
Baldo, Laura; Riera, Joan Lluís; Tooming-Klunderud, Ave; Albà, M. Mar; Salzburger, Walter
2015-01-01
The gut microbiota structure reflects both a host phylogenetic history and a signature of adaptation to the host ecological, mainly trophic niches. African cichlid fishes, with their array of closely related species that underwent a rapid dietary niche radiation, offer a particularly interesting system to explore the relative contribution of these two factors in nature. Here we surveyed the host intra- and interspecific natural variation of the gut microbiota of five cichlid species from the monophyletic tribe Perissodini of Lake Tanganyika, whose members transitioned from being zooplanktivorous to feeding primarily on fish scales. The outgroup riverine species Astatotilapia burtoni, largely omnivorous, was also included in the study. Fusobacteria, Firmicutes and Proteobacteria represented the dominant components in the gut microbiota of all 30 specimens analysed according to two distinct 16S rRNA markers. All members of the Perissodini tribe showed a homogenous pattern of microbial alpha and beta diversities, with no significant qualitative differences, despite changes in diet. The recent diet shift between zooplankton- and scale-eaters is reflected simply in a significant enrichment of Clostridium taxa in scale-eaters, where these taxa might be involved in scale metabolism. Comparison with the omnivorous species A. burtoni suggests that, with increased host phylogenetic distance and/or increasing herbivory, the gut microbiota begins differentiating also at the qualitative level. The cichlids show presence of a large conserved core of taxa and a small set of core OTUs (average 13–15%), remarkably stable also in captivity, and putatively favoured by both restricted microbial transmission among related hosts (putatively enhanced by mouthbrooding behavior) and common host constraints. This study sets the basis for a future large-scale investigation of the gut microbiota of cichlids and its adaptation in the process of the host adaptive radiation. PMID:25978452
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
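The MIP machinery common to this family of methods, binary knockout variables coupled to flux bounds through big-M constraints, can be sketched on a toy single-level problem. This is only an illustration of the encoding; SimOptStrain and BiMOMA are bi-level and genome-scale, and the network, coefficients, and the PuLP modeler used below are all assumptions for the sketch:

```python
# Toy knockout-selection MIP; not the paper's bi-level formulation.
import pulp

rxns = ["r1", "r2", "r3"]
vmax = {"r1": 10.0, "r2": 8.0, "r3": 5.0}
gain = {"r1": 0.2, "r2": 0.5, "r3": 0.9}   # product yield per unit flux (made up)

prob = pulp.LpProblem("toy_knockout_design", pulp.LpMaximize)
v = {r: pulp.LpVariable(f"v_{r}", lowBound=0) for r in rxns}
y = {r: pulp.LpVariable(f"y_{r}", cat="Binary") for r in rxns}  # 1 = reaction kept

prob += pulp.lpSum(gain[r] * v[r] for r in rxns)        # maximize product flux
prob += pulp.lpSum(v[r] for r in rxns) <= 12.0          # shared substrate budget
for r in rxns:
    prob += v[r] <= vmax[r] * y[r]                      # flux only if reaction kept
prob += pulp.lpSum(y[r] for r in rxns) == 2             # exactly one knockout

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: (pulp.value(v[r]), int(pulp.value(y[r]))) for r in rxns})
```

Under the shared budget, the solver knocks out the low-yield reaction r1 and routes the remaining capacity through r2 and r3; the real methods embed a second (inner) optimization for the cell's own objective.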
Horgan, John; Shortland, Neil; Abbasciano, Suzzette; Walsh, Shaun
2016-09-01
Involvement in terrorism has traditionally been discussed in relatively simplistic ways with little effort spent on developing a deeper understanding of what involvement actually entails, and how it differs from person to person. In this paper, we present the results of a three-year project focused on 183 individuals associated with the global jihadist movement who were convicted in the United States, for terrorist offenses, between 1995 and 2012. These data were developed by a large-scale, open-source data collection activity that involved a coding dictionary of more than 120 variables. We identify and explore the diversity of behaviors that constitute involvement in terrorism. We also compare lone actors and those who acted as part of a group, finding that lone actors differed from group-based actors in key demographic attributes and were more likely to be involved in attack execution behaviors. Implications for counterterrorism are then discussed. © 2016 American Academy of Forensic Sciences.
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
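A hedged sketch of the claimed two-stage pipeline, first a filter for erroneous data, then an event detector; the column name, plausibility bounds and jump threshold below are all invented for illustration:

```python
# Illustrative two-stage pipeline sketch (filter, then event detector).
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop physically implausible frequency samples (erroneous data)."""
    return df[(df["freq_hz"] > 59.0) & (df["freq_hz"] < 61.0)]

def detect_events(df: pd.DataFrame, jump: float = 0.05) -> pd.DataFrame:
    """Flag samples where frequency jumps sharply between readings."""
    return df[df["freq_hz"].diff().abs() > jump]

grid = pd.DataFrame({"freq_hz": [60.00, 60.01, 0.0, 59.99, 59.90, 60.00]})
events = detect_events(clean(grid))   # the 0.0 sample is filtered, not flagged
print(events)
```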
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
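The cooperation mechanism can be caricatured with a deliberately simplified toy: several search "threads" improve independently in rounds and periodically restart from the best solution found so far. This only sketches the information-sharing idea, not eSS/CeSS themselves; the objective function and every parameter below are stand-ins:

```python
# Toy cooperative search; real CeSS runs enhanced Scatter Search in parallel.
import random

def rosenbrock(x):  # classic test objective standing in for a calibration cost
    return sum(100 * (x[i+1] - x[i]**2)**2 + (1 - x[i])**2 for i in range(len(x)-1))

def local_step(x, scale=0.1):
    return [xi + random.gauss(0, scale) for xi in x]

threads = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(5)]
best = min(threads, key=rosenbrock)

for _ in range(20):
    for i, x in enumerate(threads):
        cand = local_step(x)
        if rosenbrock(cand) < rosenbrock(x):   # each thread improves on its own
            threads[i] = cand
    round_best = min(threads, key=rosenbrock)
    if rosenbrock(round_best) < rosenbrock(best):
        best = round_best
    threads = [local_step(best) for _ in threads]  # cooperation: share the best point

print(round(rosenbrock(best), 3))
```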
Deliano, Matthias; Scheich, Henning; Ohl, Frank W
2009-12-16
Several studies have shown that animals can learn to make specific use of intracortical microstimulation (ICMS) of sensory cortex within behavioral tasks. Here, we investigate how the focal, artificial activation by ICMS leads to a meaningful, behaviorally interpretable signal. In natural learning, this involves large-scale activity patterns in widespread brain-networks. We therefore trained gerbils to discriminate closely neighboring ICMS sites within primary auditory cortex producing evoked responses largely overlapping in space. In parallel, during training, we recorded electrocorticograms (ECoGs) at high spatial resolution. Applying a multivariate classification procedure, we identified late spatial patterns that emerged with discrimination learning from the ongoing poststimulus ECoG. These patterns contained information about the preceding conditioned stimulus, and were associated with a subsequent correct behavioral response by the animal. Thereby, relevant pattern information was mainly carried by neuron populations outside the range of the lateral spatial spread of ICMS-evoked cortical activation (approximately 1.2 mm). This demonstrates that the stimulated cortical area not only encoded information about the stimulation sites by its focal, stimulus-driven activation, but also provided meaningful signals in its ongoing activity related to the interpretation of ICMS learned by the animal. This involved the stimulated area as a whole, and apparently required large-scale integration in the brain. However, ICMS locally interfered with the ongoing cortical dynamics by suppressing pattern formation near the stimulation sites. The interaction between ICMS and ongoing cortical activity has several implications for the design of ICMS protocols and cortical neuroprostheses, since the meaningful interpretation of ICMS depends on this interaction.
NASA Astrophysics Data System (ADS)
Fourel, Loïc; Limare, Angela; Jaupart, Claude; Surducan, Emanoil; Farnetani, Cinzia G.; Kaminski, Edouard C.; Neamtu, Camelia; Surducan, Vasile
2017-08-01
Convective motions in silicate planets are largely driven by internal heat sources and secular cooling. The exact amount and distribution of heat sources in the Earth are poorly constrained, and the latter is likely to change with time due to mixing and to the deformation of boundaries that separate different reservoirs. To improve our understanding of planetary-scale convection in these conditions, we have designed a new laboratory setup allowing a large range of heat source distributions. We illustrate the potential of our new technique with a study of an initially stratified fluid involving two layers with different physical properties and internal heat production rates. A modified microwave oven is used to generate a uniform radiation propagating through the fluids. Experimental fluids are solutions of hydroxyethyl cellulose and salt in water, such that salt increases both the density and the volumetric heating rate. We determine temperature and composition fields in 3D with non-invasive techniques. Two fluorescent dyes are used to determine temperature. A Nd:YAG planar laser beam excites fluorescence, and an optical system, involving a beam splitter and a set of colour filters, captures the fluorescence intensity distribution on two separate spectral bands. The ratio between the two intensities provides an instantaneous determination of temperature with an uncertainty of 5% (typically 1 K). We quantify mixing processes by precisely tracking the interfaces separating the two fluids. These novel techniques allow new insights into the generation, morphology and evolution of large-scale heterogeneities in the Earth's lower mantle.
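The ratiometric temperature measurement described here can be sketched in a few lines, assuming a linear calibration between the two-band intensity ratio and temperature; the calibration constants and intensity fields below are invented for illustration:

```python
# Sketch of two-dye ratiometric thermometry; constants are hypothetical.
import numpy as np

# Assumed linear calibration from known-temperature baths: ratio = a + b * T.
a, b = 1.80, -0.02   # offset and per-degree sensitivity (made up)

def temperature(i_band1: np.ndarray, i_band2: np.ndarray) -> np.ndarray:
    """Convert the two spectral-band intensity fields to a temperature field."""
    ratio = i_band1 / i_band2          # ratio cancels laser-sheet inhomogeneity
    return (ratio - a) / b

I1 = np.array([[1.20, 1.18], [1.22, 1.16]])
I2 = np.array([[1.00, 1.00], [1.00, 1.00]])
print(temperature(I1, I2))             # temperatures relative to the calibration
```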
Cossette, Sylvie; Cara, Chantal; Ricard, Nicole; Pepin, Jacinthe
2005-08-01
While there is a large body of literature regarding caring in nursing, and some measurement tools addressing the concept have been developed, limitations of existing instruments constrain theory-driven research on nurse-patient interactions. The purpose of this paper is to describe the development and initial psychometric evaluation of the Caring Nurse-Patient Interactions Scale in a sample of 332 nurses and nursing students. The tool is intended to facilitate research on the links between caring and patient outcomes. A content validity approach involving 13 expert nurses resulted in a 70-item tool sub-divided into 10 nursing carative factors. Alpha coefficients for the sub-scales varied from .73 to .91, and inter-correlations between sub-scales ranged from .53 to .89. Pearson correlation coefficients ranged from -.02 to .32 between the sub-scales and social desirability, suggesting low to moderate bias. Results of the contrasted-group approach partially supported the hypotheses, although all differences were in the expected direction. Results suggest that the scale has strong potential for use in research, clinical and educational settings.
NASA Astrophysics Data System (ADS)
Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Bale, S. D.; Carter, T. A.; Crocker, N.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.; Chen, Y.; Cutler, R.; Fox, W. R., II; Heitzenroeder, P.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Yamada, M.; Yoo, J.
2015-12-01
The FLARE device (flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton to study magnetic reconnection in regimes directly relevant to space, solar and astrophysical plasmas. The existing small-scale experiments have focused on the single X-line reconnection process either with small effective sizes or at low Lundquist numbers, both of which are typically very large in natural plasmas. The configuration of the FLARE device is designed to provide experimental access to the new regimes involving multiple X-lines, as guided by a reconnection "phase diagram" [Ji & Daughton, PoP (2011)]. Most of the major components of the FLARE device have been designed and are under construction. The device will be assembled and installed in 2016, followed by commissioning and operation in 2017. The planned research on FLARE as a user facility will be discussed on topics including the multiple-scale nature of magnetic reconnection from global fluid scales to ion and electron kinetic scales. Results from scoping simulations based on particle and fluid codes and possible comparative research with space measurements will be presented.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
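The core idea, correcting a small-scale prediction with a handful of large-scale runs, can be illustrated with a one-parameter conjugate normal update. This only sketches the Bayesian step, not the paper's spline/bootstrap machinery, and every number below is invented:

```python
# Conjugate normal update of a scale-up prediction with large-scale runs.
import numpy as np

small_scale_pred = 85.0        # small-scale surface prediction, e.g. hardness (N)
prior_var = 4.0                # assumed uncertainty of the scale-up prior
large_runs = np.array([80.5, 81.2, 80.9])  # three large-scale confirmation runs
obs_var = 1.0                  # assumed measurement variance

# Posterior for the true large-scale mean, prior N(small_scale_pred, prior_var).
n = len(large_runs)
post_var = 1 / (1 / prior_var + n / obs_var)
post_mean = post_var * (small_scale_pred / prior_var + large_runs.sum() / obs_var)
print(post_mean, post_var)     # corrected value is pulled toward the large-scale data
```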
Hudson, Judith N; Farmer, Elizabeth A; Weston, Kathryn M; Bushnell, John A
2015-01-16
Particularly when undertaken on a large scale, implementing innovation in higher education poses many challenges. Sustaining the innovation requires early adoption of a coherent implementation strategy. Using an example from clinical education, this article describes a process used to implement a large-scale innovation with the intent of achieving sustainability. Desire to improve the effectiveness of undergraduate medical education has led to growing support for a longitudinal integrated clerkship (LIC) model. This involves a move away from the traditional clerkship of 'block rotations' with frequent changes in disciplines, to a focus upon clerkships with longer duration and opportunity for students to build sustained relationships with supervisors, mentors, colleagues and patients. A growing number of medical schools have adopted the LIC model for a small percentage of their students. At a time when increasing medical school numbers and class sizes are leading to competition for clinical supervisors, it is, however, a daunting challenge to provide a longitudinal clerkship for an entire medical school class. This challenge is presented to illustrate the strategy used to implement sustainable large-scale innovation. A strategy to implement and build a sustainable longitudinal integrated community-based clerkship experience for all students was derived from a framework arising from Roberto and Levesque's research in business. The framework's four core processes: chartering, learning, mobilising and realigning, provided guidance in preparing and rolling out the 'whole of class' innovation. Roberto and Levesque's framework proved useful for identifying the foundations of the implementation strategy, with special emphasis on the relationship building required to implement such an ambitious initiative. Although this was innovation in a new School, it required change within the school, the wider university and the health community. Challenges encountered included some resistance to moving away from traditional hospital-centred education, initial student concern, resource limitations, workforce shortage and potential burnout of the innovators. Large-scale innovations in medical education may productively draw upon research from other disciplines for guidance on how to lay the foundations for successfully achieving sustainability.
Bolland, Daniel J; Wood, Andrew L; Corcoran, Anne E
2009-01-01
V(D)J recombination in lymphocytes is the cutting and pasting together of antigen receptor genes in cis to generate the enormous variety of coding sequences required to produce diverse antigen receptor proteins. It underpins the adaptive immune response, which must potentially combat millions of different foreign antigens. Most antigen receptor loci have evolved to be extremely large and contain multiple individual V, D and J genes. The immunoglobulin heavy chain (Igh) and immunoglobulin kappa light chain (Igk) loci are the largest multigene loci in the mammalian genome and V(D)J recombination is one of the most complicated genetic processes in the nucleus. The challenge for the appropriate lymphocyte is one of macro-management: to make all of the antigen receptor genes in a particular locus available for recombination at the appropriate developmental time-point. Conversely, these large loci must be kept closed in lymphocytes in which they do not normally recombine, to guard against genomic instability generated by the DNA double-strand breaks inherent to the V(D)J recombination process. To manage all of these demanding criteria, V(D)J recombination is regulated at numerous levels. It is restricted to lymphocytes since the Rag genes which control the DNA double-strand break step of recombination are only expressed in these cells. Within the lymphocyte lineage, immunoglobulin recombination is restricted to B-lymphocytes and TCR recombination to T-lymphocytes by regulation of locus accessibility, which occurs at multiple levels. Accessibility of recombination signal sequences (RSSs) flanking individual V, D and J genes at the nucleosomal level is the key micro-management mechanism, which is discussed in greater detail in other chapters. This chapter will explore how the antigen receptor loci are regulated as a whole, focussing on the Igh locus as a paradigm for the mechanisms involved. Numerous recent studies have begun to unravel the complex and complementary processes involved in this large-scale locus organisation. We will examine the structure of the Igh locus and the large-scale and higher-order chromatin remodelling processes associated with V(D)J recombination, at the level of the locus itself, its conformational changes and its dynamic localisation within the nucleus.
Challenges in enzymatic route of mannitol production.
Bhatt, Sheelendra Mangal; Mohan, Anand; Srivastava, Suresh Kumar
2013-01-01
Mannitol is an important biochemical often used as a medicine and in the food sector, yet its biotechnological production is not preferred in industry for large-scale production, which may be due to the multistep mechanism involved in hydrogenation and reduction. This paper is a comparative review covering the chemical and biotechnological approaches existing today for mannitol production at industrial scale. Biotechnological routes are suitable for adaptation at the industrial level for mannitol production; the relevant concerns are discussed in detail, namely, raw materials, a broad range of enzymes with high activity at elevated temperature suitable for use in reactors, cofactor limitation, reduced by-product formation, end-product inhibition, and reduced utilization of mannitol, for enhancing the yield with maximum volumetric productivity.
NASA/FAA general aviation crash dynamics program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.; Carden, H. D.
1981-01-01
The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.
The Role of Endocytosis during Morphogenetic Signaling
Gonzalez-Gaitan, Marcos; Jülicher, Frank
2014-01-01
Morphogens are signaling molecules that are secreted by a localized source and spread in a target tissue where they are involved in the regulation of growth and patterning. Both the activity of morphogenetic signaling and the kinetics of ligand spreading in a tissue depend on endocytosis and intracellular trafficking. Here, we review quantitative approaches to study how large-scale morphogen profiles and signals emerge in a tissue from cellular trafficking processes and endocytic pathways. Starting from the kinetics of endosomal networks, we discuss the role of cellular trafficking and receptor dynamics in the formation of morphogen gradients. These morphogen gradients scale during growth, which implies that overall tissue size influences cellular trafficking kinetics. Finally, we discuss how such morphogen profiles can be used to control tissue growth. We emphasize the role of theory in efforts to bridge between scales. PMID:24984777
NASA Technical Reports Server (NTRS)
Lin, P.; Pratt, D. T.
1987-01-01
A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
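Curl's coalescence/dispersion model itself is simple to state: at a given mixing frequency, random particle pairs are drawn and their scalar values are replaced by the pair mean, so the scalar mean is conserved while its variance decays. A minimal sketch of the classic (full-mixing) variant, with arbitrary particle count, frequency and step count:

```python
# Curl's coalescence/dispersion model on a notional particle ensemble.
import random

particles = [0.0] * 500 + [1.0] * 500   # bimodal initial scalar pdf (unmixed)
omega_dt = 0.1                          # mixing events per particle per step

for _ in range(200):
    n_pairs = int(omega_dt * len(particles) / 2)
    for _ in range(n_pairs):
        i, j = random.sample(range(len(particles)), 2)
        m = 0.5 * (particles[i] + particles[j])
        particles[i] = particles[j] = m   # coalesce to the mean, then disperse

mean = sum(particles) / len(particles)
var = sum((p - mean) ** 2 for p in particles) / len(particles)
print(round(mean, 3), round(var, 4))    # mean conserved, variance (unmixedness) decays
```

In the paper's approach, the mixing frequency driving such events is not a free constant but is estimated from the Schmidt number, Reynolds number and local vorticity.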
CGDV: a webtool for circular visualization of genomics and transcriptomics data.
Jha, Vineet; Singh, Gulzar; Kumar, Shiva; Sonawane, Amol; Jere, Abhay; Anamika, Krishanpal
2017-10-24
Interpretation of large-scale data is very challenging, and there is currently a scarcity of web tools that support automated visualization of a variety of high-throughput genomics and transcriptomics data for a wide variety of model organisms along with user-defined karyotypes. A circular plot provides holistic visualization of high-throughput large-scale data, but it is very complex and challenging to generate, as most of the available tools need informatics expertise to install and run them. We have developed CGDV (Circos for Genomics and Transcriptomics Data Visualization), a webtool based on Circos, for seamless and automated visualization of a variety of large-scale genomics and transcriptomics data. CGDV takes the output of analyzed genomics or transcriptomics data in different formats, such as vcf, bed, xls, tab-delimited matrix text files, raw CNVnator output and raw gene fusion output, to plot a circular view of the sample data. CGDV takes care of generating the intermediate files required for Circos. CGDV is freely available at https://cgdv-upload.persistent.co.in/cgdv/. The circular plot for each data type is tailored to give the best biological insights into the data. The inter-relationships between data points, homologous sequences, genes involved in fusion events, differential expression patterns, sequencing depth, types and size of variations, and enrichment of DNA binding proteins can be seen using CGDV. CGDV thus helps biologists and bioinformaticians to visualize a variety of genomics and transcriptomics data seamlessly.
Environmental impacts of large-scale CSP plants in northwestern China.
Wu, Zhiyong; Hou, Anping; Chang, Chun; Huang, Xiang; Shi, Duoqi; Wang, Zhifeng
2014-01-01
Several concentrated solar power demonstration plants are being constructed, and a few commercial plants have been announced in northwestern China. However, the mutual impacts between concentrated solar power plants and their surrounding environments have not yet been addressed comprehensively in the literature by the parties involved in these projects. In China, these projects are especially important, as an increasing amount of low-carbon electricity needs to be generated in order to maintain the current economic growth while simultaneously lessening pollution. In this study, the authors assess the potential environmental impacts of large-scale concentrated solar power plants. Specifically, the water use intensity, soil erosion and soil temperature are quantitatively examined. It was found that some of the impacts are favorable, some are negative in relation to traditional power generation techniques, and some need further research before they can be reasonably appraised. In quantitative terms, concentrated solar power plants consume about 4000 L MW⁻¹ h⁻¹ of water if wet cooling technology is used, and the collectors lead to soil temperature changes of between 0.5 and 4 °C; however, soil erosion is dramatically alleviated. The results of this study are helpful to decision-makers in concentrated solar power site selection and regional planning. Some conclusions of this study are also valid for large-scale photovoltaic plants.
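As a quick sanity check on what such a figure implies, the reported consumption scales linearly with generation; a back-of-the-envelope sketch in which the plant size and capacity factor are invented for illustration:

```python
# Annual water demand implied by ~4000 L per MWh (wet cooling), for a
# hypothetical 100 MW plant at an assumed 25% capacity factor.
water_per_mwh_l = 4000
capacity_mw = 100
capacity_factor = 0.25
mwh_per_year = capacity_mw * capacity_factor * 8760   # hours per year
print(f"{water_per_mwh_l * mwh_per_year / 1e9:.2f} GL/year")  # ≈ 0.88 GL/year
```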
Culture and cognition in health systems change.
Evans, Jenna M; Baker, G Ross; Berta, Whitney; Barnsley, Jan
2015-01-01
Large-scale change involves modifying not only the structures and functions of multiple organizations, but also the mindsets and behaviours of diverse stakeholders. This paper focuses on the latter: the informal, less visible, and often neglected psychological and social factors implicated in change efforts. The purpose of this paper is to differentiate between the concepts of organizational culture and mental models, to argue for the value of applying a shared mental models (SMM) framework to large-scale change, and to suggest directions for future research. The authors provide an overview of SMM theory and use it to explore the dynamic relationship between culture and cognition. The contributions and limitations of the theory to change efforts are also discussed. Culture and cognition are complementary perspectives, providing insight into two different levels of the change process. SMM theory draws attention to important questions that add value to existing perspectives on large-scale change. The authors outline these questions for future research and argue that research and practice in this domain may be best served by focusing less on the potentially narrow goal of "achieving consensus" and more on identifying, understanding, and managing cognitive convergences and divergences as part of broader research and change management programmes. Drawing from both cultural and cognitive paradigms can provide researchers with a more complete picture of the processes by which coordinated action are achieved in complex change initiatives in the healthcare domain.
NASA Technical Reports Server (NTRS)
Valinia, Azita; Moe, Rud; Seery, Bernard D.; Mankins, John C.
2013-01-01
We present a concept for an ISS-based optical system assembly demonstration designed to advance technologies related to future large in-space optical facilities deployment, including space solar power collectors and large-aperture astronomy telescopes. The large solar power collector problem is not unlike the large astronomical telescope problem, but at least conceptually it should be easier in principle, given the tolerances involved. We strive in this application to leverage heavily the work done on the NASA Optical Testbed Integration on ISS Experiment (OpTIIX) effort to erect a 1.5 m imaging telescope on the International Space Station (ISS). Specifically, we examine a robotic assembly sequence for constructing a large (meter diameter) slightly aspheric or spherical primary reflector, comprised of hexagonal mirror segments affixed to a lightweight rigidizing backplane structure. This approach, together with a structured robot assembler, will be shown to be scalable to the area and areal densities required for large-scale solar concentrator arrays.
NASA Technical Reports Server (NTRS)
Basili, V. R.; Zelkowitz, M. V.
1978-01-01
In a brief evaluation of software-related considerations, it is found that suitable approaches for software development depend to a large degree on the characteristics of the particular project involved. An analysis is conducted of development problems in an environment in which ground support software is produced for spacecraft control. The amount of work involved is in the range from 6 to 10 man-years. Attention is given to a general project summary, a programmer/analyst survey, a component summary, a component status report, a resource summary, a change report, a computer program run analysis, aspects of data collection on a smaller scale, progress forecasting, problems of overhead, and error analysis.
Piton, Amélie; Redin, Claire; Mandel, Jean-Louis
2013-01-01
Because of the unbalanced sex ratio (1.3–1.4 to 1) observed in intellectual disability (ID) and the identification of large ID-affected families showing X-linked segregation, much attention has been focused on the genetics of X-linked ID (XLID). Mutations causing monogenic XLID have now been reported in over 100 genes, most of which are commonly included in XLID diagnostic gene panels. Nonetheless, the boundary between true mutations and rare non-disease-causing variants often remains elusive. The sequencing of a large number of control X chromosomes, required for avoiding false-positive results, was not systematically possible in the past. Such information is now available thanks to large-scale sequencing projects such as the National Heart, Lung, and Blood (NHLBI) Exome Sequencing Project, which provides variation information on 10,563 X chromosomes from the general population. We used this NHLBI cohort to systematically reassess the implication of 106 genes proposed to be involved in monogenic forms of XLID. We particularly question the implication in XLID of ten of them (AGTR2, MAGT1, ZNF674, SRPX2, ATP6AP2, ARHGEF6, NXF5, ZCCHC12, ZNF41, and ZNF81), in which truncating variants or previously published mutations are observed at a relatively high frequency within this cohort. We also highlight 15 other genes (CCDC22, CLIC2, CNKSR2, FRMPD4, HCFC1, IGBP1, KIAA2022, KLF8, MAOA, NAA10, NLGN3, RPL10, SHROOM4, ZDHHC15, and ZNF261) for which replication studies are warranted. We propose that similar reassessment of reported mutations (and genes) with the use of data from large-scale human exome sequencing would be relevant for a wide range of other genetic diseases. PMID:23871722
Murray, A.B.; Thieler, E.R.
2004-01-01
Recent observations of inner continental shelves in many regions show numerous collections of relatively coarse sediment, which extend kilometers in the cross-shore direction and are on the order of 100 m wide. These "rippled scour depressions" have been interpreted to indicate concentrated cross-shelf currents. However, recent observations strongly suggest that they are associated with sediment transport along-shore rather than cross-shore. A new hypothesis for the origin of these features involves the large wave-generated ripples that form in the coarse material. Wave motions interacting with these large roughness elements generate near-bed turbulence that is greatly enhanced relative to that in other areas. This enhances entrainment and inhibits settling of fine material in an area dominated by coarse sediment. The fine sediment is then carried by mean currents past the coarse accumulations, and deposited where the bed is finer. We hypothesize that these interactions constitute a feedback tending to produce accumulations of fine material separated by self-perpetuating patches of coarse sediments. As with many types of self-organized bedforms, small features would interact as they migrate, leading to a better-organized, larger-scale pattern. As an initial test of this hypothesis, we use a numerical model treating the transport of coarse and fine sediment fractions as functions of the local bed composition - a proxy for the presence of large roughness elements in coarse areas. Large-scale sorted patterns exhibiting the main characteristics of the natural features result robustly in the model, indicating that this new hypothesis offers a plausible explanation for the phenomena. © 2003 Elsevier Ltd. All rights reserved.
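The hypothesized feedback can be caricatured in a one-dimensional cellular sketch: fines are entrained more readily over coarse patches and settle preferentially where the bed is already fine. This is a qualitative toy under arbitrary parameters, not the authors' model:

```python
# Toy 1D sketch of the stated sorting feedback; parameters are arbitrary.
import random

N = 100
fine = [random.uniform(0.4, 0.6) for _ in range(N)]   # fine fraction per bed cell

for _ in range(20000):
    i = random.randrange(N)
    j = (i + 1) % N                              # mean alongshore current: i -> j
    entrain = 0.02 * fine[i] * (1.5 - fine[i])   # more pickup over a coarser bed
    deposit = entrain * (0.5 + fine[j])          # easier settling on a finer bed
    moved = min(deposit, fine[i])                # remainder stays in suspension
    fine[i] -= moved
    fine[j] = min(1.0, fine[j] + moved)

print("".join("F" if f > 0.5 else "C" for f in fine))  # crude fine/coarse pattern
```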
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
An Overview of NASA Efforts on Zero Boiloff Storage of Cryogenic Propellants
NASA Technical Reports Server (NTRS)
Hastings, Leon J.; Plachta, D. W.; Salerno, L.; Kittel, P.; Haynes, Davy (Technical Monitor)
2001-01-01
Future mission planning within NASA has increasingly motivated consideration of cryogenic propellant storage durations on the order of years as opposed to a few weeks or months. Furthermore, the advancement of cryocooler and passive insulation technologies in recent years has substantially improved the prospects for zero boiloff storage of cryogenics. Accordingly, a cooperative effort by NASA's Ames Research Center (ARC), Glenn Research Center (GRC), and Marshall Space Flight Center (MSFC) has been implemented to develop and demonstrate "zero boiloff" concepts for in-space storage of cryogenic propellants, particularly liquid hydrogen and oxygen. ARC is leading the development of flight-type cryocoolers, GRC the subsystem development and small scale testing, and MSFC the large scale and integrated system level testing. Thermal and fluid modeling involves a combined effort by the three Centers. Recent accomplishments include: 1) development of "zero boiloff" analytical modeling techniques for sizing the storage tankage, passive insulation, cryocooler, power source mass, and radiators; 2) an early subscale demonstration with liquid hydrogen; 3) procurement of a flight-type 10 watt, 95 K pulse tube cryocooler for liquid oxygen storage; and 4) assembly of a large-scale test article for an early demonstration of the integrated operation of passive insulation, destratification/pressure control, and cryocooler (commercial unit) subsystems to achieve zero boiloff storage of liquid hydrogen. Near term plans include the large-scale integrated system demonstration testing this summer, subsystem testing of the flight-type pulse-tube cryocooler with liquid nitrogen (oxygen simulant), and continued development of a flight-type liquid hydrogen pulse tube cryocooler.
The large-scale removal of mammalian invasive alien species in Northern Europe.
Robertson, Peter A; Adriaens, Tim; Lambin, Xavier; Mill, Aileen; Roy, Sugoto; Shuttleworth, Craig M; Sutton-Croft, Mike
2017-02-01
Numerous examples exist of successful mammalian invasive alien species (IAS) eradications from small islands (<10 km²), but few from more extensive areas. We review 15 large-scale removals (mean area 2627 km²) from Northern Europe since 1900, including edible dormouse, muskrat, coypu, Himalayan porcupine, Pallas' and grey squirrels and American mink, each primarily based on daily checking of static traps. Objectives included true eradication or complete removal to a buffer zone, as distinct from other programmes that involved local control to limit damage or spread. Twelve eradication/removal programmes (80%) were successful. Cost increased with and was best predicted by area, while the cost per unit area decreased; the number of individual animals removed did not add significantly to the model. Doubling the area controlled reduced cost per unit area by 10%, but there was no evidence that cost effectiveness had increased through time. Compared with small islands, larger-scale programmes followed similar patterns of effort in relation to area. However, they brought challenges when defining boundaries and consequent uncertainties around costs, the definition of their objectives, confirmation of success and different considerations for managing recolonisation. Novel technologies or increased use of volunteers may reduce costs. Rapid response to new incursions is recommended as best practice rather than large-scale control to reduce the environmental, financial and welfare costs. © 2016 Crown copyright. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
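The cost-area relationship reported above implies a power law, assuming cost per unit area scales as c(A) = c0·A^β; the snippet below derives the exponents from the stated 10% reduction per doubling.

```python
import math

# If cost per unit area c(A) = c0 * A**beta and doubling A cuts c by 10%,
# then 2**beta = 0.9. Total cost C(A) = c(A) * A then scales as A**(1 + beta).
beta = math.log(0.9, 2)          # per-area exponent, about -0.152
total_exponent = 1 + beta        # total-cost exponent, about 0.848

print(f"per-area exponent  : {beta:.3f}")
print(f"total-cost exponent: {total_exponent:.3f}")
# Total cost still rises with area, but sub-linearly, consistent with cost
# increasing with area while cost per km^2 declines.
```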
Activity-Based Introductory Physics Reform *
NASA Astrophysics Data System (ADS)
Thornton, Ronald
2004-05-01
Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to those of good traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). RealTime Physics promotes interaction among students in a laboratory setting and makes use of powerful real-time data logging tools to teach concepts as well as quantitative relationships. An active learning environment is often difficult to achieve in large lecture sessions, and Workshop Physics and Scale-Up largely eliminate lectures in favor of collaborative student activities. Peer Instruction, Just in Time Teaching, and Interactive Lecture Demonstrations (ILDs) make lectures more interactive in complementary ways. This presentation will introduce these reforms and use Interactive Lecture Demonstrations with the audience to illustrate the types of curricula and tools used in these approaches. ILDs make use of real experiments, real-time data logging tools and student interaction to create an active learning environment in large lecture classes. A short video of students involved in interactive lecture demonstrations will be shown. The results of research studies at various institutions to measure the effectiveness of these methods will be presented.
Reconnecting fragmented sturgeon populations in North American rivers
Jager, Yetta; Forsythe, Patrick S.; McLaughlin, Robert L.; ...
2016-02-24
The majority of large North American rivers are fragmented by dams that interrupt migrations of wide-ranging fishes like sturgeons. Reconnecting habitat is viewed as an important means of protecting sturgeon species in U.S. rivers because these species have lost between 5% and 60% of their historical ranges. Unfortunately, facilities designed to pass other fishes have rarely worked well for sturgeons. The most successful passage facilities were sized appropriately for sturgeons and accommodated bottom-oriented species. For upstream passage, facilities with large entrances, full-depth guidance systems, large lifts, or wide fishways without obstructions or tight turns worked well. However, facilitating upstream migration is only half the battle. Broader recovery for linked sturgeon populations requires safe round-trip passage involving multiple dams. The most successful downstream passage facilities included nature-like fishways, large canal bypasses, and bottom-draw sluice gates. We outline an adaptive approach to implementing passage that begins with temporary programs and structures and monitors success both at the scale of individual fish at individual dams and the scale of metapopulations in a river basin. The challenge will be to learn from past efforts and reconnect North American sturgeon populations in a way that promotes range expansion and facilitates population recovery.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
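The scaling step itself reduces to a weighted Beer-Lambert factor applied to the stored zero-absorption curve. The following sketch shows that step for a hypothetical two-layer model; the layer time fractions, optical properties, and stand-in reflectance curve are all assumed values, not those of the study.

```python
import numpy as np

# Hypothetical two-layer example of the weighted Beer-Lambert scaling step:
# a stored zero-absorption reflectance curve R0(t) is rescaled for new
# absorption coefficients without re-running the Monte Carlo simulation.
c = 3e10 / 1.4                             # photon speed in tissue, cm/s (n = 1.4 assumed)
t = np.linspace(0.01e-9, 2e-9, 200)        # time gate, s
R0 = t**-1.5 * np.exp(-0.05e-9 / t)        # stand-in zero-absorption curve

# Fraction of its path a detected photon spends in each layer, here taken
# from a closed-form average-path estimate (assumed values).
f1, f2 = 0.7, 0.3                  # layer 1 (superficial) vs layer 2 (deep)
mua1, mua2 = 0.1, 0.3              # absorption coefficients, 1/cm

# Weighted Beer-Lambert factor: total path length is c * t, split between layers.
R = R0 * np.exp(-(mua1 * f1 + mua2 * f2) * c * t)
print(R[:5] / R0[:5])              # attenuation applied to the stored curve
```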
Magnetic Reconnection and Particle Acceleration in the Solar Corona
NASA Astrophysics Data System (ADS)
Neukirch, Thomas
Reconnection plays a major role for the magnetic activity of the solar atmosphere, for example solar flares. An interesting open problem is how magnetic reconnection acts to redistribute the stored magnetic energy released during an eruption into other energy forms, e.g. generating bulk flows, plasma heating and non-thermal energetic particles. In particular, finding a theoretical explanation for the observed acceleration of a large number of charged particles to high energies during solar flares is presently one of the most challenging problems in solar physics. One difficulty is the vast difference between the microscopic (kinetic) and the macroscopic (MHD) scales involved. Whereas the phenomena observed to occur on large scales are reasonably well explained by the so-called standard model, this does not seem to be the case for the small-scale (kinetic) aspects of flares. Over the past years, observations, in particular by RHESSI, have provided evidence that a naive interpretation of the data in terms of the standard solar flare/thick target model is problematic. As a consequence, the role played by magnetic reconnection in the particle acceleration process during solar flares may have to be reconsidered.
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates on the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates were derived for all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
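The essence of the multilevel idea, freezing the slowly varying small-scale contributions over several time steps, can be shown on a toy spectral problem. The sketch below applies it to 1-D viscous Burgers rather than the paper's two-dimensional setting; the cutoff, refresh interval, and all numerical parameters are assumptions.

```python
import numpy as np

# Toy multilevel time stepping in the spirit of the nonlinear Galerkin idea:
# for 1-D viscous Burgers u_t + u u_x = nu u_xx (all values assumed), the
# small-scale modes (|k| > kc) and their nonlinear interactions vary slowly,
# so they are refreshed only every m steps instead of every step.
N, nu, dt, m = 256, 0.02, 5e-4, 10
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 1j * np.fft.fftfreq(N, d=1.0 / N)           # ik for spectral derivatives
small = np.abs(np.fft.fftfreq(N, d=1.0 / N)) > N // 8   # small-scale mask

u_hat = np.fft.fft(np.sin(x) + 0.1 * np.cos(3 * x))
nonlin_small = np.zeros(N, dtype=complex)       # frozen small-scale tendency

def nonlinear(u_hat):
    """Spectral form of -u u_x."""
    u = np.fft.ifft(u_hat).real
    return -np.fft.fft(u * np.fft.ifft(k * u_hat).real)

for step in range(4000):
    Nl = nonlinear(u_hat)
    if step % m == 0:
        nonlin_small = Nl * small               # refresh small scales rarely
    tendency = Nl * ~small + nonlin_small       # large scales updated every step
    u_hat = (u_hat + dt * tendency) / (1.0 - dt * nu * k**2)  # implicit diffusion

print(np.fft.ifft(u_hat).real[:5])
```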
RNA–protein binding kinetics in an automated microfluidic reactor
Ridgeway, William K.; Seitaridou, Effrosyni; Phillips, Rob; Williamson, James R.
2009-01-01
Microfluidic chips can automate biochemical assays on the nanoliter scale, which is of considerable utility for RNA–protein binding reactions that would otherwise require large quantities of proteins. Unfortunately, complex reactions involving multiple reactants cannot be prepared in current microfluidic mixer designs, nor is investigation of long-time scale reactions possible. Here, a microfluidic ‘Riboreactor’ has been designed and constructed to facilitate the study of kinetics of RNA–protein complex formation over long time scales. With computer automation, the reactor can prepare binding reactions from any combination of eight reagents, and is optimized to monitor long reaction times. By integrating a two-photon microscope into the microfluidic platform, 5-nl reactions can be observed for longer than 1000 s with single-molecule sensitivity and negligible photobleaching. Using the Riboreactor, RNA–protein binding reactions with a fragment of the bacterial 30S ribosome were prepared in a fully automated fashion and binding rates were consistent with rates obtained from conventional assays. The microfluidic chip successfully combines automation, low sample consumption, ultra-sensitive fluorescence detection and a high degree of reproducibility. The chip should be able to probe complex reaction networks describing the assembly of large multicomponent RNPs such as the ribosome. PMID:19759214
Extraction of drainage networks from large terrain datasets using high throughput computing
NASA Astrophysics Data System (ADS)
Gong, Jianya; Xie, Jibo
2009-02-01
Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
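The decompose-compute-merge pattern described here maps naturally onto a worker pool. The following schematic sketch uses Python's multiprocessing as a local stand-in for an HTC cluster; the function bodies and work-unit format are hypothetical placeholders, and a real implementation would operate on DEM tiles bounded by natural watershed divides.

```python
from multiprocessing import Pool

# Schematic decompose-compute-merge pipeline for drainage extraction
# (function and data names are hypothetical).
def extract_drainage(unit):
    """Extract the drainage network for one independent watershed unit."""
    dem_tile, watershed_id = unit
    # ... flow direction, flow accumulation, channel thresholding ...
    return {"watershed": watershed_id, "streams": ["segment-placeholder"]}

def merge(results):
    """Watershed boundaries make the units independent, so merging the
    per-unit networks is a simple union."""
    network = []
    for r in results:
        network.extend(r["streams"])
    return network

if __name__ == "__main__":
    units = [("tile%d" % i, i) for i in range(8)]   # stand-in work units
    with Pool(4) as pool:                            # HTC stand-in: local pool
        results = pool.map(extract_drainage, units)
    print(len(merge(results)), "stream segments")
```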
LARGE-SCALE ISOLATION AND FRACTIONATION OF ORGANS OF DROSOPHILA MELANOGASTER LARVAE
Zweidler, Alfred; Cohen, Leonard H.
1971-01-01
Methods for the mass isolation of diverse organs from small animals are described. They involve novel devices: a mechanical dissecting system, a centrifugal agitator for the separation of fibrillar from globular particles, and a settling chamber for the fractionation at unit gravity of particles with sedimentation velocities above the useful range for centrifugation. The application of these methods to the isolation of polytene and nonpolytene nuclei from Drosophila melanogaster larvae is described. PMID:5000070
Angular default mode network connectivity across working memory load.
Vatansever, D; Manktelow, A E; Sahakian, B J; Menon, D K; Stamatakis, E A
2017-01-01
Initially identified during no-task, baseline conditions, it has now been suggested that the default mode network (DMN) engages during a variety of working memory paradigms through its flexible interactions with other large-scale brain networks. Nevertheless, its contribution to whole-brain connectivity dynamics across increasing working memory load has not been explicitly assessed. The aim of our study was to determine which DMN hubs relate to working memory task performance during an fMRI-based n-back paradigm with parametric increases in difficulty. Using a voxel-wise metric, termed the intrinsic connectivity contrast (ICC), we found that the bilateral angular gyri (core DMN hubs) displayed the greatest change in global connectivity across three levels of n-back task load. Subsequent seed-based functional connectivity analysis revealed that the angular DMN regions robustly interact with other large-scale brain networks, suggesting a potential involvement in the global integration of information. Further support for this hypothesis comes from the significant correlations we found between angular gyri connectivity and reaction times to correct responses. The implication from our study is that the DMN is actively involved during the n-back task and thus plays an important role related to working memory, with its core angular regions contributing to the changes in global brain connectivity in response to increasing environmental demands. Hum Brain Mapp 38:41-52, 2017. © 2016 Wiley Periodicals, Inc.
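The ICC metric used here is, in essence, each voxel's mean squared correlation with every other voxel. A compact sketch on synthetic data (toy dimensions; real inputs would be masked 4-D fMRI time series):

```python
import numpy as np

# Sketch of the intrinsic connectivity contrast (ICC): for each voxel, the
# mean squared correlation of its time series with all other voxels.
rng = np.random.default_rng(1)
n_time, n_vox = 180, 500
ts = rng.standard_normal((n_time, n_vox))     # synthetic stand-in time series

# Standardize each voxel's time series so correlations reduce to dot products.
z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
corr = (z.T @ z) / n_time                     # voxel-by-voxel correlation matrix
np.fill_diagonal(corr, 0.0)                   # exclude self-correlation
icc = (corr ** 2).sum(axis=1) / (n_vox - 1)   # mean squared correlation per voxel

print(icc[:5])
```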
Dickman, Amy J.; Macdonald, Ewan A.; Macdonald, David W.
2011-01-01
One of the greatest challenges in biodiversity conservation today is how to facilitate protection of species that are highly valued at a global scale but have little or even negative value at a local scale. Imperiled species such as large predators can impose significant economic costs at a local level, often in poverty-stricken rural areas where households are least able to tolerate such costs, and impede efforts of local people, especially traditional pastoralists, to escape from poverty. Furthermore, the costs and benefits involved in predator conservation often include diverse dimensions, which are hard to quantify and nearly impossible to reconcile with one another. The best chance of effective conservation relies upon translating the global value of carnivores into tangible local benefits large enough to drive conservation “on the ground.” Although human–carnivore coexistence involves significant noneconomic values, providing financial incentives to those affected negatively by carnivore presence is a common strategy for encouraging such coexistence, and this can also have important benefits in terms of reducing poverty. Here, we provide a critical overview of such financial instruments, which we term “payments to encourage coexistence”; assess the pitfalls and potentials of these methods, particularly compensation and insurance, revenue-sharing, and conservation payments; and discuss how existing strategies of payment to encourage coexistence could be combined to facilitate carnivore conservation and alleviate local poverty. PMID:21873181
Losing protein in the brain: the case of progranulin.
Ghidoni, Roberta; Paterlini, Anna; Albertini, Valentina; Binetti, Giuliano; Benussi, Luisa
2012-10-02
It is well known that progranulin protein is involved in wound repair, inflammation, and tumor formation. The wedding between progranulin and brain was celebrated in 2006 with the involvement of the progranulin gene (GRN) in Frontotemporal lobar degeneration (FTLD), the most common form of early-onset dementia: to date, 75 mutations have been detected in FTLD patients as well as in patients with widely variable clinical phenotypes. All pathogenic GRN mutations identified thus far cause the disease through a uniform mechanism, i.e. loss of functional progranulin or haploinsufficiency. Studies on GRN knockout mice suggest that progranulin-related neurodegenerative diseases may result from lifetime depletion of neurotrophic support together with cumulative damage in association with dysregulated inflammation, thus highlighting possible new molecular targets for GRN-related FTLD treatment. Recently, measurement of plasma progranulin has been proposed as a useful tool for quick and inexpensive large-scale screening of affected and unaffected carriers of GRN mutations. Before it is systematically translated into clinical practice and, more importantly, included in diagnostic criteria for dementias, further standardization of the plasma progranulin test and harmonization of its use are required. Once a specific treatment becomes available for these pathologies, this test, being applicable on a large scale, will represent an important step towards personalized healthcare. This article is part of a Special Issue entitled: Brain Integration. Copyright © 2012 Elsevier B.V. All rights reserved.
Evaluating a collaborative IT based research and development project.
Khan, Zaheer; Ludlow, David; Caceres, Santiago
2013-10-01
In common with all projects, evaluating an Information Technology (IT) based research and development project is necessary in order to discover whether or not the outcomes of the project are successful. However, evaluating large-scale collaborative projects is especially difficult as: (i) stakeholders from different countries are involved who, almost inevitably, have diverse technological and/or application domain backgrounds and objectives; (ii) multiple and sometimes conflicting application-specific and user-defined requirements exist; and (iii) multiple and often conflicting technological research and development objectives are apparent. In this paper, we share our experiences from the large-scale integrated research project HUMBOLDT, which ran for 54 months and involved contributions from 27 partner organisations, plus 4 sub-contractors, from 14 different European countries. In the HUMBOLDT project, a specific evaluation methodology was defined and utilised for the user evaluation of the project outcomes. The user evaluation performed on the HUMBOLDT Framework and its associated nine application scenarios from various application domains not only resulted in an evaluation of the integrated project, but also revealed the benefits and disadvantages of the evaluation methodology. This paper presents the evaluation methodology, discusses in detail the process of applying it to the HUMBOLDT project and provides an in-depth analysis of the results, which can be usefully applied to other collaborative research projects in a variety of domains. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cognitive, affective, and conative theory of mind (ToM) in children with traumatic brain injury.
Dennis, Maureen; Simic, Nevena; Bigler, Erin D; Abildskov, Tracy; Agostino, Alba; Taylor, H Gerry; Rubin, Kenneth; Vannatta, Kathryn; Gerhardt, Cynthia A; Stancin, Terry; Yeates, Keith Owen
2013-07-01
We studied three forms of dyadic communication involving theory of mind (ToM) in 82 children with traumatic brain injury (TBI) and 61 children with orthopedic injury (OI): Cognitive (concerned with false belief), Affective (concerned with expressing socially deceptive facial expressions), and Conative (concerned with influencing another's thoughts or feelings). We analyzed the pattern of brain lesions in the TBI group and conducted voxel-based morphometry for all participants in five large-scale functional brain networks, and related lesion and volumetric data to ToM outcomes. Children with TBI exhibited difficulty with Cognitive, Affective, and Conative ToM. The perturbation threshold for Cognitive ToM is higher than that for Affective and Conative ToM, in that Severe TBI disturbs Cognitive ToM but even Mild-Moderate TBI disrupts Affective and Conative ToM. Childhood TBI was associated with damage to all five large-scale brain networks. Lesions in the Mirror Neuron Empathy network predicted lower Conative ToM involving ironic criticism and empathic praise. Conative ToM was significantly and positively related to the package of Default Mode, Central Executive, and Mirror Neuron Empathy networks and, more specifically, to two hubs of the Default Mode Network, the posterior cingulate/retrosplenial cortex and the hippocampal formation, including entorhinal cortex and parahippocampal cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
Larsen, David A; Winters, Anna; Cheelo, Sanford; Hamainza, Busiku; Kamuliwo, Mulakwa; Miller, John M; Bridges, Daniel J
2017-11-02
Malaria is a significant burden to health systems and is responsible for a large proportion of outpatient cases at health facilities in endemic regions. The scale-up of community management of malaria and reactive case detection likely affect both malaria cases and outpatient attendance at health facilities. Using health management information data from 2012 to 2013 this article examines health trends before and after the training of volunteer community health workers to test and treat malaria cases in Southern Province, Zambia. An estimated 50% increase in monthly reported malaria infections was found when community health workers were involved with malaria testing and treating in the community (incidence rate ratio 1.52, p < 0.001). Furthermore, an estimated 6% decrease in outpatient attendance at the health facility was found when community health workers were involved with malaria testing and treating in the community. These results suggest a large public health benefit to both community case management of malaria and reactive case detection. First, the capacity of the malaria surveillance system to identify malaria infections was increased by nearly one-third. Second, the outpatient attendance at health facilities was modestly decreased. Expanding the capacity of the malaria surveillance programme through systems such as community case management and reactive case detection is an important step toward malaria elimination.
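The reported incidence rate ratio is the kind of estimate a Poisson regression of monthly case counts on a community-health-worker indicator would produce. A sketch on synthetic facility-month data, with the true IRR set to 1.52 by construction (the data-generating assumptions are invented, not the study's):

```python
import numpy as np
import statsmodels.api as sm

# Sketch of an incidence-rate-ratio estimate: a Poisson regression of monthly
# facility-reported malaria counts on an indicator for active community
# health workers (synthetic data, assumed structure).
rng = np.random.default_rng(2)
n = 240                                   # facility-months
chw_active = rng.integers(0, 2, n)        # 1 = CHWs testing/treating nearby
baseline = 40                             # mean monthly cases without CHWs
cases = rng.poisson(baseline * np.exp(np.log(1.52) * chw_active))

X = sm.add_constant(chw_active.astype(float))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params[1]))              # estimated IRR, ~1.5 by construction
```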
Neural networks supporting audiovisual integration for speech: A large-scale lesion study.
Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius
2018-06-01
Auditory and visual speech information are often strongly integrated resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.
Addressing Criticisms of Large-Scale Marine Protected Areas.
O'Leary, Bethan C; Ban, Natalie C; Fernandez, Miriam; Friedlander, Alan M; García-Borboroglu, Pablo; Golbuu, Yimnang; Guidetti, Paolo; Harris, Jean M; Hawkins, Julie P; Langlois, Tim; McCauley, Douglas J; Pikitch, Ellen K; Richmond, Robert H; Roberts, Callum M
2018-05-01
Designated large-scale marine protected areas (LSMPAs, 100,000 or more square kilometers) constitute over two-thirds of the approximately 6.6% of the ocean and approximately 14.5% of the exclusive economic zones within marine protected areas. Although LSMPAs have received support among scientists and conservation bodies for wilderness protection, regional ecological connectivity, and improving resilience to climate change, there are also concerns. We identified 10 common criticisms of LSMPAs along three themes: (1) placement, governance, and management; (2) political expediency; and (3) social-ecological value and cost. Through critical evaluation of scientific evidence, we discuss the value, achievements, challenges, and potential of LSMPAs in these arenas. We conclude that although some criticisms are valid and need addressing, none pertain exclusively to LSMPAs, and many involve challenges ubiquitous in management. We argue that LSMPAs are an important component of a diversified management portfolio that tempers potential losses, hedges against uncertainty, and enhances the probability of achieving sustainably managed oceans.
Computational catalyst screening: Scaling, bond-order and catalysis
Abild-Pedersen, Frank
2015-10-01
Here, the design of new and better heterogeneous catalysts needed to accommodate the growing demand for energy from renewable sources is an important challenge for coming generations. Most surface-catalyzed processes involve a large number of complex reaction networks, and the energetics ultimately define the turn-over frequency and the selectivity of the process. In order not to get lost in the large quantities of data, simplification schemes that still contain the key elements of the reaction are required. Adsorption and transition state scaling relations constitute such a scheme that not only maps the reaction-relevant information in terms of a few parameters but also provides an efficient way of screening for new materials in a continuous multi-dimensional energy space. As with all relations they impose certain restrictions on what can be achieved, and in this paper I show why these limitations exist and how we can change the behavior through an energy-resolved approach that still maintains the screening capabilities needed in computational catalysis.
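A minimal screening loop built from the two relation types named in this abstract, a transition-state (BEP-type) scaling line plus a descriptor-dependent site coverage, reproduces the familiar volcano shape. All coefficients below are invented for illustration and are not fitted values from the paper:

```python
import numpy as np

# Illustrative catalyst screen: activation energies follow a linear
# transition-state (BEP-type) scaling with the descriptor, while strong
# binding progressively poisons surface sites. Coefficients are invented.
kB_T = 0.0592                                  # eV at ~687 K (assumed conditions)
dE_C = np.linspace(-8.0, -4.0, 81)             # descriptor: C binding energy, eV

Ea = np.clip(0.87 * dE_C + 6.0, 0.0, None)     # BEP: strong binding -> low barrier
theta_free = 1.0 / (1.0 + np.exp(-(dE_C + 6.5) / kB_T))  # strong binding -> few free sites
rate = theta_free * np.exp(-Ea / kB_T)         # two competing effects -> volcano

print(f"optimal descriptor value: {dE_C[np.argmax(rate)]:.2f} eV")
```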
Willems, Sara M; Wright, Daniel J; Day, Felix R; Trajanoska, Katerina; Joshi, Peter K; Morris, John A; Matteini, Amy M; Garton, Fleur C; Grarup, Niels; Oskolkov, Nikolay; Thalamuthu, Anbupalam; Mangino, Massimo; Liu, Jun; Demirkan, Ayse; Lek, Monkol; Xu, Liwen; Wang, Guan; Oldmeadow, Christopher; Gaulton, Kyle J; Lotta, Luca A; Miyamoto-Mikami, Eri; Rivas, Manuel A; White, Tom; Loh, Po-Ru; Aadahl, Mette; Amin, Najaf; Attia, John R; Austin, Krista; Benyamin, Beben; Brage, Søren; Cheng, Yu-Ching; Cięszczyk, Paweł; Derave, Wim; Eriksson, Karl-Fredrik; Eynon, Nir; Linneberg, Allan; Lucia, Alejandro; Massidda, Myosotis; Mitchell, Braxton D; Miyachi, Motohiko; Murakami, Haruka; Padmanabhan, Sandosh; Pandey, Ashutosh; Papadimitriou, Ioannis; Rajpal, Deepak K; Sale, Craig; Schnurr, Theresia M; Sessa, Francesco; Shrine, Nick; Tobin, Martin D; Varley, Ian; Wain, Louise V; Wray, Naomi R; Lindgren, Cecilia M; MacArthur, Daniel G; Waterworth, Dawn M; McCarthy, Mark I; Pedersen, Oluf; Khaw, Kay-Tee; Kiel, Douglas P; Pitsiladis, Yannis; Fuku, Noriyuki; Franks, Paul W; North, Kathryn N; van Duijn, Cornelia M; Mather, Karen A; Hansen, Torben; Hansson, Ola; Spector, Tim; Murabito, Joanne M; Richards, J Brent; Rivadeneira, Fernando; Langenberg, Claudia; Perry, John R B; Wareham, Nick J; Scott, Robert A
2017-07-12
Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of 195,180 individuals and identify 16 loci associated with grip strength (P < 5 × 10⁻⁸) in combined analyses. A number of these loci contain genes implicated in structure and function of skeletal muscle fibres (ACTG1), neuronal maintenance and signal transduction (PEX14, TGFA, SYT1), or monogenic syndromes with involvement of psychomotor impairment (PEX14, LRPPRC and KANSL1). Mendelian randomization analyses are consistent with a causal effect of higher genetically predicted grip strength on lower fracture risk. In conclusion, our findings provide new biological insight into the mechanistic underpinnings of grip strength and the causal role of muscular strength in age-related morbidities and mortality.
A visualization tool to support decision making in environmental and biological planning
Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.
2014-01-01
Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.
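The side-by-side panel feature can be approximated with generic NetCDF-aware tools. The sketch below plots two restoration-plan outputs on a shared color scale using xarray and matplotlib; the data are synthetic stand-ins, and real inputs would be CF/NetCDF files opened with xr.open_dataset.

```python
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

# Sketch of side-by-side comparison panels: two model outputs for competing
# restoration plans mapped on a shared color scale (synthetic stand-in data).
rng = np.random.default_rng(3)
coords = {"y": np.arange(50), "x": np.arange(40)}
plan_a = xr.DataArray(rng.random((50, 40)), coords=coords,
                      dims=("y", "x"), name="suitability")
plan_b = xr.DataArray(rng.random((50, 40)) * 0.8, coords=coords,
                      dims=("y", "x"), name="suitability")

vmin = min(float(plan_a.min()), float(plan_b.min()))
vmax = max(float(plan_a.max()), float(plan_b.max()))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
plan_a.plot(ax=ax1, vmin=vmin, vmax=vmax)       # same scale on both panels,
plan_b.plot(ax=ax2, vmin=vmin, vmax=vmax)       # so differences are comparable
ax1.set_title("Plan A")
ax2.set_title("Plan B")
plt.tight_layout()
plt.show()
```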
Testing the robustness of Citizen Science projects: Evaluating the results of pilot project COMBER
Faulwetter, Sarah; Dailianis, Thanos; Smith, Vincent Stuart; Koulouri, Panagiota; Dounas, Costas; Arvanitidis, Christos
2016-01-01
Background: Citizen Science (CS) is a term covering a wide range of approaches and scopes involving many different fields of science. The number of relevant projects globally has increased significantly in recent years. Large-scale ecological questions can be answered only through extended observation networks, and CS projects can support this effort. Although the need for such projects is apparent, an important part of the scientific community casts doubt on the reliability of CS data sets. New information: The pilot CS project COMBER was created to provide evidence addressing this question for coastal marine biodiversity monitoring. The results of the current analysis show that a carefully designed CS project with clear hypotheses, wide participation and data set validation can be a valuable tool for detecting large-scale, long-term changes in marine biodiversity patterns, and therefore for relevant management and conservation issues. PMID:28174507
Transitional behavior of different energy protons based on Van Allen Probes observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Chao; Bortnik, Jacob; Chen, Lunjin
Understanding the dynamical behavior of ~1 eV to 50 keV ions and identifying the energies at which the morphologies transit are important in that they involve the relative intensities and distributions of the large-scale electric and magnetic fields, the outflow, and recombination rates. However, there were only a few direct observational investigations of the transition in drift behaviors of different energy ions before the Van Allen Probes era. In this paper, we statistically analyze ~1 eV to 50 keV hydrogen (H+) differential flux distributions near the geomagnetic equator by using Van Allen Probes observations to investigate the H+ dynamics under the regulation of large-scale electric and magnetic fields. Our survey clearly indicates three types of H+ behaviors within different energy ranges, which is consistent with previous theory predictions. Finally, using simple electric and magnetic field models in UBK coordinates, we have further constrained the source regions of different energy ions and their drift directions.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background: The behaviors of cells in metazoans are context dependent, so large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to solve the heavy time consumption issue in simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time steps are a practical solution in a cellular automata environment. PMID:15222901
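The asynchronous adaptive scheme can be sketched with an event queue: each cell advances on its own step size, chosen from its current rate of change, so quiescent cells are visited rarely. The per-cell dynamics below (logistic growth with neighbor coupling on a ring) and all parameters are stand-ins, not the paper's biological model.

```python
import heapq

# Sketch of asynchronous adaptive time stepping for an ODE-per-cell model:
# a priority queue orders cells by their next scheduled update time, and each
# cell's step size adapts to its current rate of change.
N, T_END = 100, 10.0
state = [0.01 * (i % 7 + 1) for i in range(N)]
queue = [(0.0, i) for i in range(N)]
heapq.heapify(queue)
last_update = [0.0] * N

def rate(i):
    """Stand-in per-cell ODE: logistic growth plus neighbor coupling on a ring."""
    neighbors = 0.5 * (state[(i - 1) % N] + state[(i + 1) % N])
    return state[i] * (1.0 - state[i]) + 0.1 * (neighbors - state[i])

while queue:
    t, i = heapq.heappop(queue)
    if t > T_END:
        break
    state[i] += (t - last_update[i]) * rate(i)   # forward Euler over the cell's own step
    last_update[i] = t
    # Adaptive step: fast-changing cells get small steps, quiescent cells large ones.
    dt_next = min(0.5, 0.05 / (abs(rate(i)) + 1e-6))
    heapq.heappush(queue, (t + dt_next, i))

print(sum(state) / N)
```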